
BW - Prepared



Here are some questions and answers.

1) Name the two tables that provide detailed information about a DataSource.
2) How and when can you control whether a repeat delta is requested?
3) How can you improve the performance of a query?
4) How do you prevent duplicate records at the data target level?
5) What is a virtual cube, and what is its significance?
6) What are the different methods of creating a generic DataSource?
7) How do you connect a new data target to an existing data flow?
8) What is partitioning?
9) Explain SAP batch processing.
10) How do you improve InfoCube design performance?
11) Is there any difference between a repair run and a repair request? If yes, explain in detail.
12) What is the difference between a process chain and an InfoPackage group?
13) What is the difference between partitioning and aggregates?

Answers

Q3) Query performance can be improved by building aggregates that contain all the characteristics and key figures used in the query.

Q5) Virtual cube: an InfoProvider with transaction data that is not stored in the object itself but is read directly for analysis and reporting purposes. The relevant data can come from the BI system or from other SAP or non-SAP systems. VirtualProviders only allow read access to data.

Q6) Methods of creating a generic DataSource with transaction RSO2:
a) Extraction from a database table or view
b) Extraction from an SAP Query (InfoSet)
c) Extraction by function module

Q1) Important BW DataSource-relevant tables:
ROOSOURCE: header table for SAP BW OLTP sources
RODELTAM: BW delta process
ROOSFIELD: DataSource fields
ROOSGEN: generated objects for an OLTP source (last changed date, changed by, etc.)

Q8) This presumably means table partitioning. You use partitioning to improve performance. You can only partition on 0CALMONTH or 0FISCPER.
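As a rough illustration of how these tables are used, here is a minimal ABAP sketch of looking up a DataSource's delta method in ROOSOURCE; the DataSource name is illustrative, and the assumption is that the delta process is carried in the table's DELTA field.

* Look up the delta method of a DataSource in ROOSOURCE
* (DataSource name is illustrative).
DATA: ls_roosource TYPE roosource.

SELECT SINGLE * FROM roosource
  INTO ls_roosource
  WHERE oltpsource = '2LIS_02_HDR'
    AND objvers    = 'A'.
IF sy-subrc = 0.
  WRITE: / 'Delta method:', ls_roosource-delta.
ENDIF.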


Here are some more real-time questions and answers.

Q) Under which menu path is the Test Workbench to be found, including in earlier releases?
A) The menu path is: Tools → ABAP Workbench → Test → Test Workbench.

Q) I want to delete a BEx query that is in the production system through a request. Is anyone aware how?
A) Have you tried transaction RSZDELETE?

Q) What errors occur while monitoring process chains?
A) Errors can occur during data loading. Apart from those, you add many process types to a process chain. For example, after loading data into an InfoCube you roll up data into aggregates; this roll-up is a process type placed after the process type that loads data into the cube, and it might fail. Another example: after you load data into an ODS, you activate the ODS data (another process type); this might also fail.

Q) How do you simulate and debug an update?
A) In the Monitor, choose Details (Header/Status/Details) → Under Processing (data packet): Everything OK → context menu of Data Package 1 (1 record): Everything OK → Simulate update. Here we can debug update rules or transfer rules. Alternatively, use SM50 → Program/Mode → Program → Debugging to debug the work process.

Q) PSA cleansing.
A) You know how to edit the PSA. I don't think you can delete single records; you have to delete the entire PSA data for a request.

Q) Can we make a DataSource support delta?
A) If this is a custom (user-defined) DataSource, you can make it delta-enabled. While creating the DataSource in RSO2, after entering the DataSource name and pressing Create, the next screen has a button at the top labeled Generic Delta. (For more detail, see the chapter towards the end of the extraction book.) Generic delta services support delta extraction for generic extractors according to:
- time stamp
- calendar day
- numeric pointer, such as document number or counter
Only one of these attributes can be set as the delta attribute.


Delta extraction is supported for all generic extractors (tables/views, SAP Query, and function modules). The delta queue (transaction RSA7) allows you to monitor the current status of the delta attribute.

Pasted from <http://www.ittestpapers.com/articles/714/1/SAP-BW-Interview-Questions---Part-B/Page1.html>

Q) Workbooks, as a general rule, should be transported with the role. Here are a couple of scenarios:
1. If both the workbook and its role have been previously transported, then the role does not need to be part of the transport.
2. If the role exists in both dev and the target system but the workbook has never been transported, then you have a choice of transporting the role (recommended) or just the workbook. If only the workbook is transported, an additional step must be taken after import: locate the workbook ID via table RSRWBINDEXT (in dev, and verify the same exists in the target system) and manually add it to the role in the target system via transaction PFCG. Always use Ctrl+C/Ctrl+V copy/paste when adding it manually!
3. If the role does not exist in the target system, you should transport both the role and the workbook.
Keep in mind that a workbook is an object unto itself and has no dependencies on other objects. Thus, you do not receive an error message from the transport of "just a workbook"; even though it may not be visible, it will exist (verified via table RSRWBINDEXT). Overall, as a general rule, you should transport roles with workbooks.

Q) How much time does it take to extract 1 million (10 lakh) records into an InfoCube?
A) It depends. If you have complex coding in the update rules it will take longer; otherwise it will take less than 30 minutes.

Q) What are the five ASAP methodology phases?
A) Project Preparation, Business Blueprint, Realization, Final Preparation, and Go-Live & Support. In detail:


1. Project Preparation: decision makers define clear project objectives and an efficient decision-making process (i.e., discussions with the client about needs and requirements). Project managers are involved in this phase. A project charter is issued and an implementation strategy is outlined.
2. Business Blueprint: a detailed documentation of the company's requirements (i.e., which objects need to be developed or modified depending on the client's requirements).
3. Realization: the implementation of the project takes place here (development of objects, etc.); we are typically involved in the project from this phase onwards.
4. Final Preparation: final preparation before going live, i.e., testing, conducting pre-go-live activities, and end-user training. End users are trained at the client site on how to work with the new environment, as they are new to the technology.
5. Go-Live & Support: the project has gone live and is in production. The project team supports the end users.

Q) What is the landscape of R/3 and what is the landscape of BW?
A) The BW landscape consists of a development system, a testing (quality) system, and a production system.
Development system: all implementation work is done here (analysis, object development, modification, etc.), and from here objects are transported to the testing system. Before transporting, an initial test known as unit testing (testing of objects) is done in the development system.
Testing/quality system: quality checks and integration testing are done in this system.
Production system: all the extraction takes place in this system.

Q) How do you measure the size of an InfoCube?
A) In number of records.

Q) Difference between an InfoCube and an ODS?
A) An InfoCube is structured as an (extended) star schema, where a fact table is surrounded by dimension tables linked via DIM IDs. Data-wise, cubes hold aggregated data and have no overwrite functionality. An ODS is a flat structure (flat table) with no star schema concept, holding granular (detail-level) data, and it has overwrite functionality.
Flat-file DataSources do not support 0RECORDMODE in extraction. The 0RECORDMODE values are: 'X' = before image, ' ' = after image, 'N' = new, 'A' = additive, 'D' = delete, 'R' = reverse.

Q) Difference between display attributes and navigational attributes?


A) A display attribute is used only for display purposes in the report, whereas a navigational attribute is used for drilling down in the report. We do not need to maintain a navigational attribute in the cube as a characteristic in order to drill down (that is the advantage).

Q) Some data was uploaded twice into an InfoCube. How do we correct it?
A) But how is that possible? If you loaded it manually twice, you can delete one load by request ID.

Q) Can you add a new field at the ODS level?
A) Sure you can; an ODS is essentially a table.

Q) Can several DataSources feed one InfoSource?
A) Yes, of course. For example, for loading texts and hierarchies we use different DataSources but the same InfoSource.

Q) Briefly describe the data flow in BW.
A) Data flows from the transactional system to the analytical system (BW). DataSources on the transactional system need to be replicated on the BW side and attached to an InfoSource and update rules respectively.

Q) Currency conversions can be written in update rules. Why not in transfer rules?

Q) What is the procedure to update data into data targets?
A) FULL and DELTA.

Q) We use SBWNN, SBIW1, and SBIW2 for delta updates in LIS. What is the procedure in LO-Cockpit?
A) There is no LIS in the LO cockpit. We have DataSources that can be maintained (fields appended). Refer to the white paper on LO-Cockpit extraction.

Pasted from <http://www.ittestpapers.com/articles/714/2/SAP-BW-Interview-Questions---Part-B/Page2.html>

Q) Why do we delete the setup tables (LBWG) and fill them (OLI*BW)?
A) Initially we don't delete the setup tables, but when we change the extract structure we do. A changed extract structure contains newly added fields that were not there before, so to get the required data (and to avoid redundancy) we delete and then refill the setup tables, refreshing the statistical data. The extraction setup reads the dataset that you want to process (for example, customer orders from tables VBAK and VBAP) and fills the relevant communication structure with the data. The data is stored in cluster tables, from where it is read when the initialization is run.


It is important that during the initialization phase no one creates or modifies application data, at least until the setup tables have been filled.

Q) What is the significance of an ODS?
A) It holds granular (detail-level) data.

Q) Where is PSA data stored?
A) In the PSA table.

Q) What is data size?
A) The volume of data one data target holds, in number of records.

Q) What are the different types of InfoCubes?
A) Basic and virtual (remote, SAP remote, and multi). A virtual cube is used, for example, for railway reservations, where all information has to be up to date online. For designing a virtual cube you write a function module that links to the underlying table; the virtual cube is like a structure, and whenever the table is updated the virtual cube fetches the data from the table and displays the report online. For more information, see https://www.sdn.sap.com/sdn/index.sdn and search for "Designing Virtual Cube"; there is good material on designing the function module.

Q) InfoSet query?
A) It can be made of ODS objects and characteristic InfoObjects with master data.

Q) If there are two DataSources, how many transfer structures are there?
A) In R/3 or in BW? Two in R/3 and two in BW.

Q) Routines?
A) They exist in the InfoObject, as transfer routines, as update routines, and as start routines.

Q) Describe some structures used in BEx.
A) You can create structures in rows and columns.

Q) What are the different variables used in BEx?
A) There are variables for texts, formulas, hierarchies, hierarchy nodes, and characteristic values. The variable processing types are: manual entry/default value, replacement path, SAP exit, customer exit, and authorization.


Q) How many levels can you go in reporting?
A) You can drill down to any level by using navigational attributes and jump targets.

Q) What are indexes?
A) Indexes are database indexes, which help in retrieving data quickly.

Q) Difference between the 2.1 and 3.x versions?
A) Refer to the documentation.

Q) Is it necessary to initialize each time a delta update is used?
A) No.

Q) What is the significance of KPIs?
A) KPIs indicate the performance of a company. They are key figures.

Q) After the data extraction, what is the image position?
A) After image.

Q) Reporting and restrictions?
A) Refer to the documentation.

Q) Tools used for performance tuning?
A) ST22, number ranges, deleting indexes before a load, etc.

Q) Process chains: if you have used them, how do you schedule data loads daily?
A) There should be a tool to run the job daily (SM37 jobs).

Q) Authorizations?
A) Profile Generator.

Q) Web reporting?

Q) Can a characteristic InfoObject be an InfoProvider?


A) Of course.

Q) Procedures for reporting on MultiCubes?
A) Refer to the help. A MultiCube works on a union condition.

Q) Explain transportation of objects.
A) Dev → QA and Dev → Production.

Q) What types of partitioning are there for BW?
A) There are two partitioning performance aspects for BW (cube and PSA):
a) Query data retrieval performance improvement: partitioning by, say, date range improves data retrieval by making the best use of database (data-range) execution plans and indexes (of, say, an Oracle database engine).
b) Transactional load partitioning improvement: partitioning based on expected load volumes and data element sizes improves data loading into the PSA and cubes by InfoPackages (e.g., without timeouts).

Q) How can I compare data in R/3 with data in a BW cube after the daily delta loads? Are there any standard procedures for checking them or matching the number of records?
A) You can go to R/3 transaction RSA3 and run the extractor. It will give you the number of records extracted. Then go to the BW Monitor to check the number of records in the PSA, and check whether it is the same in the monitor header tab.
A) RSA3 is a simple extractor checker program that allows you to rule out extraction problems in R/3. It is simple to use, but it only really tells you whether the extractor works. Since the records that get updated into cube/ODS structures are controlled by update rules, you will not be able to determine what is in the cube compared to what is in the R/3 environment. You will need to compare records on a 1:1 basis against records in R/3 transactions for the functional area in question; I would recommend enlisting the help of the end-user community, since they presumably know the data. To use RSA3, enter the extractor (e.g., 2LIS_02_HDR) and click Execute; you will see the record count, and you can also display the data. You are not modifying anything, so what you do in RSA3 has no effect on data quality afterwards. However, it will not tell you how many records should be expected in BW for a given load. You have that information in the monitor (RSMO) during and after data loads. From RSMO, for a given load, you can determine how many records were passed through the transfer rules from R/3, how many targets were updated, and how many records passed through the update rules. It also gives you error messages from the PSA.

Q) Types of transfer rules?
A) Field-to-field mapping, constant, variable, and routine.


Q) Types of update rules?
A) (Check box), return table.

Q) Transfer routine?
A) A routine written in the transfer rules.

Q) Update routine?
A) A routine written in the update rules.

Q) What is the difference between writing a routine in transfer rules and writing a routine in update rules?
A) If you are using the same InfoSource to update data in more than one data target, it is better to write it in the transfer rules, because you can assign one InfoSource to more than one data target, whereas whatever logic you write in the update rules is specific to one particular data target.

Q) Routine with return table?
A) Update rules generally have only one return value. However, you can create a routine on the key figure calculation tab strip by choosing the Return Table checkbox. The corresponding key figure routine then no longer has a return value but a return table, and you can generate as many key figure values as you like from one data record.

Q) Start routines?
A) Start routines can be written in both update rules and transfer rules. Suppose you want to restrict (delete) some records based on conditions before they get loaded into the data targets; you can specify this in the update rules start routine, e.g., with a DELETE on DATA_PACKAGE that removes records matching the condition (see the sketch below).

Q) X and Y tables?
A) X-table: a table that links the SIDs of a characteristic (e.g., material) with the SIDs of its time-independent navigational attributes. Y-table: a table that links those SIDs with the SIDs of the time-dependent navigational attributes. There are four types of SID tables:
X: time-independent navigational attribute SID table
Y: time-dependent navigational attribute SID table
H: hierarchy SID table
I: hierarchy structure SID table
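A minimal sketch of the start routine deletion mentioned above (BW 3.x update rules, where DATA_PACKAGE is an internal table with a header line; the key figure name /BIC/ZOPENQTY is illustrative):

* Start routine (update rules): drop records before they reach the
* data target when the open quantity is zero.
DELETE DATA_PACKAGE WHERE /bic/zopenqty = 0.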


Q) Filters and restricted key figures (real-time example)?
A) Restricted key figures you can have for an SD cube: billed quantity, billing value, and number of billing documents as RKFs.

Q) Line-item dimension (real-time example)?
A) Invoice number or document number is a real-time example of a line-item dimension.

Q) What does the number in the 'Total' column in transaction RSA7 mean?
A) The 'Total' column displays the number of LUWs that were written to the delta queue and have not yet been confirmed. The number includes the LUWs of the last delta request (for repetition of a delta request) and the LUWs for the next delta request. A LUW only disappears from the RSA7 display when it has been transferred to the BW system and a new delta request has been received from the BW system.

Q) Which table in SAP BW contains the technical name, description, and creation date of a particular report (reports created using the BEx Analyzer)?
A) While opening a particular query, press the Properties button and you will see all those details. You will also find information about the technical names and descriptions of queries in the following tables: the directory of all reports (table RSRREPDIR) and the directory of reporting component elements (table RSZELTDIR). For workbooks and their connections to queries, check the where-used list for reports in workbooks (table RSRWORKBOOK) and the titles of Excel workbooks in the InfoCatalog (table RSRWBINDEXT).

Q) What is a LUW in the delta queue?
A) From the point of view of the delta queue, a LUW can be an individual document, a group of documents from a collective run, or a whole data packet of an application extractor.


Q) Why does the number in the 'Total' column in the overview screen of transaction RSA7 differ from the number of data records displayed in the detail view?
A) The number on the overview screen corresponds to the total number of LUWs (see the first question) that were written to the qRFC queue and have not yet been confirmed. The detail screen displays the records contained in those LUWs. Both the records belonging to the previous delta request and the records that do not meet the selection conditions of the preceding delta init requests are filtered out; thus only the records that are ready for the next delta request are displayed on the detail screen. Also, the detail screen of transaction RSA7 does not take a possibly existing customer exit into account.

Q) Why does transaction RSA7 still display LUWs on the overview screen after successful delta loading?
A) Only when a new delta has been requested does the source system learn that the previous delta was successfully loaded into the BW system. Then the LUWs of the previous delta may be confirmed (and deleted). In the meantime, the LUWs must be kept for a possible repetition of the delta request. In particular, the number on the overview screen does not change after the first delta has been loaded into the BW system.

Q) Why are selections not taken into account when the delta queue is filled?
A) Filtering according to selections takes place when the system reads from the delta queue. This is necessary for performance reasons.

Q) Why is there a DataSource with 0 records in RSA7 if delta exists and has been loaded successfully?
A) Most likely this is a DataSource that does not send delta data to the BW system via the delta queue but directly via the extractor (delta for master data using ALE change pointers). Such a DataSource should not be displayed in RSA7. This error is corrected with BW 2.0B Support Package 11.

Q) Do the entries in table ROIDOCPRMS have an impact on the performance of loading from the delta queue?
A) The impact is limited. If performance problems are related to the loading process from the delta queue, refer to the application-specific notes (for example in the CO-PA area, the Logistics Cockpit area, and so on). Caution: as of Plug-In 2000.2 patch 3, the entries in table ROIDOCPRMS are as effective for the delta queue as for a full update. Note, however, that LUWs are not split during data loading for consistency reasons; this means that when very large LUWs are written to the delta queue, the actual package size may differ considerably from the MAXSIZE and MAXLINES parameters.
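As a rough sketch of where those parameters live, one might read the transfer settings for a source system from ROIDOCPRMS like this; the logical system name is illustrative, and the assumption is that SLOGSYS keys the table as in the standard control-parameter maintenance.

* Read the data transfer control parameters for a source system
* from ROIDOCPRMS (logical system name is illustrative).
DATA: ls_prms TYPE roidocprms.

SELECT SINGLE * FROM roidocprms
  INTO ls_prms
  WHERE slogsys = 'R3DCLNT100'.
IF sy-subrc = 0.
  WRITE: / 'MAXSIZE:',  ls_prms-maxsize,
         / 'MAXLINES:', ls_prms-maxlines.
ENDIF.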


Q) Why does it take so long to display the data in the delta queue (for example, approximately 2 hours)?
A) With Plug-In 2001.1 the display was changed: the user now has the option of defining the amount of data to be displayed, restricting it, selectively choosing the number of a data record, distinguishing between the 'actual' delta data and the data intended for repetition, and so on.

Q) What is the purpose of the function 'Delete data and meta data in a queue' in RSA7? What exactly is deleted?
A) You should act with extreme caution when you use the deletion function in the delta queue. It is comparable to deleting an init delta in the BW system and should preferably be executed there. You not only delete all data of this DataSource for the affected BW system, you also lose all the information concerning the delta initialization; you can then only request new deltas after another delta initialization. When you delete the data, the LUWs kept in the qRFC queue for the corresponding target system are confirmed. Physical deletion only takes place in the qRFC outbound queue if there are no more references to the LUWs. The deletion function is intended, for example, for the case where the BW system from which the delta initialization was originally executed no longer exists or can no longer be accessed.

Pasted from <http://www.ittestpapers.com/articles/714/3/SAP-BW-Interview-Questions---Part-B/Page3.html>

Q) Why does it take so long to delete from the delta queue (for example, half a day)?
A) Import Plug-In 2000.2 patch 3. With this patch, deletion performance is considerably improved.

Q) Why is the delta queue not updated when you start the V3 update in the Logistics Cockpit area?
A) Most likely a delta initialization has not yet run, or the delta initialization was not successful. A successful delta initialization (the corresponding request must have QM status 'green' in the BW system) is a prerequisite for the application data being written to the delta queue.

Q) What is the relationship between RSA7 and the qRFC monitor (transaction SMQ1)?
A) The qRFC monitor displays basically the same data as RSA7. The internal queue name must be used for selection on the initial screen of the qRFC monitor. It is made up of the prefix 'BW', the client, and the short name of the DataSource. For DataSources whose names are 19 characters long or shorter, the short name corresponds to the name of the DataSource. For DataSources whose names are longer than 19 characters (possible for delta-capable DataSources only as of Plug-In 2001.1), the short name is assigned in table ROOSSHORTN.
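As a sketch of the queue naming rule just described, the internal queue name for SMQ1 selection can be assembled like this; the DataSource name is illustrative.

* Build the SMQ1 queue name: prefix 'BW' + client + DataSource
* short name, e.g. client 100 -> BW1002LIS_02_HDR.
DATA: lv_qname TYPE trfcqout-qname.

CONCATENATE 'BW' sy-mandt '2LIS_02_HDR' INTO lv_qname.
WRITE: / 'Queue name:', lv_qname.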


In the qRFC monitor you cannot distinguish between repeatable and new LUWs; moreover, the data of a LUW is displayed there in an unstructured manner.

Q) Why is there data in the delta queue although the V3 update has not been started?
A) The data was posted in the background. In that case the records are updated directly in the delta queue (RSA7). This happens in particular during automatic goods receipt posting (MRRS). There is no duplicate transfer of records to the BW system. See Note 417189.

Q) Why does the 'Repeatable' button on the RSA7 data details screen show not only data loaded into BW during the last delta but also newly added, i.e. 'pure', delta records?
A) It was programmed so that a request in repeat mode fetches both the actually repeatable (old) data and the new data from the source system.

Q) I loaded several delta inits with various selections. For which one is the delta loaded?
A) For the delta, all selections made via delta inits are summed up; that is, a delta for the 'total' of all delta initializations is loaded.

Q) How many selections for delta inits are possible in the system?
A) With simple selections (intervals without complicated join conditions, or single values), you can make up to about 100 delta inits; it should not be more. With complicated selection conditions, it should be only 10-20 delta inits. Reason: with many selection conditions joined in a complicated way, too many WHERE clauses are generated in the generated ABAP source code, which may exceed the memory limit.

Q) I intend to copy the source system, i.e. make a client copy. What will happen with my delta? Should I initialize again after that?
A) Before you copy a source client or source system, make sure that your deltas have been fetched from the delta queue into BW and that no delta is pending. After the client copy, an inconsistency might occur between the BW delta tables and the OLTP delta tables, as described in Note 405943. After the client copy, table ROOSPRMSC will probably be empty in the OLTP, since this table is client-independent. After the system copy, the table will contain the entries with the old logical system name, which are no longer useful for further delta loading from the new logical system. The delta must be initialized in any case, since the delta depends on both the BW system and the source system. Even if no 'MESSAGE_TYPE_X' dump occurs in BW when editing or creating an InfoPackage, you should expect that the delta will have to be initialized after the copy.

Q) Is it allowed to use the functions for manual control of processes in transaction SMQ1?


A) Use SMQ1 as an instrument for diagnosis and control only. Make changes to BW queues only after informing BW Support, or only if this is explicitly requested in a note for component 'BC-BW' or 'BW-WHM-SAPI'.

Q) Despite the delta request being started after completion of the collective run (V3 update), it does not contain all documents; only another delta request loads the missing documents into BW. What is the cause of this "splitting"?
A) The collective run submits the open V2 documents to the task handler for processing, which processes them asynchronously in one or several parallel update processes. For this reason, plan a sufficiently large "safety time window" between the end of the collective run in the source system and the start of the delta request in BW. An alternative solution, with which this problem does not occur, is described in Note 505700.

Q) Despite my deleting the delta init, LUWs are still written to the delta queue. Why?
A) In general, delta initializations and deletions of delta inits should always be carried out at a time when no posting takes place. Otherwise, buffer problems may occur: if a user started an internal session at a time when the delta initialization was still active, he or she continues to post data into the queue even though the initialization has been deleted in the meantime. This is the case in your system.

Q) In SMQ1 (qRFC monitor) I have status 'NOSEND'. In table TRFCQOUT some entries have status 'READY', others 'RECORDED'. ARFCSSTATE is 'READ'. What do these statuses mean? Which values in the 'Status' field mean what, which are correct, and which are alarming? Are the statuses BW-specific or generally valid in qRFC?
A) Tables TRFCQOUT and ARFCSSTATE: status READ means that the record was read once, either in a delta request or in a repetition of the delta request. However, this does not mean that the record has successfully reached BW yet. Status READY in TRFCQOUT and RECORDED in ARFCSSTATE mean that the record has been written to the delta queue and will be loaded into BW with the next delta request or a repetition of a delta. In any case, only the statuses READ, READY, and RECORDED in both tables are considered valid. The status EXECUTED in TRFCQOUT can occur temporarily; it is set before a delta extraction starts for all records with status READ present at that time. Records with status EXECUTED are usually deleted from the queue in packages within a delta request, directly after the status is set and before a new delta is extracted. If you see such records, it means either that a process that confirms and deletes records already loaded into BW is running successfully at the moment, or, if the records remain in the table with status EXECUTED for a longer period of time, that there are probably problems deleting the records that have already been successfully loaded into BW. In this state, no more deltas are loaded into BW. Every other status indicates an error or an inconsistency. NOSEND in SMQ1 means nothing (see Note 378903). The value 'U' in the 'NOSEND' field of table TRFCQOUT is disconcerting.


Q) The extract structure was changed when the delta queue was empty. Afterwards, new delta records were written to the delta queue. When loading the delta into the PSA, it shows that some fields were shifted. The same result occurs when the contents of the delta queue are listed via the detail display. Why is the data displayed differently, and what can be done?
A) Make sure that the change to the extract structure is also reflected in the database and that all servers are synchronized. We recommend resetting the buffers using transaction $SYNC. If the extract structure change is not communicated synchronously to the server where the delta records are being created, the records are written with the old structure until the new structure has been generated. This can have disastrous consequences for the delta. When the problem occurs, the delta needs to be re-initialized.

Q) How and where can I control whether a repeat delta is requested?
A) Via the status of the last delta in the BW request monitor. If the request is red, the next load will be of type 'repeat'. If you need to repeat the last load for certain reasons, set the request in the monitor to red manually. (For the contents of the repeat, see the earlier question on the 'Repeatable' button.) Delta requests set to red, despite the data already having been updated, lead to duplicate records in a subsequent repeat if they have not been deleted from the data targets concerned beforehand.

Q) As of PI 2003.1, the Logistics Cockpit offers various types of update methods. Which update method is recommended in logistics, and according to which criteria should the decision be made?
A) See the recommendation in Note 505700.

Q) Are there particular recommendations regarding the data volume the delta queue may grow to without risking a read failure due to memory problems?
A) There is no strict limit (except for the restricted number range of the 24-digit QCOUNT counter in the LUW management table, which is of no practical importance, and the restrictions regarding the volume and number of records in a database table). When estimating "soft" limits, both the number of LUWs and the average data volume per LUW are important. As a rule, we recommend bundling data (usually documents) when writing to the delta queue, to keep the number of LUWs small (this can partly be set in the applications, e.g., in the Logistics Cockpit). The data volume of a single LUW should not be considerably larger than 10% of the memory available to the work process for data extraction (in a 32-bit architecture with a memory volume of about 1 GB per work process, 100 MB per LUW should not be exceeded). This limit is of rather little practical importance, since a comparable limit already applies when writing to the delta queue; if it is observed, correct reading is guaranteed in most cases. If the number of LUWs cannot be reduced by bundling application transactions, you should at least make sure that the data is fetched from all connected BW systems as quickly as possible. For other, BW-specific reasons, however, the frequency should not be higher than one delta request per hour.


To avoid memory problems, a program-internal limit ensures that no more than 1 million LUWs are read and fetched from the database per delta request. If this limit is reached within a request, the delta queue must be emptied by several successive delta requests. We recommend, however, not letting it get near that limit, and triggering the fetching of data from the connected BW systems as soon as the number of LUWs reaches a five-digit value.

Q) I would like to display the date the data was uploaded on the report. Usually we load the transaction data nightly. Is there any easy way to include this information on the report for users, so that they know the validity of the report?
A) If I understand the requirement correctly, you want to display the date on which data was loaded into the data target from which the report is being executed. If so, configure your workbook to display the text elements in the report. This displays the "relevance of data" field, which is the date on which the data load took place.

Q) Can we filter the fields at the transfer structure?

Q) Can we load data directly into an InfoObject without extraction? Is it possible?
A) Yes. We can copy from another InfoObject if it is the same, and we can load data from the PSA if it is already there.

Q) How many days can we keep the data in the PSA if loads are scheduled daily, weekly, or monthly?
A) We can set the retention time.

Q) How can you get the data from the client if you are working on an offshore project? Through which network?
A) VPN (Virtual Private Network). A VPN is a network through which we can connect to the client systems from offshore via RAS (Remote Access Server).

Q) How do you analyze the project at first?
A) Prepare the project plan and environment; define project management standards and procedures; define implementation standards and procedures; then testing and go-live plus support.


Q) There is one ODS and four InfoCubes. We send data to all cubes at the same time, and one cube got a lock error. How can you rectify the error?
A) Go to transaction SM66 and see which process is locked; select that PID, then go to transaction SM12 and unlock it. Lock errors like this occur when loads are scheduled in parallel.

Q) Can anybody tell me how to add a navigational attribute to the rows of a BEx report?
A) Expand the dimension in the left-side (InfoCube) panel, select the navigational attribute, and drag and drop it into the rows panel.

Q) Are there transaction codes like SMPT or STMT?
A) In current systems (BW 3.0B and R/3 4.6B) these transaction codes don't exist!

Q) What is a transactional cube?
A) Transactional InfoCubes differ from standard InfoCubes in that they have improved write access performance. Standard InfoCubes are technically optimized for read-only access and for a comparatively small number of simultaneous accesses. The transactional InfoCube was developed to meet the demands of SAP Strategic Enterprise Management (SEM): data is written to the InfoCube (possibly by several users at the same time) and re-read as soon as possible. Standard basic cubes are not suitable for this.

Q) Is there any way to delete cube contents within update rules from an ODS data source? The reason would be to delete (or zero out) a record in an "open order" cube if the open order quantity is 0. I've tried using 0RECORDMODE but that doesn't work. Also, would it be easier to write a program that runs after the load and deletes the records with a zero open quantity?
A) In the start routine of the update rules you can write ABAP code.
A) Yes, you can do it. Create a start routine in the update rules. Strictly speaking it is not "deleting cube contents with update rules"; it is only possible to prevent some content from being updated into the InfoCube using the start routine. Loop over all the records and delete each record that meets the condition "the open order quantity is 0" (see the sketch below). You also have to think about before and after images in the case of a delta upload: otherwise you may delete the change record, keep the old one, and be left with wrong information after the change.

Q) I am not able to access a node in a hierarchy directly using variables for reports. When I use transaction RSZV, it gives a message that it doesn't exist in BW 3.0 and is embedded in BEx. Can anyone tell me the other options to get the same functionality in BEx?
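A hedged sketch of that start routine (BW 3.x; the field /BIC/ZOPENQTY and the 0RECORDMODE handling are illustrative, and the before/after image caveat above still applies):

* Start routine: prevent zero-quantity open-order records from being
* updated into the cube. Before images (0RECORDMODE = 'X') are kept
* here so a delta pair is not torn apart; adjust to your delta logic.
LOOP AT DATA_PACKAGE.
  IF DATA_PACKAGE-/bic/zopenqty = 0
     AND DATA_PACKAGE-recordmode <> 'X'.
    DELETE DATA_PACKAGE.
  ENDIF.
ENDLOOP.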


A) Transaction RSZV is used only in versions earlier than 3.0B. From 3.0B onwards, this is possible in the Query Designer (BEx) itself: just right-click the InfoObject you want to use as a variable and proceed, selecting the variable type and processing type.

Q) I am wondering how I can get the values if, for example, I run a report for the month range 01/2004 - 10/2004 and the monthly value is actually the total divided by the number of months I selected. Which variable should I use?

Q) Why is it that every time I switch from InfoProvider to InfoObject, or from one item to another, while in modeling, I get the message "Reading Data" or "constructing workbench" and it runs for minutes? Is there any way to stop this?

Q) Can anyone give me information on how the BW delta works? I would also like to know about the 'before image and after image', as I am currently in a BW project and have to write start routines for delta loads.

Q) I am very new to BW and would like to clarify a doubt regarding the delta extractor. If I am correct, with delta extractors the data that has already been loaded will not be uploaded again. Take a specific scenario, sales: I have uploaded all the sales orders created until yesterday into the cube. Now say I make changes to one of the open records that was already uploaded. What happens when I schedule the load again? Will the same record be uploaded again with the changes, or will the changes be applied to the previous record?
A)

Q) In BW we need to write ABAP routines. I wish to know when, and what type of, ABAP routines we have to write. Also, are these routines written in update rules? I would be glad if this were clarified with real-time scenarios and a few examples.
A) We write our routines in the start routines of the update rules or in the transfer structure. You can choose between writing them in the start routine or directly behind the individual characteristics: in the transfer structure you click the yellow triangle behind a characteristic and choose "routine", while in the update rules you choose "start routine" or click the triangle with the green square behind an individual characteristic. Usually we only use a start routine when it does not concern one single characteristic (for example, when you have to read the same table for four characteristics). We used ABAP routines, for example:
- to convert values to uppercase (transfer structure); see the sketch after this list;
- to convert values from a third-party tool with different keys into the same keys our SAP system uses (transfer structure);
- to select only a part of the data from an InfoSource when updating the InfoCube (start routine); etc.
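A minimal sketch of the uppercase transfer routine from the first list item (BW 3.x transfer rules, where TRAN_STRUCTURE is the transfer structure and RESULT is the routine's output; the source field name is illustrative):

* Transfer routine: pass the source field through in uppercase.
RESULT = TRAN_STRUCTURE-/bic/zcustname.
TRANSLATE RESULT TO UPPER CASE.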


Pasted from <http://www.ittestpapers.com/articles/714/4/SAP-BW-Interview-Questions---Part-B/Page4.html>

Q) What is an ODS?
A) An ODS object acts as a storage location for consolidated and cleansed data (transaction data or master data, for example) at the document (atomic) level. This data can be evaluated using a BEx query. There are standard ODS objects and transactional ODS objects; in a transactional ODS object, the data is immediately available for reporting.
A transactional ODS object differs from a standard ODS object in the way it prepares data. In a standard ODS object, data is stored in different versions (new/delta, active, and change log/modified), whereas a transactional ODS object contains the data in a single version. Therefore, data is stored in precisely the same form in which it was written to the transactional ODS object by the application. In BW, you can use a transactional ODS object as a data target for an analysis process. The transactional ODS object is also required by diverse applications, such as SAP Strategic Enterprise Management (SEM), as well as other external applications.
Transactional ODS objects allow data to be available quickly: the data from this kind of ODS object is accessed transactionally, that is, data is written to the ODS object (possibly by several users at the same time) and re-read as soon as possible. It is no replacement for the standard ODS object; rather, it is an additional function for special applications. A transactional ODS object simply consists of a table for active data. It retrieves its data from external systems via fill or delete APIs; the load process is not supported by the BW system. The advantage of this structure is that its data is easy to access: it is available for reporting immediately after being loaded.

Q) What does an InfoCube contain?
A) Each InfoCube has one fact table and a maximum of 16 dimensions (13 user-defined plus 3 system-defined: time, unit, and data packet).

Q) What does the fact table contain?


A) A fact table consists of key figures. Each fact table can contain a maximum of 233 key figures. A dimension can contain up to 248 freely available characteristics.

Q) How many dimensions are in a cube?
A) 16 dimensions (13 user-defined and 3 system-defined: time, unit, and data packet).

Q) What does the SID table contain?
A) SID keys linked with the dimension tables and the master data tables (attributes, texts, hierarchies).

Q) What does the attribute table contain?
A) Master attribute data.

Q) What does the text table contain?
A) Master text data: short, medium, and long texts, plus a language key if it is language-dependent.

Q) What does the hierarchy table contain?
A) Master hierarchy data.

Q) What is the advantage of the extended star schema?
Q) Differences between the star schema and the extended star schema?
A) In the star schema, a fact table sits in the center, surrounded by dimension tables, and the dimension tables contain the master data. In the extended star schema, the dimension tables do not contain master data; instead, the master data is stored in separate master data tables divided into attributes, texts, and hierarchies. These master data tables and the dimension tables are linked with each other by SID keys. The master data tables are independent of the InfoCube and are reusable in other InfoCubes.

Q) Where in BW do you go to add a character like \ or # so that BW will accept it? This is transaction data that loads fine into the PSA but not into the data target.
A) Check transaction SPRO, then click the "glasses" button => Business Information Warehouse => Global Settings => the second item in the list (permitted extra characters; see also transaction RSKC, the character permit checker).


I hope you can use this guide (my BW is in German, so I don't know all the English descriptions).

Q) Do data packets exist even if you don't enter the master data (when created)?

Q) When are dimension IDs created?
A) When transaction data is loaded into the InfoCube.

Q) When are SIDs generated?
A) When master data is loaded into the master data tables (attributes, texts, hierarchies).

Q) How would we delete the data in an ODS?
A) By request ID, by selective deletion, and by change log entry deletion.

Q) How would we delete the data in the change log table of an ODS?
A) Context menu of the ODS → Manage → Environment → Change log entries.

Q) What extra fields does the PSA contain?
A) Four: record ID, data packet, etc.

Q) Is partitioning possible for an ODS?
A) No, it's possible only for a cube.

Q) Why partitioning?
A) For performance tuning.

Q) Have you ever tried to load data from two InfoPackages into one cube?
A) Yes.

Q) Different types of attributes?


A) Navigational attributes, display attributes, time-dependent attributes, compounding attributes, transitive attributes, and currency attributes.

Q) Transitive attributes?
A) Navigational attributes that themselves have navigational attributes are called transitive attributes.

Q) Navigational attribute?
A) Used for drill-down reporting (RRI).

Q) Display attributes?
A) You can show display attributes in a report; they are used only for display.

Q) How do you recognize whether an attribute is a display attribute?
A) In Edit Characteristic for the characteristic, on the General tab, it is flagged as "attribute only".

Q) Compounding attribute?
A)

Q) Time-dependent attributes?
A)

Q) Currency attributes?
A)

Q) Authorization-relevant object: why is authorization needed?
A)

Q) How do we convert a master data InfoObject to a data target?
A) InfoArea → InfoProvider (context menu) → Insert Characteristic as Data Target.

Q) How do we load the data if a flat file consists of both master and transaction data?


A) Using the flexible update method while creating the InfoSource.

Q) Steps in LIS extraction?
A)

Q) Steps in LO extraction?
A)
* Maintain extract structures (R/3).
* Maintain DataSources (R/3).
* Replicate the DataSource in BW.
* Assign InfoSources.
* Maintain communication structures/transfer rules.
* Maintain InfoCubes and update rules.
* Activate extract structures (R/3).
* Delete setup tables / set up extraction (R/3).
* InfoPackage for the delta initialization.
* Set up the periodic V3 update (R/3).
* InfoPackages for delta uploads.

Q) Steps in flat-file extraction?
A)

Q) Different delta types in LO?
A) Direct delta, queued delta, serialized V3 update, and unserialized V3 update.
Direct delta: with every document posted in R/3, the extraction data is transferred directly into the BW delta queue. Each document posting with delta extraction becomes exactly one LUW in the corresponding delta queue.
Queued delta: the extraction data from the application is collected in an extraction queue instead of in the update data, and can be transferred to the BW delta queue by an update collection run, as in the V3 update.


Q) What does the LO Cockpit contain?
A)
* Maintaining extract structures.
* Maintaining DataSources.
* Activating updates.
* Controlling updates.

Q) RSA6 - Maintain DataSources.
Q) RSA7 - Delta queue (allows you to monitor the current status of the delta attribute).
Q) RSA3 - Extractor checker.
Q) LBW0 - Transaction for LIS.
Q) LBWG - Delete setup tables in LO.
Q) OLI*BW - Fill setup tables.
Q) LBWE - Transaction for the Logistics extractors.
Q) RSO2 - Maintain generic DataSources.

Pasted from <http://www.ittestpapers.com/articles/714/5/SAP-BW-Interview-Questions---Part-B/Page5.html>

Q) MC21 - Create a user-defined information structure for LIS (the counterpart of an InfoSource in SAP BW).
Q) MC24 - Create update rules for a LIS information structure.
Q) PFCG - Role maintenance; assign users to roles.
Q) SE03 - Changeability of the BW namespace.


Q) RSDCUBEM - Display, change, or delete an InfoCube.
Q) RSD5 - Data packet characteristic maintenance.
Q) RSDBC - DB Connect.
Q) RSMO - Monitoring of data loads.
Q) RSCUSTV6 - Partitioning of the PSA.
Q) RSRT - Query monitor.
Q) RSRV - Analysis and repair of BW objects.
Q) RRMX - BEx Analyzer.
Q) RSBBS - Report-to-Report Interface (RRI).
Q) SPRO - IMG (to make configurations in BW).
Q) RSDDV - Maintaining aggregates.
Q) RSKC - Character permit checker.
Q) ST22 - Checking short dumps.
Q) SM37 - Scheduling background jobs.
Q) RSBOH1 - Open Hub Service: create an InfoSpoke.
Q) RSMONMESS - "Messages for the monitor" table.
Q) ROOSOURCE - Table to find out delta update methods.


Q) RODELTAM - Table for the delta modes of records (i.e., before image and after image).
Q) SMOD - Enhancement definitions.
Q) CMOD - Project management of enhancements.
Q) SPAU - Program compare (modification adjustment).
Q) SE11 - ABAP Dictionary.
Q) SE09 - Transport Organizer (Workbench Organizer).
Q) SE10 - Transport Organizer (extended view).
Q) SBIW - Implementation guide (extraction customizing in the source system).

Q) Statistical update?
A)

Q) What are process chains?
A) The transaction code is RSPC. A process chain is a sequence of processes scheduled in the background and waiting to be triggered by a specific event; in other words, process chains group processes. The process variant of the start process (start variant) is how the process chain knows where to start; there must be exactly one start variant in each process chain, and there we specify when the chain should start, either by giving a date and time or by starting immediately. Some of these processes trigger an event of their own that in turn triggers other processes. Example: start chain → delete basic cube indexes → load data from the source system to the PSA → load data from the PSA to the data target ODS → load data from the ODS to the basic cube → create indexes for the basic cube after loading data → create database statistics → roll up data into the aggregate → restart the chain from the beginning.

Q) What are process types and process variants?
A) Process types include general services, load process and subsequent processing, data target administration, reporting agent, and other BW services. The process variant (start variant) is how the process type knows when and where to start.


Q) Difference between a master data InfoPackage and a transaction data InfoPackage?
A) There are 5 tabs in a master data InfoPackage and 6 tabs in a transaction data InfoPackage; the extra tab in transaction data is Data Targets.

Q) Types of updates?
A) Full update, init delta update, and delta update.

Q) Is a full update possible while loading data from R/3 in delta mode?
A) InfoPackage → Scheduler → check the Repair Full Request flag. This is only possible when we use the MM and SD modules.

Q) InfoPackage groups?
A)

Q) Explain the status of records in the active and change log tables of an ODS when records are modified in the source system.
A)

Q) Why does it take more time to load transaction data, even when loading transactions without master data (with the checkbox "Always update data, even if no master data exists for the data" checked)?
A) Because while loading the data the system has to create SID keys for the transaction data.

Q) What are the HIDE, SELECTION, and CANCELLATION fields used for?
A) Selection fields: when we check this column, the field appears in the InfoPackage Data Selection tab. Hide fields: these fields are not transferred to the BW transfer structure. Cancellation: it reverses the posted documents' key figures by multiplying them by -1, nullifying the value; this is a reverse posting.

Q) Transporting.


A) When it comes to transporting for R/3 and BW, you should always transport all the R/3 objects first. Once you have transported all the R/3 objects to the second system, you have to replicate the DataSources into the second BW system and then transport the BW objects. First, transport all the DataSources from the first R/3 system to the second R/3 system. Second, replicate the DataSources from the second R/3 system into the second BW system. Third, transport all the BW objects from the first BW system to the second BW system. You have to send your extractors first to the corresponding R/3 QA box and replicate them to BW; then you do the transport in BW. The sequence is development, then testing, then production.

Q) Functionality of init delta and delta updates?
A)

Q) What is a change run ID?
A)

Q) Currency conversions?
A)

Q) Difference between a calculated key figure and a formula?
A)

Q) When does a transfer structure contain more fields than the communication structure of an InfoSource?
A) If we use a routine to derive a field in the communication structure from several fields in the transfer structure, the transfer structure may contain more fields.
A) The total number of InfoObjects in the communication structure and the extract structure may differ, since InfoObjects can be copied to the communication structure from all the extract structures.

Q) What is the PSA, what is its technical name, and what are its uses?


A) When we want to delete the data in an InfoProvider and re-load it, we can load directly from the PSA without extracting from R/3 again.
A) It is also used for cleansing purposes.

Q) Variables in reporting?
A) Variables for characteristic values, text, hierarchies, hierarchy nodes, and formula elements.

Q) Variable processing types in reporting?
A) Manual entry, replacement path, SAP exit, authorization, and customer exit.

Q) Why do we use the RSR00001 enhancement?
A) For enhancing the customer exit in reporting (see the sketch below).

Q) What is the use of filters?
A) A filter restricts data.

Q) What is the use of conditions?
A) To retrieve data based on particular conditions, such as less than, greater than, less than or equal, etc.

Q) Difference between filters and conditions?
A)

Q) What is NODIM?
A) It returns a value stripped of its unit or dimension; for example, with NODIM, 5 litres + 5 kg = 10.

Q) What are exceptions for? How can I get the pink color?
A) Exceptions differentiate values with colors; by assigning the relevant color you can get pink.

Q) Why is SAPLRSAP used?
A) The customer exit function modules in SAPLRSAP are used for enhancing DataSources in R/3.
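A hedged sketch of such a reporting customer exit (enhancement RSR00001, function module EXIT_SAPLRRS0_001, include ZXRSRU01): defaulting a hypothetical variable ZCURMON to the current month at processing step I_STEP = 1. The variable name and InfoObject are illustrative.

* Customer exit for BEx variables: default ZCURMON (illustrative)
* to the current 0CALMONTH before the variable popup appears.
DATA: ls_range TYPE rsr_s_rangesid.

CASE i_vnam.
  WHEN 'ZCURMON'.
    IF i_step = 1.                 " before the selection popup
      ls_range-sign = 'I'.
      ls_range-opt  = 'EQ'.
      ls_range-low  = sy-datum(6). " YYYYMM
      APPEND ls_range TO e_t_range.
    ENDIF.
ENDCASE.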


Q) What are workbooks and their uses?
A)

Q) Where are workbooks saved?
A) Workbooks are saved in the Favorites.

Q) Can Favorites be accessed by other users?
A) No, they need authorization.

Q) What is an InfoSet?
A) An InfoSet is a special view of a dataset, such as a logical database, table join, table, or sequential file, and is used by SAP Query as a data source. InfoSets determine the tables, or fields within those tables, that can be referenced by a report. In most cases, InfoSets are based on logical databases. SAP Query includes a component for maintaining InfoSets. When you create an InfoSet, a DataSource in an application system is selected.
In BW, you can navigate to an InfoSet query using one or more ODS objects or InfoObjects. You can also drill through to BEx queries and InfoSet queries from a second BW system that is connected as a data mart.
The InfoSet query functions allow you to report using flat data tables (master data reporting). Choose InfoObjects or ODS objects as data sources; these can be connected using joins. You define the data sources in an InfoSet. An InfoSet can contain data from one or more tables that are connected to one another by key fields. The data sources specified in the InfoSet form the basis of the InfoSet query.

Pasted from <http://www.ittestpapers.com/articles/714/6/SAP-BW-Interview-Questions---Part-B/Page6.html>

Q) LO update methods?
A)


• Synchronous update (V1 update): the statistics update is carried out at the same time as the document update, in the same task. • Asynchronous update (V2 update): the document update and the statistics update take place separately, in different tasks. • Collective update (V3 update): again, the document update is separate from the statistics update; however, in contrast to the V2 update, the V3 collective statistics update must be scheduled as a job. Successfully scheduling the update ensures that all the necessary information structures are properly updated when new or existing documents are processed. Scheduling intervals should be based on the amount of activity in the particular OLTP system: for example, a development system with little or no volume of new documents may only need the V3 update on a weekly basis, while a full production environment with hundreds of transactions per hour may have to be updated every 15 to 30 minutes. The SAP standard background job scheduling functionality can be used to schedule the V3 updates. You can verify that all V3 updates have completed successfully via transaction SM13, which takes you to the "Update Records: Main Menu" screen; there, enter * as the user (for all users), select the 'All' radio button and press Enter. Any outstanding V3 updates will be listed. While an unexecuted V3 update will not hinder the OLTP system, administering the V3 update jobs properly keeps your information structures current and improves overall performance.
Business Content
Business Content is the umbrella term for the preconfigured BW objects delivered by SAP. These objects provide ready-made solutions to basic business information requirements and are used to accelerate the implementation of a BW. Business Content includes:


R/3 extractor programs, DataSources, InfoObjects, InfoSources, InfoCubes, Queries, Roles, and Workbooks. From the Grouping menu, choose the additional Business Content objects that you want to include. Groupings gather together all of the objects from a single area: Only Necessary Objects: only those additional objects that are needed to activate the objects you have selected are included (minimum selection). In Data Flow Before: all the objects that pass data on to another object are collected. In Data Flow Afterwards: all the objects that receive data from another object are collected. In Data Flow Before and Afterwards: all the objects that both deliver and receive data are collected. Backup for System Copy: you use this setting to collect some of the objects into a transport request; this request can be imported again after a system copy has been made.
Q) I found that 0FISCYEAR has no master data table, while 0FISCPER relates to table T009. Does anyone know how the system gets the data for these two InfoObjects? A) From the context menu of the source system choose Transfer Global Settings; based on 0FISCVARNT, the system derives the values for 0FISCYEAR and 0FISCPER.
Q) I am facing an odd problem in my dev box. I am using 0FIAP_O03 to load 0FIAP_C03 (InfoSource 0FI_AP_4 loads the ODS). I have loaded the ODS with R/3 data without any problem: I saw the data in the New Data table, and after activation I can see the data in the Active Data table and the Change Log table. Now, when I try the delta initialization into the cube from the ODS data, the request fails miserably. All the update rules between the ODS and the cube are active, and all of them are one-to-one mappings (not a single ABAP routine in the update rules). If the cube and the corresponding update rules are active, and the data loads perfectly up to the ODS, why would the load from the ODS to the cube fail? (There are no lock entries in the system.) Does anyone have any idea?


A) You should have a job log in SM37; if not, the job was never started.
Q) I have checked SM37 and the job shows as complete, but the request status under InfoCube → Manage → Requests tab is still yellow. Suspecting a false status, I checked for the data in the cube: no data there either. Do you have any clue why the job log would show complete while no data appears in the cube? Regarding the export DataSource issue, I tried that too before posting the question. In fact, why would you generate an export DataSource when one is already created by creating update rules between the ODS and the cube? Sorry for the silly question, but any help in this regard is highly appreciated. A) Maybe you have to do 'Generate Export DataSource' by right-clicking on the ODS.
Q) I'm trying to create a simple standard [order - flat data] ODS. When I try activating it in the InfoProvider view of RSA1, it gets saved but not activated, and throws errors. I'm working on BW 3.1, and I enabled only BEx reporting in the settings. The errors are: 1. I couldn't find the 0RECORDMODE object (BW Delta process: update mode) among the key figure or characteristic InfoObjects on the left-hand side, so I tried to insert 0RECORDMODE into the Data Fields folder on the right-hand side. The object is found when searching in the Insert InfoObjects option (right-click on the Data Fields folder), but once I enter the object and choose Continue, it is not added to the Data Fields folder along with my key figures. 2. 'Could not write to the output device T' error: when I then try to activate the ODS object, I get the error 'could not write to the output device T' in the status bar, and the status remains inactive. What could be the error?
Q) I need to populate the results of my query into another InfoCube. Any ideas on how I should go about this?
Q) For the current project I am looking at, they have two major companies in their group. The companies are on different clients in R/3, and their configuration and system setup differ. Is it advisable to separate their BW systems into two different clients as well, or is it recommended to fit both in one? What's the best practice?
Q) I am creating a CO-PA DataSource. I successfully activated the Business Content DataSources in R/3. Then I tried to create a CO-PA DataSource for transaction data using KEB0; however, I cannot see any value fields (the VV* fields). Characteristics from the segment level and segment table, as well as calculated key figures, are available, but not the value fields. Is there any way to make the value fields available?


Q) While executing a query, the default name of the Excel workbook is BOOK1, BOOK2, etc., but my client wants the default name to be the same as the query name. A) Embed the query in a workbook saved with the name of the query, and have your client open the workbook instead of the query.
Q) Considering that this InfoCube has 6 dimensions, 30 characteristics and 20 key figures, do you see any way to make my upload process easier? Do you think the server will support that amount of data, and which considerations should I take into account to make sure the upload process runs?
Q) We need to find the table in which query and variable assignments are stored. We must read this table in a user exit to determine which variables are used in a query. A) Check out tables RSZELTDIR and RSZCOMPDIR for the BEx query elements. From a previous posting: the variable table is RSZGLOBV; for queries, get the query ID from table RSRREPDIR (field COMPUID) and use this ID in the tables starting with RSZEL* (a hedged lookup sketch follows below). In the user exit itself, check the variable name via VNAM (e.g. VNAM = 'ZVC_FY1' for a characteristic variable) and the processing step via I_STEP: step 1 = before the variable screen, step 2 = after the variable screen, step 3 = all variables processed together.
Q) The ODS actually has data up to date (09 Dec), coming from two DataSources, but the InfoCube has data only up to 8 November, because we deleted a few requests after a mismatch of notification complaints was encountered during reporting. When we try to update the data through "Update ODS data in data target" with the delta update option, we get the error "Delta update for ZUODEC02 (ODS) is invalidated". Please let me know the solution to this problem. Can we initialize the delta update again for the cube?
Q) How is the display of the USD currency controlled, to be seen as USD or $? A) You can control the currency display with the following customizing point: BW Customizing Implementation Guide → Reporting-relevant Settings → General Reporting Settings → Set Alternative Currency Display. In this table you can specify the symbol or string you want to use; if the table has no entry for USD, the symbol used in BEx output is $.
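A hedged sketch of the table lookup described in the answer above, assuming a hypothetical query ZMY_QUERY. Note that RSZELTXREF yields only the first level of element references, so a complete solution would recurse through it; verify the field names in SE11 on your release.

DATA: l_compuid LIKE rsrrepdir-compuid,
      lt_xref   TYPE TABLE OF rszeltxref,
      ls_xref   TYPE rszeltxref,
      ls_var    TYPE rszglobv.

* Query header: technical name -> element UID
SELECT SINGLE compuid FROM rsrrepdir INTO l_compuid
  WHERE compid  = 'ZMY_QUERY'          " hypothetical query name
    AND objvers = 'A'.

* First-level element references of the query
SELECT * FROM rszeltxref INTO TABLE lt_xref
  WHERE seltuid = l_compuid
    AND objvers = 'A'.

* Keep only the referenced elements that are variables
LOOP AT lt_xref INTO ls_xref.
  SELECT SINGLE * FROM rszglobv INTO ls_var
    WHERE varuniid = ls_xref-teltuid
      AND objvers  = 'A'.
  IF sy-subrc = 0.
    WRITE: / ls_var-vnam.              " variable technical name
  ENDIF.
ENDLOOP.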


Q) Deleting data from the PSA? A) Context menu of the PSA → Delete Data; or context menu → Edit Several Requests, delete the required request, and delete the request in the Reconstruction tab of the cube's Manage screen.
Q) If you update data from an ODS into a data target, the system generates an InfoSource with which prefix? A) It generates an InfoSource with the prefix 8 followed by the ODS name.
Q) How can you physically check a data load? A) Watch the bottom status bar while updating: the tRFC naming convention is shown while it executes; just have a look when you update from the ODS.
Q) What is an aggregate? A) Aggregates are small or "baby" cubes, a subset of an InfoCube. Flat aggregate: when an aggregate has no more than 15 characteristics, the system can generate it as a flat aggregate (each characteristic becomes a line-item dimension) to increase performance. Roll-up: when data is loaded into the cube again, you must roll up to make the new data available in the aggregates.
Q) X & Y tables? A) X and Y tables consist only of SID key fields: the X table holds the SID-to-key relationship plus one SID column per time-independent navigational attribute; the Y table holds the SID-to-key relationship, a timestamp, and one SID column per time-dependent navigational attribute.
Q) Routine with return table? A) Update rules generally have only one return value. However, in the key figure calculation tab you can create a routine with the option Return Table; the corresponding key figure routine then no longer has a return value but a return table, and you can generate as many key figure values as you like from one data record. In the routine editor you find the calculated characteristic values in the structure ICUBE_VALUES; change these values accordingly (in the example: Employee), fill the field for the relevant key figure (in the example: Sales revenue), and use this to fill the return table RESULT_TABLE (a hedged sketch follows below).
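A minimal sketch of such a return-table routine, assuming the cost-split scenario this document describes elsewhere (total expenses split across the employees of a cost center). The lookup table ZEMP and all field names are hypothetical; COMM_STRUCTURE, ICUBE_VALUES, RESULT_TABLE and RETURNCODE are the generated parameter names of the routine editor.

* Split one cost-center record into one record per employee
DATA: ls_result LIKE icube_values,
      lt_emp    TYPE TABLE OF zemp,           " hypothetical lookup table
      ls_emp    TYPE zemp,
      l_count   TYPE i.

SELECT * FROM zemp INTO TABLE lt_emp
  WHERE costcenter = comm_structure-costcenter.
DESCRIBE TABLE lt_emp LINES l_count.
CHECK l_count > 0.

LOOP AT lt_emp INTO ls_emp.
  ls_result          = icube_values.          " copy the characteristics
  ls_result-employee = ls_emp-employee.       " vary the characteristic
  ls_result-/bic/zexpense =
    comm_structure-/bic/zexpense / l_count.   " split the key figure
  APPEND ls_result TO result_table.
ENDLOOP.
RETURNCODE = 0.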


Q) What is line-item data, and in which scenario do you use a line-item dimension? A) A line-item dimension in a fact table does not have a dimension table; it connects directly to the SID table of its sole characteristic. When there is only one characteristic in a dimension, the dimension table can be eliminated, and the fact table is linked directly to the SID table.
Compressing InfoCubes
Use: When you load data into the InfoCube, entire requests can be inserted at the same time. Each of these requests has its own request ID, which is included in the fact table in the packet dimension. This makes it possible to pay particular attention to individual requests; one advantage of the request ID concept is that you can subsequently delete complete requests from the InfoCube. However, the request ID concept can also cause the same data record (all characteristics agree, with the exception of the request ID) to appear more than once in the fact table. This unnecessarily increases the volume of data and reduces performance in reporting, as the system has to aggregate over the request ID every time you execute a query. Using compression, you can eliminate these disadvantages and bring the data from different requests together into one single request (request ID 0). This function is critical, because compressed data can no longer be deleted from the InfoCube by request ID; you must be absolutely certain that the data loaded into the InfoCube is correct.
Functions: You can choose request IDs and release them for compression. You can schedule the function immediately or in the background, and link it to other events. Compressing one request takes approx. 2.5 ms per data record. With non-cumulative InfoCubes, compression has an additional effect on query performance: the marker for non-cumulatives is updated, which means that, on the whole, less data is read for a non-cumulative query and the response time is therefore reduced. If you run compression for a non-cumulative InfoCube, the compression time (including the time to update the markers) is about 5 ms per data record. If you are using an Oracle database as your BW database, you can also report on the relevant InfoCube while the compression is running; with other vendors' databases, you will see a warning if you try to report on an InfoCube while compression is running, and you can only report on it once the compression has finished. If you want to avoid the InfoCube containing entries whose key figures are all zero (from reversal postings, for example), you can run a zero-elimination at the same time as the compression; in this case, the entries where all key figures equal 0 are deleted from the fact table.


Zero-elimination is permitted only for InfoCubes in which exclusively key figures with the aggregation behavior 'SUM' appear; in particular, you are not permitted to run zero-elimination with non-cumulative values.
Activities: For performance reasons, and to save memory space, compress a request as soon as you have established that it is correct and will no longer need to be removed from the InfoCube. Pasted from <http://www.ittestpapers.com/articles/714/7/SAP-BW-Interview-Questions---Part-B/Page7.html>
Q) Where can you convert currencies? A) At two places: at the update rule level, and at the front end (in reports).
Q) You can create InfoSpokes only on 4 object types? A) ODS objects, InfoCubes, and master data attributes and texts; not on hierarchies or MultiProviders. InfoSpokes are for offloading data from BW, not only into flat files but also to the application server and to database tables.
Q) If you use process chains, you can automate complex schedules in BW with event-controlled processing, visualize the schedule using network graphics, and centrally control and monitor the processes.
Q) Transportation: for BEx you can have only one transport request. There should always be one open transport if you want to create or edit queries; all the queries you create or edit are assigned to this request. If there is no open request, you cannot edit or create any queries. Once you have some queries in a request and want to transport them, you can release the request and immediately create a new one, so that you can again create or change queries.


Q) Ours is a SAP BW with a non-SAP source system. Every day we check the data loading through the monitor for client information. Is there a way to connect our mail server to it and get automatic e-mails from the BW server every time after the transaction data is loaded? A) Go to transaction SBWP; there you can set up automatic mails.
A) Write a transfer routine for the InfoObject to handle source values that exceed the length of its data type (a hedged sketch follows below).
Q) How can we stop loading data into an InfoCube? A) First find the job name from the header tab strip of the monitor screen for this load. In SM37 (job monitoring) select this job, and from the menu you can delete the job; there are some options, just check them out in SM37. You also have to delete the request in BW. Cancellation is not advisable for a delta load.
Q) If the ship-to address is changed, will it trigger a delta for the 2LIS_11_VAITM extractor? A) If you mean changing the "ship to" address in the ship-to master data: that has nothing to do with transactional data, so there will be no delta in the 2LIS_11_VAITM extractor. It is different when the address is changed for the ship-to on the sales order document itself (not master data); that address is saved in the ADRC table.
Q) How do you reset the data mart status in the ODS, and which transaction is used? A) Go to the ODS and from the context menu choose Manage; reset the data mart status (third column). If necessary, restart the load from the data mart (InfoSource = "8<ODS-name>") using the Update tab → Initialize Delta Process → Initial load without data transfer, after resetting the data mart status.
Q) We are facing a problem with authorizations for BPS. We have a large number of agents that need to enter plan data via a layout. To simplify the control of the necessary authorizations, we would like to filter via something similar to a user exit, using a function module, to avoid defining authorization objects for each of the agents who have access to the system. Right now we are not sure if a user exit concept is available as it is for BW variables. A) In BPS you can use user-specific variables, or you can set up a variable of type exit. You can also have a variable of type authorization, which uses the security/authorization of the BW system.
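A minimal sketch of such a routine, written here as a 3.x transfer rule routine under the assumption of a CHAR-60 target InfoObject; the source field ZZTEXT is hypothetical, and RESULT/RETURNCODE are the generated routine parameters.

* Condense the incoming value; the assignment to RESULT truncates
* it to the InfoObject's defined length automatically.
DATA: l_text(255) TYPE c.

l_text = tran_structure-zztext.     " hypothetical long source field
CONDENSE l_text.
RESULT = l_text.                    " truncated to the target length
RETURNCODE = 0.                     " 0 = record is OK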


Q) Here are the steps for a process chain. A) Call transaction RSPC. Create a process chain with the Start process; here you can schedule the whole process chain (immediately, by an event, by day, and so on). Using the process types, insert: the InfoPackage you need to load the data from the source system to the ODS → activate the ODS → update the ODS into the InfoCube → roll up the data from the InfoCube into the aggregate. The connections between the steps are made by events; you create them by dragging with the right mouse button from the predecessor process to the successor process, and the system asks whether the link should run independently of errors, only on error, or only on success. In this way you create the process chain. Copying process chains: in the RSPC view, open the process chain to be copied and type COPY in the transaction field at the top left of the screen; the process chain is then copied. I am using BW 3.1C; perhaps this works in 3.0 as well.
Q) I want to add ADRNR (address number) of the ship-to party to the standard extractor in LBWE. We have one-time ship-to recipients, and we want to be able to report on the state of the recipient rather than the state of the sold-to; I just wanted verification that I might be going in the right direction. In order to add this field, could I add ADRNR to the include MCPARTUSR section of the MCPARTNER structure (hoping those fields would then be available in the MCVBAK and MCVBAP communication structures)? Or add an additional append ZMCPART, and then populate the field with a user exit during the extraction process? Or I could add it directly to MCVBAK in a "Z" append and then populate the field with a user exit during extraction. Has anyone attempted something like this before? Is that what I need to do to get it into the communication structures for the LBWE logistics cockpit to use? I haven't seen many posts or documentation about the specifics; I saw a bunch on master data augmentation, but not on the transaction extractors. A) We ultimately did add a few fields to the structure and then populated them with a user exit (see the hedged sketch below). An alternative: every document is associated with an address, so you could give 0DOCUMENT an attribute called ADDR, or a full set of address attributes, and then build an InfoSet on R/3 to populate the 0DOCUMENT object directly with addresses, treating the address as "master" data of the 0DOCUMENT object rather than part of the transactional data. I THINK it would work, but am not certain yet.
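A hedged sketch of such a user exit (enhancement RSAP0001, function exit EXIT_SAPLRSAP_001, include ZXRSAU01), assuming a hypothetical append field ZZADRNR on the extract structure MC11VA0ITM of 2LIS_11_VAITM, filled with the ship-to partner's address number from VBPA.

DATA: ls_vaitm TYPE mc11va0itm,
      l_tabix  LIKE sy-tabix.

CASE i_datasource.
  WHEN '2LIS_11_VAITM'.
    LOOP AT c_t_data INTO ls_vaitm.
      l_tabix = sy-tabix.
      SELECT SINGLE adrnr FROM vbpa INTO ls_vaitm-zzadrnr
        WHERE vbeln = ls_vaitm-vbeln
          AND posnr = '000000'               " header-level partner
          AND parvw = 'WE'.                  " ship-to party
      IF sy-subrc = 0.
        MODIFY c_t_data FROM ls_vaitm INDEX l_tabix.
      ENDIF.
    ENDLOOP.
ENDCASE.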


Q) Does anybody know how to create data marts for master data characteristics? For example, I have two master data characteristics, 0MAT_PLANT and ZMATPLT, in the same BW system; now I want to upload the data from 0MAT_PLANT to ZMATPLT using a delta-enabled InfoPackage. A) Data marts are a functionality of an ODS (you need an extraction program, created by the ODS, to make the upload into the data target); they are not a functionality of master data tables. Therefore create one ODS for your object, like 0MAT_PLANT.
Q) To check the version of a particular InfoObject, use table RSDIOBJ.
Q) I have a break on one document between R/3 and BW. The document exists in BSEG but is missing from BW; all the other documents posted during that time were extracted into BW except this one. What could be the reason? A) The record may be missing because of logic in the update rules or transfer rules; there may be a filter on that particular record in the update or transfer rules. First check whether the record exists in RSA3, then check in the PSA whether the record reached BW, and then check the update rules.
Q) What's the difference between R/3 drill-down reporting and BW reporting? These two kinds of reporting seem to function similarly, with slice and dice. If they function similarly, does that mean we don't have to implement BW and can just use the R/3 drill-down reporting tool? A) The major benefits of reporting with BW over R/3 are performance and analysis. 1. Performance: heavy reporting alongside regular OLTP transactions can produce a lot of load on both R/3 and the database (CPU, memory, disks, etc.); just look at the load on your system during a month end, quarter end, or year end, and imagine that occurring even more frequently. 2. Data analysis: BW uses data warehouse and OLAP concepts for storing and analyzing data, whereas R/3 was designed for transaction processing. With a lot of work you can get the same analysis out of R/3, but it will most likely be easier from BW. Pasted from <http://www.ittestpapers.com/articles/714/8/SAP-BW-Interview-Questions---Part-B/Page8.html>
What is ODS? It is an operational data store. The ODS is a BW architectural component that sits between the PSA (Persistent Staging Area) and InfoCubes and allows BEx (Business Explorer) reporting. It is not based on the star schema and is used primarily for detail reporting rather than for dimensional analysis. ODS objects do not aggregate data the way InfoCubes do; data is loaded into an ODS object by inserting new records, updating existing records, or deleting old records, as specified by the RECORDMODE value. *-- Viji

1. How much time does it take to extract 1 million records from an InfoCube? 2. How much time does it take to load 1 million records into an InfoCube? 3. What are the four ASAP methodologies? 4. How do you measure the size of an InfoCube? 5. Difference between an InfoCube and an ODS? 6. Difference between display attributes and navigational attributes? *-- Kiran


1. Ans: It depends; if you have complex coding in the update rules it will take longer, otherwise it will take less than 30 minutes. 3. Ans: Project preparation, requirements gathering, gap analysis, project realization. 4. Ans: In number of records (see the hedged sketch below for counting the fact table). 5. Ans: An InfoCube is structured as an (extended) star schema, where a fact table is surrounded by dimension tables which connect to SIDs; data-wise, you have aggregated data in the cubes. An ODS is a flat structure (flat table) with no star schema concept, and it holds granular (detail-level) data. 6. Ans: A display attribute is used only for display purposes in a report, whereas a navigational attribute is used for drilling down in a report. We don't need to maintain the navigational attribute in the cube as a characteristic (that is the advantage) to drill down. *-- Ravi
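As a hedged illustration of answer 4: the record count of a cube can be read from its F fact table, whose name follows the /BIC/F<cubename> convention for customer cubes (ZSALES below is hypothetical); transaction LISTSCHEMA shows the actual tables belonging to a cube.

DATA: l_tabname TYPE tabname VALUE '/BIC/FZSALES', " hypothetical cube ZSALES
      l_rows    TYPE i.

* Dynamic FROM clause, so the report compiles without the cube existing
SELECT COUNT(*) FROM (l_tabname) INTO l_rows.
WRITE: / 'F fact table rows:', l_rows.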

Q1. SOME DATA IS UPLOADED TWICE INTO THE INFOCUBE. HOW TO CORRECT IT? Ans: But how is that possible? If you loaded it manually twice, you can delete it by request. Q2. CAN YOU ADD A NEW FIELD AT THE ODS LEVEL? Sure you can; an ODS is nothing but a table. Q3. CAN A NUMBER OF DATASOURCES HAVE ONE INFOSOURCE? Yes, of course; for example, for loading texts and hierarchies we use different DataSources but the same InfoSource. Q4. BRIEF THE DATA FLOW IN BW. Data flows from the transactional system to the analytical system (BW); the DataSource on the transactional system needs to be replicated on the BW side and attached to an InfoSource and update rules respectively. Q5. CURRENCY CONVERSIONS CAN BE WRITTEN IN UPDATE RULES. WHY NOT IN TRANSFER RULES? Q6. WHAT IS THE PROCEDURE TO UPDATE DATA INTO DATA TARGETS? Full and delta. Q7. AS WE USE SBWNN, SBIW1, SBIW2 FOR DELTA UPDATE IN LIS, WHAT IS THE PROCEDURE IN LO-COCKPIT? No LIS in LO cockpit; we have DataSources that can be maintained (append fields). Refer to the white paper on LO cockpit extractions. Q8. SIGNIFICANCE OF ODS. It holds granular data. Q9. WHERE IS THE PSA DATA STORED? In PSA tables. Q10. WHAT IS DATA SIZE? The volume of data one data target holds (in number of records). Q11. DIFFERENT TYPES OF INFOCUBES. Basic and virtual (remote, SAP remote and multi). Q12. INFOSET QUERY. Can be made of ODS objects and InfoObjects. Q13. IF THERE ARE 2 DATASOURCES, HOW MANY TRANSFER STRUCTURES ARE THERE? In R/3 or in BW? 2 in R/3 and 2 in BW.


Q14. ROUTINES? They exist in the InfoObject (transfer routine), transfer rules, update rules, and as start routines. Q15. BRIEF SOME STRUCTURES USED IN BEX. Rows and columns; you can create reusable structures. Q16. WHAT ARE THE DIFFERENT VARIABLES USED IN BEX? Variable with default entry, replacement path, SAP exit, customer exit, authorization. Q17. HOW MANY LEVELS CAN YOU GO IN REPORTING? You can drill down to any level you want using navigational attributes and jump targets. Q18. WHAT ARE INDEXES? Indexes are database indexes, which help in retrieving data quickly. Q19. DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS. Refer to the documentation. Q20. IS IT NECESSARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED? No. Q21. WHAT IS THE SIGNIFICANCE OF KPIs? KPIs indicate the performance of a company; they are key figures. Q22. AFTER THE DATA EXTRACTION, WHAT IS THE IMAGE POSITION? After image (correct me if I am wrong). Q23. REPORTING AND RESTRICTIONS. Refer to the documentation. Q24. TOOLS USED FOR PERFORMANCE TUNING. ST* transactions, number range buffering, deleting indexes before a load, etc. Q25. PROCESS CHAINS: IF YOU ARE USING THEM, HOW WILL YOU SCHEDULE DATA DAILY? There should be some tool to run the jobs daily (SM37 jobs). Q26. AUTHORIZATIONS. Profile Generator. Q27. WEB REPORTING. What are you expecting? Q28. CAN A CHARACTERISTIC BE AN INFOPROVIDER? CAN AN INFOOBJECT BE AN INFOPROVIDER? Of course. Q29. PROCEDURES OF REPORTING ON MULTICUBES. Refer to the help; a MultiCube works on a union condition. Q30. EXPLAIN TRANSPORTATION OF OBJECTS. Dev → QA and Dev → Prod. Pasted from <http://www.sap-img.com/business/sap-bw-interview-questions.htm>
1. What are the differences between OLAP and OLTP applications? OLAP: a. summarized data b. read-only c. not optimized for transactional applications d. lots of historical data. OLTP: a. detailed data


b. read/write c. optimized for transactional applications d. not much historical data. 2. What is a star schema? A fact table at the center, surrounded by (linked to) dimension tables. 3. What is a slowly changing dimension? A dimension containing characteristics that change over time; for example, an employee's job title changes over a period of time. 4. What are the advantages of the extended star schema of BW vs. the classic star schema? a. use of generated numeric keys for faster access b. external hierarchies c. multi-language support d. master data common to all cubes e. slowly changing dimensions supported f. aggregates in their own tables for faster access. 5. What is the namespace for BW? All SAP objects start with 0, and customer objects with A-Z; tables begin with /BI0/ for SAP and /BIC/ for customers; all generated objects start with 1-8 (like export DataSources); prefix 9A is used in APO. 6. What is an InfoObject? A business object like customer, product, etc. InfoObjects are divided into characteristics and key figures: characteristics are evaluation objects like customer, and key figures are measurable objects like sales quantity; characteristics also include special objects like unit and time. 7. What are the data types supported by characteristics? NUMC, CHAR (up to 60), DATS and TIMS. 8. What is an external hierarchy? Presentation hierarchies for characteristic values, stored in their own (hierarchy) tables. 9. What are time-dependent texts/attributes of characteristics? If a text (for example the name of a product or a person) or an attribute (for example a job title) changes over time, it must be marked as time dependent. 10. Can you create your own time characteristics? No. 11. What are the types of attributes? Display-only and navigational: display-only attributes are only for display and no analysis can be done on them; navigational attributes behave like regular characteristics. For example, if customer has country as a navigational attribute, you can analyze the data by customer and country. 12. What is Alpha conversion? Alpha conversion stores values consistently by prefixing numeric values with zeros: for example, if material is defined as NUMC 6, the number 1 is stored as 000001 but displayed as 1; this removes inconsistencies between 01 and 001 (see the hedged snippet below). 13. What is the alpha check execution program? It is used to check consistency in BW 2.x before upgrading the system to 3.x; the transaction is RSMDCNVEXIT. 14. What is the attributes-only flag? If the flag is set, no master data is stored; the InfoObject is only used as an attribute of other characteristics, for example comments on an AR document. 15. What is compounding? It defines the superior InfoObject that must be combined to define an object; for example, when you define cost center, controlling area is the compounding (superior) object.
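A hedged illustration of the Alpha conversion in item 12, using the standard conversion-exit function modules:

DATA: l_matnr(6) TYPE c.

* External -> internal format: '1' becomes '000001'
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
  EXPORTING
    input  = '1'
  IMPORTING
    output = l_matnr.

* Internal -> external format: '000001' is displayed as '1'
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_OUTPUT'
  EXPORTING
    input  = l_matnr
  IMPORTING
    output = l_matnr.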


16. What are the BEx options for characteristic value display (F4 help) in query definition and execution? This defines how the data is displayed in the query definition screen or when the query is executed; the options are: from the data displayed, from the master data table (all values), and from the dimension table. For example, assume you have 100 products in all and 10 products in a cube, and in BEx you display the query for 2 products; the options for product will then display different data: a. selective (posted) data only: 2 products b. dimension data: 10 products c. from master data: all 100 products. 17. What are the data types allowed for key figures? Amount, quantity, number, integer, date and time. 18. What is the difference between amount/quantity and number? Amounts and quantities always come with units; for example, sales is an amount and inventory is a quantity. 19. What are the aggregation options for key figures? If you are defining prices, you may want to set "no aggregation"; otherwise you can define MAX, MIN or SUM. You can also define exception aggregation like first, last, etc., which is helpful for headcounts; for example, for a monthly inventory count key figure, you want the count as of the last day of the month. 20. What is the maximum number of key figures you can have in an InfoCube? 233. 21. What is the maximum number of characteristics you can have per dimension? 248. Pasted from <http://gleez.com/articles/career-job-skills/interview/sap-interview-questions> 22. What are the nine decision points of data warehousing? a. identify the fact table b. identify the dimension tables c. define the attributes of the entities d. define the granularity of the fact table e. pre-calculated key figures f. slowly changing dimensions g. aggregates h. how long data will be kept i. how often data is extracted. 23. How many dimensions are in a cube? 16 in total, of which 3 are predefined (time, unit and request); the customer is left with 13 dimensions. 24. What is a SID table, and its advantages? The SID (surrogate ID) table is the interface between master data and the dimension tables. Advantages: a. numeric keys as indexes for faster access b. master data independent of InfoCubes c. language support d. slowly changing dimension support. 25. What are the other tables created for master data? a. P table: time-independent master data attributes b. Q table: time-dependent master data attributes c. M view: combines P and Q d. X table: interface between the master data SIDs and the SIDs of the time-independent navigational attributes (P is linked to the X table) e. Y table: interface between the master data SIDs and the SIDs of the time-dependent navigational attributes (Q is linked to the Y table).


26. What is the transfer routine of the InfoObject? It is like a start routine; it is independent of the DataSource and valid for all transfer rules, and you can use it to define global data and global checks. 27. What is the DIM ID? DIM IDs link the dimension tables to the fact table. 28. What is table partitioning? SAP uses fact table partitioning to improve performance; you can partition only on 0CALMONTH or 0FISCPER. 29. How many extra partitions are created, and why? Usually 2 extra partitions are created, to accommodate data before the begin date and after the end date. 30. Can you partition a cube which already has data? No; the cube must be empty to do this. One workaround is to make a copy of cube A as cube B, export the data from A into B using an export DataSource, empty cube A, create the partitions on A, re-import the data from B, and delete cube B. 31. What is the transaction for the Administrator Workbench? RSA1. 32. What is a source system? Any system that sends data to BW, such as R/3, a flat file, an Oracle database or an external system. 33. What is a DataSource? The source that sends data to a particular InfoSource in BW; for example, the 0CUSTOMER_ATTR DataSource supplies attributes to 0CUSTOMER from R/3. 34. What is an InfoSource? A group of logically related objects; for example, the 0CUSTOMER InfoSource contains data related to customers, with attributes like customer number, address, phone number, etc. 35. What are the types of InfoSources? Transaction data, attributes, texts and hierarchies. 36. What is the communication structure? An independent structure created from the InfoSource; it is independent of the source system/DataSource. 37. What are transfer rules? The transformation rules for data from the source system to the InfoSource/communication structure. 38. What is the global transfer rule? A transfer routine (ABAP) defined at the InfoObject level; it is common for all source systems. 39. What are the options available in the transfer rules? Assign an InfoObject, assign a constant, an ABAP routine or a formula (from version 3.x). Examples: a. assign InfoObject: direct transfer, no transformation b. constant: for example, if you are loading data for a single country from a flat file, you can make country a constant and assign the value c. ABAP routine: for example, when you want to do some complex string manipulation; assume you get a flat file from a legacy system where the cost center is buried in a field and you have to "massage" the data to extract it; in this case use ABAP code d. formula, for simple calculations: for example, to convert all lowercase characters to uppercase, use the TOUPPER formula. 40. Give some important formulas available: CONCATENATE, SUBSTRING, CONDENSE, LEFT/RIGHT (n characters), L_TRIM, R_TRIM, REPLACE, date routines like DATECONV, DATE_WEEK, ADD_TO_DATE, DATE_DIFF, and logical functions like IF and AND. 41. When you write ABAP code for a transfer rule, what are the important variables you use? (See the hedged sketch after this list.) a. RESULT: receives the result of the ABAP code b. RETURNCODE: set this to 0 if everything is OK; otherwise the record is


skipped c. ABORT: set this to a value other than 0 to abort the entire package. 42. What is the process of replication? It copies the DataSource structures from R/3 into BW. 43. What is an update rule? An update rule defines the transformation of data from the communication structure to the data targets; it is independent of the source systems/DataSources. 44. What are the options in update rules? a. one-to-one move of InfoObjects b. constant c. lookup of master data attributes d. formula e. routine (ABAP) f. initial value. 45. What are the special conversions for time in update rules? Time dimensions are converted automatically; for example, if the cube contains calendar month and your transfer structure contains a date, the date is converted to calendar month automatically. 46. What is the time distribution option in an update rule? It distributes data over time; for example, if the source contains calendar week and the target contains calendar day, the data is split across the calendar days. Here you can select either the normal calendar or a factory calendar. 47. What is the return table option in update rules for key figures? Usually an update rule sends one record to the data target; using this option you can send multiple records. For example, if you receive the total telephone expenses for a cost center, you can use this to return the telephone expenses per employee (by dividing the total expenses by the number of employees in the cost center), creating a cost record for each employee in the ABAP code. 48. What is the start routine? The first step in the update process is to call the start routine; use it to fill global variables to be used in the update routines. 49. How would you optimize the dimensions? Use as many dimensions as possible for performance improvement. For example, assume you have 100 products and 200 customers: if you put both in one dimension, the size of the dimension can reach 20,000 rows; if you make individual dimensions, the total number of rows is only 300. Even if you put more than one characteristic per dimension, do the math considering the worst-case scenario and decide which characteristics may be combined in a dimension. 50. What is the conversion routine for units and currencies in the update rule? With this option you can write ABAP code for unit/currency conversion; if you enable the flag, the unit of the key figure appears in the ABAP code as an additional parameter. For example, you can use this to convert a quantity in pounds to kilograms. Pasted from <http://gleez.com/sap-bw/sap-bw-interview-questions/sap-interview-questions-part-2>
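A minimal sketch of a 3.x transfer rule routine using the three variables from item 41; TRAN_STRUCTURE is the generated input parameter, and the legacy source field ZZLEGACY and the cost-center parsing are hypothetical.

* Derive a cost center from a legacy text field; skip empty records.
DATA: l_text(20) TYPE c.

l_text = tran_structure-zzlegacy.      " hypothetical source field
TRANSLATE l_text TO UPPER CASE.

IF l_text IS INITIAL.
  RETURNCODE = 4.                      " <> 0: skip this record
ELSE.
  RESULT     = l_text(10).             " first 10 characters = cost center
  RETURNCODE = 0.
ENDIF.
* ABORT <> 0 would cancel the entire data package.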


1. What is table partitioning? A: SAP uses fact table partitioning to improve performance; you can partition only on 0CALMONTH or 0FISCPER. 2. What are the options available in the transfer rules, and when ABAP code is required in a transfer rule, what important variables can you use? A: Assign an InfoObject, assign a constant, an ABAP routine or a formula; in a routine you use RESULT, RETURNCODE and ABORT. 3. How would you optimize the dimensions? A: Use as many dimensions as possible for performance improvement. Example: assume you have 100 products and 200 customers; if you make one dimension for both, the size of the dimension can reach 20,000 rows, whereas individual dimensions give a total of only 300 rows. Even if you put more than one characteristic per dimension, do the math considering the worst-case scenario and decide which characteristics may be combined in a dimension. 4. What are the conversion routines for units and currencies in the update rule? A: Time dimensions are converted automatically; for example, if the cube contains calendar month and your transfer structure contains a date, the date is converted to calendar month automatically. 5. Can you make an InfoObject an InfoProvider, and why? A: Yes; when you want to report on characteristics or master data, you can make them InfoProviders. For example, you can make 0CUSTOMER an InfoProvider and do BEx reporting on 0CUSTOMER: right-click on the InfoArea and select 'Insert characteristic as data target'. 6. What are the steps to unload non-cumulative cubes? A: 1. initialize the opening balance in R/3 (S278) 2. activate extract structure MC03BF0 for DataSource 2LIS_03_BF 3. set up historical material documents in R/3 4. load the opening balance using DataSource 2LIS_40_S278 5. load the historical movements and compress without a marker update 6. set up the V3 update 7. load deltas using 2LIS_03_BF. 7. Give a step-by-step approach to archiving cubes. A: 1. double-click on the cube (or right-click and choose Change) 2. Extras → select archiving 3. choose the fields for selection (like 0CALDAY, 0CUSTOMER, etc.) 4. define the file structure (maximum file size and maximum number of data objects) 5. select the folder (logical file name) 6. select the delete options (not scheduled, start automatically, or after event) 7. activate the cube; the cube is then ready for archiving. 8. What are the load processes and post-processing options? A: InfoPackage, read PSA and update data target, save hierarchy, update ODS object data, data export (open hub), delete overlapping requests. 9. What are the data target administration tasks? A: Delete index, generate index, construct database statistics, initial fill of new aggregates, roll-up of filled aggregates, compression of the InfoCube, activate ODS, complete deletion of data target contents. 10. What are the parallel processes that can have locking problems? A: 1. hierarchy/attribute change run 2. loading master data for the same InfoObject (for example, avoid loading master data from different source systems at the same time) 3. rolling up the same InfoCube 4. selective deletion of an InfoCube/ODS and parallel loading 5. activation or deletion of an ODS object while loading in parallel. 11. How would you convert an InfoPackage group into a process chain? A: Double-click on the InfoPackage group, click on the 'Process Chain Maint.' button and type in a name and description; the individual InfoPackages are inserted automatically. 12. How do you transform Open Hub data? A: Using a BAdI. 13. What data loading tuning can one do? A: 1. watch the ABAP code in transfer and update rules 2. load-balance on different servers


3. indexes on source tables 4. use fixed-length files if you load data from flat files, and put the file on the application server 5. use content extractors 6. use the 'PSA and data target in parallel' option in the InfoPackage 7. start several InfoPackages in parallel with different selection options 8. buffer the SID number ranges if you load a lot of data at once 9. load master data before loading transaction data. 14. What is ODS? A: Operational Data Store; you can overwrite the existing data in an ODS. 15. What is the use of BW Statistics? A: A set of cubes delivered by SAP, used to measure performance for queries, data loading, etc.; it also shows the usage of aggregates and the cost associated with them. 16. What are the options when defining aggregates? A: * - group according to characteristic values, H - hierarchy, F - fixed value, blank - none. 17. How will you debug errors with the SAP GUI (like ActiveX errors, etc.)? A: Run the BEx Analyzer → Business Explorer menu item → Installation Check; this shows an Excel sheet with a Start button. Click on it; this verifies the GUI installation, and if you find any errors, reinstall or fix it. 18. When you write a user exit for variables, what does I_STEP do? A: I_STEP is used in the ABAP code as a conditional check on the processing step (see the customer exit sketch earlier in this document). 19. How do you pass a query result from a master query to a child query? A: If you select a characteristic variable with the replacement path processing type, it uses the results of the previous query. For example, assume query Q1 displays the top 10 customers, and query Q2 takes 0CUSTOMER as a variable with replacement path from Q1 and displays a detailed report on the customer list passed from Q1. 20. How do you define exception reporting in the background? A: Use the Reporting Agent in the AWB for this: click on the exceptions icon on the left, give a name and description, and select the exception from the query (drag and drop). 21. What kind of tools are available to monitor the overall query performance? 22. What can I do if the database proportion is high for all queries? 23. How do you call a BW query from an ABAP program? 24. What are the extractor types? 25. What are the steps to extract data from R/3? 26. What are the steps to configure third-party (BAPI) tools? 27. What are the delta options available when you load from a flat file? 28. What are the tables filled when you select ABAP? 29. What are the steps to create classes in SAP BW? 30. How do you convert from LIS to LO extraction? 31. How do you keep yourself updated on major SAP developments? Pasted from <http://www.sap-basis-abap.com/bw/sap-bw-interview-questions.htm>
SAP BW Questions - Some Real Questions
1. Differences between 3.0 and 3.5 2. Differences between 3.5 and BI 7.0 3. Can you explain a life cycle in brief? 4. Difference between a table and a structure


5. Steps of LO 6. Steps of LIS 7. Steps of generic extraction 8. What is an index, and how do you increase performance using indexes? 9. How do you load deltas into an ODS and a cube? 10. Examples of errors while loading data, and how do you resolve them? 11. How do you maintain the work history until a ticket is closed? 12. What is reconciliation? 13. What methodology do you use before an implementation? 14. What are the roles and responsibilities in an implementation, and also in support?
Major differences between SAP BW 3.5 and SAP BI 7.0: 1. In InfoSets you can now include InfoCubes as well. 2. The Remodeling transaction helps you add new key figures and characteristics, and handles historical data as well without much hassle; this is only for InfoCubes. 3. The BI Accelerator (for now, only for InfoCubes) helps reduce query run time by almost a factor of 10-100. The BI Accelerator is a separate box and costs extra; vendors for it are HP and IBM. 4. Monitoring has been improved with a new portal-based cockpit, which means you need an EP resource on your project to implement the portal. 5. Search functionality has improved: you can search for any object, unlike in 3.5. 6. Transformations are in and routines are passé; yes, you can always revert to the old transactions too. 7. The Data Warehousing Workbench replaces the Administrator Workbench. 8. Functional enhancements have been made to the DataStore object: a new type of DataStore object, and enhanced settings for performance optimization of DataStore objects. 9. The transformation replaces the transfer and update rules. 10. New authorization objects have been added. 11. Remodeling of InfoProviders supports you in Information Lifecycle Management. 12. The DataSource: there is a new object concept for the DataSource, and the options for direct access to data have been enhanced; from BI, remote activation of DataSources in SAP source systems is possible. 13. There are functional changes to the Persistent Staging Area (PSA). 14. BI supports real-time data acquisition. 15. SAP BW is now formally known as BI (part of NetWeaver 2004s) and implements Enterprise Data Warehousing (EDW). The new features/major differences include: a) ODS renamed to DataStore b) inclusion of the write-optimized DataStore, which does not have a change log and whose requests do not need activation c) unification of transfer and update rules d) introduction of the "end routine" and "expert routine" e) push of XML data into the BI system (into the PSA) without the Service API or delta queue f) introduction of the BI Accelerator, which significantly improves performance g) load through the PSA has become a must (I am not too sure about this; it looks like we no longer have the option to bypass the PSA). 16. Yes, load through the PSA has become mandatory: you can't skip it, and there is also no IDoc transfer method in BI 7.0. The DTP (Data Transfer Process) replaces the transfer and update


rules. Also, in the transformation we can now write start routines, expert routines and end routines during the data load. New features in BI 7 compared to earlier versions:

i. New data flow capabilities, such as the Data Transfer Process (DTP) and Real-time Data Acquisition (RDA). ii. Enhanced and graphical transformation capabilities, such as drag-and-relate options. iii. One level of transformation; this replaces the transfer rules and update rules. iv. Performance optimization, including the new BI Accelerator feature. v. User management (including a new concept for analysis authorizations) for more flexible BI end-user authorizations. ===================================== 2. A complete life-cycle implementation of SAP BW includes data modeling, data extraction, data loading, reporting and support. ============================== 3A. When people refer to a full life cycle, it is the ASAP methodology, which SAP recommends for all its projects: 1. Project Preparation 2. Business Blueprint 3. Realization 4. Fit-Gap Analysis 5. Go-Live Phase. Normally, in the first phase all the management sits together in discussions. In the second phase you get a functional spec, and based on that you write a technical spec. In the third phase you actually implement the project, and finally, after testing, you deploy it to production (go-live). You might get involved in the realization phase; if it is a support project, you come into the picture only after successful deployment. =========================== 5A. LO extraction is a one-stop shop for all the logistics extractions. LO DataSources come as part of the Business Content in BW; we need to transfer them from the Business Content and then activate them from the delivered D version to the active A version. The transactions involved: RSA5 — transfer the desired DataSource; LBWE — maintain the extract structure; LBWG — delete the setup tables (why? see below); OLI*BW — statistical setup (initialization); LBWQ — delete the extractor queue; SM13 — delete the update queue; LBWF — display the log. It is always recommended to use the queued delta as the update method, since it uses a collective run to schedule all the changed records; once the initial run is over, set the queued delta collective run to periodic for further delta loads. Why delete the setup tables? We need to delete the data that is already in them, and also because we change the extract structure by adding the required fields from the R/3 communication structures; we can also select and hide fields here (all the fields in blue are mandatory). If the required fields are not available, we go for DataSource enhancements. ================================= 6A. LIS uses information structures to extract the data; it comes under application-specific, customer-generated extraction. The information structure range is 000 to 999: 000-499 are for SAP structures and 500-999 are customer defined. We need to consider the two cases separately: SAP defined and customer defined.


Let's see the SAP-defined case first: in transaction LBW0, give the information structure and select the display settings option to see the status. If you select the 'Generate DataSource' option, it throws an error saying that you cannot create in the SAP name range. If you select the option to set up the LIS environment, it creates two tables and one structure with the naming convention 2LIS_<application no>_BIW1, 2LIS_<application no>_BIW2 and 2LIS_<application no>_BIWS. The two tables are used interchangeably to enable the delta records; which one is active can be seen in table TMCBIW. Then go to LBW1 to change the version, and LBW2 to set the 'no update' update method. Now do the full load; after that, go to LBW1 to change the version, then go to LBW2 to set up the delta update and the periodic job. Now you can load the delta updates. For the customer-defined case: you need to create the information structure yourself: MC18 — create the IS; MC19 — change; MC20 — display; MC21 to MC26 — create the update rules. Then in LBW0 give the IS name and select 'Setup LIS environment'; it creates the two tables and the structure. You can use OMO1 to fill the tables, then do the full upload, and then set up the delta by selecting the delta setup. You can control whether the delta is enabled using the activate/deactivate delta option. In both cases, while migrating the data you need to lock the setup tables (SE14) to prevent users from entering transactions, and after completion you need to unlock them. But LO is preferred over LIS in all aspects: LO provides information structures up to the required level of detail, enhanced performance, and deletion of the setup tables after the update, which we never do in LIS. Pasted from <http://www.sap-basis-abap.com/bw/sap-bw-questions.htm> 7A. We opt for generic extraction whenever the desired DataSource is not available in Business Content, or if it is already used and we need to regenerate it. When you want to extract data from a table, view, InfoSet or function module, you use generic extraction; in generic extraction we create our own DataSource and activate it. Steps: 1. the transaction is RSO2 2. give the DataSource name and assign it to a particular application component 3. give the short, medium and long descriptions (mandatory) 4. give the table/function module/view/InfoSet 5. continue, which leads to a detail screen where the Hide, Selection, Inversion and Field-Only options are available. HIDE is used to hide fields: the data is not transferred from R/3 to BW. SELECTION makes the field available in the selection screen of the InfoPackage when you schedule it. INVERSION is for key figures, which are multiplied by -1 to cancel the value. Once the DataSource is generated, you can extract data using it. Now, about the delta: we have 0CALDAY, numeric pointer, and timestamp.


0CALDAY: to be run only once a day, at the end of the day, with a safety interval of about 5 minutes. Numeric pointer: to be used for tables that only allow appending of records, with no changes (e.g. CATSDB, the HR time management table). Timestamp: using this you can run deltas as many times as you like, with an upper limit. Whenever there is a 1:1 relation you use a view, and for 1:m you use a function module. ============================ 8A. INDEX: Indexes are used to improve the performance of data retrieval while executing queries or workbooks. When we execute a query and enter values in the selection criteria, the indexes act as retrieval points for the data and fetch it faster. A common analogy: a book's index gives the exact location of each topic, so you can go straight to the right page; indexes act the same way at the data level on the BW side. ============================= 9A. Common load errors and solutions: 1. timestamp error — activate the DataSource, replicate the DataSource and load again. 2. data error in the PSA — correct the erroneous data in the PSA and load from there. 3. RFC connection failed — raise a ticket to the BASIS team to restore the connection. 4. short dump error — delete the request and load once again. Loads can fail because of: a) invalid characters b) a deadlock in the system c) a previous load failure, if the load depends on other loads d) erroneous records e) RFC connection problems (solution: raise a ticket to the BASIS team to restore the connection) f) missing master data g) no data found in the source system h) invalid characters while loading: when you load data you may get special characters like @#$%, etc., and BW throws an "invalid characters" error. Go to transaction RSKC, enter all the additional permitted characters and execute; they are stored in table RSALLOWEDCHAR. Then reload the data; you won't get the error any more, because these are now eligible characters. i) the ALEREMOTE user is locked: normally ALEREMOTE gets locked due to an SM59 RFC destination entry having an incorrect password. You should be able to get a list of all SM59 RFC destinations using ALEREMOTE by using transaction SE16 to search field RFCOPTIONS for a value of "*U=ALEREMOTE". You will also need to look for this information in any external R/3 instances that call the instance in which ALEREMOTE is getting locked. j) lowercase letters not allowed: look at your InfoObject definition (the "Lowercase letters allowed" flag). k) extraction job aborted in R/3: it might have been cancelled for running longer than expected, or cancelled by R/3 users if it was hampering performance. l) DataSource not replicated: if a new DataSource is created in the source system, you should replicate it in 'dialog' mode; during the replication you can decide whether this DataSource should be replicated as a 3.x DataSource or as the new DataSource. If you do not run the replication in 'dialog' mode, the DataSource is not replicated. m) ODS activation error: ODS activation errors occur mainly for the following reasons: 1. invalid characters (characters like #) 2. invalid data values for units/currencies, etc. 3. invalid values for the data types of characteristics and key figures


===================================

10A. A ticket is an issue or a process error that needs to be addressed. There are two types of tickets:
* Automatically generated tickets: created by the system when a process fails. For example, when a process chain fails to run, a ticket is generated that we must address to find the fault.
* Manually raised tickets: issues that the client faces, forwarded for correction or alternative action.
If you are using Remedy for tickets, unfortunately this is not possible; but it depends on the software you are using, so ask your administrator.

===========================

11A. Reconciliation is the comparison of values between the BW target data and the source-system data in R/3, JD Edwards, Oracle, ECC, SCM or SRM. In general this is done in three places: comparing InfoProvider data with R/3 data, comparing query output with R/3 or ODS data, and checking the key figure values in the InfoProvider against the key figure values in the PSA.

====================

12A. ASAP Methodology:
1. Project Preparation: the project team is identified and mobilized, the project standards are defined, and the project work environment is set up.
2. Blueprint: the business processes are defined and the business blueprint document is designed.
3. Realization: the system is configured, knowledge transfer occurs, extensive unit testing is completed, and data mappings and data requirements for migration are defined.
4. Final Preparation: final integration testing, stress testing and conversion testing are conducted, and all end users are trained.
5. Go-Live and Support: the data is migrated from the legacy systems, the new system is activated, and post-implementation support is provided.

========================

13A. Responsibilities in an implementation project. Say it is a fresh implementation of BI (or, for that matter, of SAP). First and foremost comes requirements gathering from the client. Based on the requirements you create the business blueprint, which describes the entire process from the start to the end of the implementation. After the blueprint phase is signed off, the realization phase starts, where the actual development happens. In our example, after installing the necessary software and patches for BI, we discuss with the end users who will use the system to gather inputs such as how they want a report to look and what the key performance indicators (KPIs) for the reports are; basically a question-and-answer session with the business users. After collecting this information, development happens on the development servers. When development is complete, the same objects are tested on the quality servers for bugs and errors. When all tests pass, the objects are moved to the production environment and tested again to confirm that everything works. Then comes the go-live, where actual postings are made by the users and reports are generated from those inputs, available as analytical reports for management to take decisions. The responsibilities vary depending on the requirement.
Initially the business analyst interacts with the end users and managers; based on the requirements, the software consultants do the development, the testers do the testing, and finally the go-live happens.

What tasks do we perform in a production support project?


In production support, most projects mainly work on monitoring their loads (from R/3 or non-SAP systems into the BW data targets). The details vary from project to project: some use process chains, others use event chains.

What are the different transactions that we use frequently in a production support project? Please explain them in detail.

In a production support project we generally check the loads using RSMO (load monitoring) and rectify errors there with a step-by-step analysis. The consultant needs access to the following transactions in R/3:
1. ST22 2. SM37 3. SM58 4. SM51 5. RSA7 6. SM13
Authorizations for the following transactions are required in BW:
1. RSA1 2. SM37 3. ST22 4. ST04 5. SE38 6. SE37 7. SM12 8. RSKC 9. SM51 10. RSRV

Process Chain Maintenance (transaction RSPC) is used to define, change and view process chains.
Upload Monitor: transaction RSMO, or RSRQ if the request is known.
The Workload Monitor (transaction ST03) shows important overall key performance indicators (KPIs) for system performance.
The OS Monitor (transaction ST06) gives an overview of the current CPU, memory, I/O and network load on an application server instance.
The Database Monitor (transaction ST04) checks important performance indicators in the database, such as database size, database buffer quality and database indexes.
The SQL trace (transaction ST05) records all activities on the database and lets you check long runtimes on a DB table or several similar accesses to the same data.
The ABAP runtime analysis is transaction SE30.
The Cache Monitor (accessible with transaction RSRCACHE or from RSRT) shows, among other things, the cache size and the currently cached queries. The export/import shared buffer determines the cache size; it should be at least 40 MB.

Pasted from <http://www.sap-basis-abap.com/bw/sap-bw-questions.htm>

Q1. SOME DATA IS UPLOADED TWICE INTO THE INFOCUBE. HOW DO YOU CORRECT IT?

Ans: How is that possible? If you loaded it manually twice, you can delete one load by request ID.

Q2. CAN YOU ADD A NEW FIELD AT THE ODS LEVEL?

Sure you can; an ODS is nothing but a table.

Q3. CAN A NUMBER OF DATASOURCES HAVE ONE INFOSOURCE?


Yes, of course. For example, for loading texts and hierarchies we use different DataSources but the same InfoSource.

Q4. BRIEFLY DESCRIBE THE DATA FLOW IN BW.

Data flows from the transactional system to the analytical system (BW). The DataSource on the transactional system needs to be replicated on the BW side and attached to an InfoSource and update rules.

Q5. CURRENCY CONVERSIONS CAN BE WRITTEN IN UPDATE RULES. WHY NOT IN TRANSFER RULES?

Q6. WHAT IS THE PROCEDURE TO UPDATE DATA INTO DATA TARGETS? Full and delta.

Q7. AS WE USE SnnnBIW1/SnnnBIW2 FOR DELTA UPDATES IN LIS, WHAT IS THE PROCEDURE IN THE LO COCKPIT?

There is no LIS in the LO cockpit. We have DataSources there, which can be maintained (fields appended). Refer to the white paper on LO cockpit extraction.

Q8. SIGNIFICANCE OF ODS. It holds granular data.

Q9. WHERE IS THE PSA DATA STORED? In the PSA table.

Q10. WHAT IS DATA SIZE?

The volume of data one data target holds (in number of records).

Q11. DIFFERENT TYPES OF INFOCUBES.

Basic and virtual (RemoteCube, SAP RemoteCube and MultiCube).

An InfoSet query can be made over ODS objects and InfoObjects (master data).

Q13. IF THERE ARE 2 DATASOURCES, HOW MANY TRANSFER STRUCTURES ARE THERE?

In R/3 or in BW? Two in R/3 and two in BW.

Q14. ROUTINES?

Routines exist in the InfoObject, as transfer routines, update routines and the start routine.

Q15. BRIEFLY DESCRIBE SOME STRUCTURES USED IN BEX.

Rows and columns; you can create structures.

Q16. WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?

Variable processing types: manual entry/default value, replacement path, SAP exit, customer exit, authorization.

Q17. HOW MANY LEVELS CAN YOU GO DOWN IN REPORTING?


You can drill down to any level you want using navigational attributes and jump targets.

Q18. WHAT ARE INDEXES?

Indexes are database indexes, which help in retrieving data quickly.

Q19. DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS. Refer to the documentation.

Q20. IS IT NECESSARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED? No; the delta is initialized once, and subsequent loads pick up only the changes.

Q21. WHAT IS THE SIGNIFICANCE OF KPIs? KPIs indicate the performance of a company; they are key figures.

Q22. AFTER THE DATA EXTRACTION, WHAT IS THE IMAGE POSITION?

After image (correct me if I am wrong).

Q23. REPORTING AND RESTRICTIONS.

Refer to the documentation.

Q24. TOOLS USED FOR PERFORMANCE TUNING.

The ST* transactions, number ranges, deleting indexes before the load, etc.

Q25. PROCESS CHAINS: IF YOU ARE USING THEM, HOW WILL YOU SCHEDULE DATA LOADS DAILY?

There should be a tool to run the job daily (SM37 jobs).

Q26. AUTHORIZATIONS.

Profile generator.

Q27. WEB REPORTING. What are you expecting?

Q28. CAN A CHARACTERISTIC BE AN INFOPROVIDER? CAN AN INFOOBJECT BE AN INFOPROVIDER?

Of course.

Q29. PROCEDURES FOR REPORTING ON MULTICUBES.

Refer to the help. What are you expecting? A MultiCube works on a union condition.

Q30. EXPLAIN TRANSPORTATION OF OBJECTS.

Dev ---> Q and Dev ---> P.

1. What is table partitioning?

A: SAP uses fact table partitioning to improve performance. You can partition only on 0CALMONTH or 0FISCPER.

2. What are the options available in the transfer rule, and when ABAP code is required in a transfer rule, what important variables can you use?

A: Assign an InfoObject, assign a constant, an ABAP routine, or a formula.


3. How would you optimize the dimensions? A: Define as many dimensions as possible so that each stays small. Example: assume you have 100 products and 200 customers. If you put both in one dimension, the dimension can grow to 20,000 rows (every combination); if you make individual dimensions, the total number of rows is only 300. Even if you put more than one characteristic per dimension, do the math for the worst-case scenario and then decide which characteristics may be combined in a dimension.

4. What are the conversion routines for units and currencies in the update rule? A: Time dimensions are converted automatically. Example: if the cube contains calendar month and your transfer structure contains a date, the date is converted to calendar month automatically.

5. Can you make an InfoObject an InfoProvider, and why?

A: Yes. When you want to report on characteristics or master data, you can make them InfoProviders. Example: you can make 0CUSTOMER an InfoProvider and do BEx reporting on 0CUSTOMER; right-click the InfoArea and select 'Insert characteristic as data target'.

6. What are the steps to load non-cumulative cubes?

A:
1. Initialize the opening balance in R/3 (S278).
2. Activate extract structure MC03BF0 for DataSource 2LIS_03_BF.
3. Set up the historical material documents in R/3.
4. Load the opening balance using DataSource 2LIS_40_S278.
5. Load the historical movements and compress without marker update.
6. Set up the V3 update.
7. Load deltas using 2LIS_03_BF.

7. Give a step-by-step approach to archiving a cube.

A:
1. Double-click the cube (or right-click and choose Change).
2. Extras -> Select archival.
3. Choose the fields for selection (such as 0CALDAY, 0CUSTOMER, etc.).
4. Define the file structure (maximum file size and maximum number of data objects).
5. Select the folder (logical file name).
6. Select the delete options (not scheduled, start automatically, or after event).
7. Activate the cube.
8. The cube is ready for archival.

8. What are the load processes and post-processing options?

A: InfoPackage, read PSA and update data target, save hierarchy, update ODS object data, data export (open hub), delete overlapping requests.

9. What are the data target administration tasks?

A: Delete index, generate index, construct database statistics, initial fill of new aggregates, roll-up of filled aggregates, compression of the InfoCube, activate ODS, complete deletion of data target contents.

10. What are the parallel processes that can have locking problems? A:
1. Hierarchy/attribute change run.
2. Loading master data for the same InfoObject; for example, avoid loading master data from different source systems at the same time.
3. Rolling up the same InfoCube.
4. Selective deletion of an InfoCube/ODS while loading it in parallel.
5. Activation or deletion of an ODS object while loading in parallel.


11. How would you convert an InfoPackage group into a process chain?

A: Double-click the InfoPackage group, click the 'Process Chain Maint.' button and type in a name and description; the individual InfoPackages are inserted automatically.

12. How do you transform open hub data?

A: Using a BAdI.

13. What data-loading tuning can one do?

A:
1. Watch the ABAP code in transfer and update rules.
2. Balance the load across different servers.
3. Put indexes on the source tables.
4. Use fixed-length files if you load data from flat files, and put the file on the application server.
5. Use content extractors.
6. Use the 'PSA and data target in parallel' option in the InfoPackage.
7. Start several InfoPackages in parallel with different selection options.
8. Buffer the SID number ranges if you load a lot of data at once.
9. Load master data before loading transaction data.

14. What is an ODS?

A: Operational Data Store. You can overwrite the existing data in an ODS.

15. What is the use of BW statistics?

A: The set of cubes delivered by SAP is used to measure performance for queries, data loading, etc. It also shows the usage of aggregates and the cost associated with them.

16. What are the options when defining aggregates?

A:
* - group according to characteristic values
H - hierarchy
F - fixed value
Blank - none

17. How will you debug errors with the SAP GUI (like ActiveX errors etc.)?

A: Run the BEx Analyzer -> Business Explorer menu -> Installation check; this shows an Excel sheet with a Start button. Click it; this verifies the GUI installation. If you find any errors, either reinstall or fix them.

18. When you write a user exit for variables, what does I_STEP do? A: I_STEP tells the exit at which stage of variable processing it is being called (before the variable popup, after it, or during validation of all entries), so it is used as a conditional check in the ABAP code.
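As a concrete illustration, here is a minimal sketch of a customer-exit variable (enhancement RSR00001, function EXIT_SAPLRRS0_001, typically implemented in include ZXRSRU01); the variable name ZVAR_DATE and the default-to-today logic are assumptions for the example:

* I_STEP = 1 runs before the variable popup, I_STEP = 2 after it,
* and I_STEP = 3 validates all variable entries together.
DATA: l_s_range TYPE rsr_s_rangesid.

CASE i_vnam.
  WHEN 'ZVAR_DATE'.                " hypothetical variable name
    IF i_step = 2.                 " fill the variable after the popup
      CLEAR l_s_range.
      l_s_range-sign = 'I'.
      l_s_range-opt  = 'EQ'.
      l_s_range-low  = sy-datum.   " default the variable to today
      APPEND l_s_range TO e_t_range.
    ENDIF.
ENDCASE.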

19. How do you replace a query result from a master query in a child query?

A: If you select a characteristic value with a replacement-path variable, it uses the results of the previous query. For example, assume query Q1 displays the top 10 customers; query Q2 receives those top 10 customers through a replacement-path variable on InfoObject 0CUSTOMER and displays a detailed report on the customer list passed from Q1.

20. How do you define exception reporting in the background?

A: Use the Reporting Agent for this, from the AWB. Click the exception icon on the left; give a name and description; then select the exception from the query for reporting (drag and drop).


21. What kind of tools are available to monitor the overall Query Performance?

22. What can I do if the database proportion is high for all queries?

23. How to call a BW Query from an ABAP program?

24. What are the extractor types?

25. What are the steps to extract data from R/3? 26. What are the steps to configure third party (BAPI) tools?

27. What are the delta options available when you load from a flat file?

28. What are the tables filled when you select ABAP?

29. What are the steps to create classes in SAP BW?

30. How do you convert from LIS to LO extraction? 31. How do you keep yourself updated on major SAP developments? Give an analytical ...

Pasted from <http://www.saptechies.com/sap-bw-interview-questions/>


What are the extractor types?
• Application-specific:
o BW Content extractors: FI, HR, CO, SAP CRM, LO cockpit
o Customer-generated extractors: LIS, FI-SL, CO-PA
• Cross-application (generic extractors):
o DB view, InfoSet, function module

2. What are the steps involved in LO extraction?
• The steps are:
o RSA5: select the DataSources
o LBWE: maintain DataSources and activate extract structures
o LBWG: delete setup tables
o OLI*BW: fill setup tables
o RSA3: check the extraction and the data in the setup tables
o LBWQ: check the extraction queue
o LBWF: log for LO extract structures
o RSA7: BW delta queue monitor

3. How do you create a connection with LIS InfoStructures?
• LBW0: connecting LIS InfoStructures to BW

4. What is the difference between an ODS, an InfoCube and a MultiProvider?
• ODS: provides granular data, allows overwrite, and stores data in transparent tables; ideal for drilldown and RRI.
• Cube: follows the star schema; we can only append data; ideal for primary reporting.
• MultiProvider: holds no physical data. It gives access to data from different InfoProviders (cube, ODS, InfoObject) and is also preferred for reporting.

5. What are start routines, transfer routines and update routines?
• Start routine: run for each data package after the data has been written to the PSA and before the transfer rules are executed. It allows complex computations for a key figure or a characteristic. It has no return value. Its purpose is to execute preliminary calculations and to store them in global data structures, which can be accessed in the other routines. The entire data package, in transfer structure format, is passed to the routine as a parameter.
• Transfer/update routines: defined at the InfoObject level, similar to the start routine, and independent of the DataSource. We can use them to define global data and global checks.

6. What is the difference between a start routine and an update routine; when, how and why are they called?
• The start routine can be used to access the whole data package of the InfoPackage, while update routines are used while updating the data targets.

7. What is the table that is used in start routines?
• The table structure is always the structure of the ODS or InfoCube; for example, for an ODS it is the active table structure.

8. Explain how you used start routines in your project.
• Start routines are used for mass processing of records: all the records of the data package are available together, so we can process them as a set. In one scenario we wanted to apply size percentages to forecast data. For example, if material M1 is forecast at 100 in May, then after applying the size split (Small 20%, Medium 40%, Large 20%, Extra Large 20%) we wanted four records in place of the single incoming record. This was achieved in the start routine (a sketch follows below).

9. What are return tables?
• When we want to return multiple records instead of a single value, we use the return table in an update routine. Example: if we have the total telephone expense for a cost center, using a return table we can split it into expense per employee.

10. How do the start routine and return table work together?
• The return table is used to return values following the execution of the start routine.
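A minimal sketch of the size-split start routine described in point 8, assuming a 3.x update-rule start routine in which DATA_PACKAGE holds the incoming records; the field names /BIC/ZSIZE and QUANTITY and the 20/40/20/20 split are illustrative assumptions:

* Explode each forecast record into four size-split records
* (Small 20%, Medium 40%, Large 20%, Extra Large 20%).
TYPES: BEGIN OF ty_split,
         size(2) TYPE c,
         pct     TYPE p DECIMALS 2,
       END OF ty_split.

DATA: lt_split TYPE STANDARD TABLE OF ty_split WITH HEADER LINE,
      lt_out   LIKE DATA_PACKAGE OCCURS 0 WITH HEADER LINE.

lt_split-size = 'S'.  lt_split-pct = '0.20'. APPEND lt_split.
lt_split-size = 'M'.  lt_split-pct = '0.40'. APPEND lt_split.
lt_split-size = 'L'.  lt_split-pct = '0.20'. APPEND lt_split.
lt_split-size = 'XL'. lt_split-pct = '0.20'. APPEND lt_split.

LOOP AT DATA_PACKAGE.
  LOOP AT lt_split.
    lt_out = DATA_PACKAGE.
    lt_out-/bic/zsize = lt_split-size.               " assumed field
    lt_out-quantity   = DATA_PACKAGE-quantity * lt_split-pct.
    APPEND lt_out.
  ENDLOOP.
ENDLOOP.

* Replace the incoming package with the exploded records.
REFRESH DATA_PACKAGE.
DATA_PACKAGE[] = lt_out[].
ABORT = 0.                       " 0 = continue processing this package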


11. What is the difference between V1, V2 and V3 updates?
• V1 update: synchronous; the statistics update is carried out at the same time as the document update (in the application tables).
• V2 update: asynchronous; the statistics update and the document update run as different tasks.
o V1 and V2 do not need scheduling.
• Serialized V3 update: the V3 collective update must be scheduled as a job (via LBWE). Document data is collected in the order it was created and transferred into BW as a batch job. The transfer sequence may not be the same as the order in which the data was created in all scenarios. V3 processes only the update data that was successfully processed by the V2 update.

12. What is compression?
• Compression moves data from the F fact table into the E fact table, deleting the request IDs in the process; this saves space.

13. What is roll-up?
• Roll-up loads new data packages (requests) into the InfoCube aggregates. If we have not performed a roll-up, the new InfoCube data is not available in reports that run on the aggregate.

14. What is table partitioning, and what are the benefits of partitioning an InfoCube?
• Partitioning is the method of dividing a table in a way that enables quick access. SAP uses fact table partitioning to improve performance; we can partition only on 0CALMONTH or 0FISCPER. Partitioning helps reports run faster because data is read from the relevant partitions only, and table maintenance becomes easier. Oracle, Informix and IBM DB2/390 support table partitioning, while SAP DB, Microsoft SQL Server and IBM DB2/400 do not.

15. How many extra partitions are created, and why?
• Two: one for dates before the begin date and one for dates after the end date.

16. What are the options available in a transfer rule?
• InfoObject
• Constant
• Routine
• Formula

17. How would you optimize the dimensions?
• Define as many dimensions as possible, and take care that no single dimension grows beyond about 20% of the fact table size.

18. What are conversion routines for units and currencies in the update rule?
• With this option we can write ABAP code for unit/currency conversion. If we enable this flag, the unit of the key figure appears in the ABAP code as an additional parameter. For example, we can convert units from pounds to kilos.

19. Can an InfoObject be an InfoProvider; how and why?
• Yes, when we want to report on characteristics or master data. Right-click the InfoArea and select 'Insert characteristic as data target'. For example, we can make 0CUSTOMER an InfoProvider and report on it.

20. What is the Open Hub Service?
• The Open Hub Service enables us to distribute data from an SAP BW system into external data marts, analytical applications and other applications, with controlled distribution across several systems. The central object for exporting data is the InfoSpoke, in which we define the source and the target object for the data. BW thereby becomes the hub of an enterprise data warehouse; the distribution of data becomes transparent through central monitoring of the distribution status in the BW system.

21. How do you transform open hub data? • Using a BAdI we can transform open hub data according to the destination's requirements.

22. What is an ODS?


• Operational Data Store, used for detailed storage of data. We can overwrite data in the ODS; the data is stored in transparent tables.

23. What are BW statistics, and what are they used for?
• A group of Business Content InfoCubes used to measure performance for query and load monitoring. They also show the usage of aggregates, OLAP and warehouse management.

24. What are the steps to extract data from R/3?
• Replicate DataSources
• Assign InfoSources
• Maintain the communication structure and transfer rules
• Create an InfoPackage
• Load data

25. What are the delta options available when you load from a flat file?
• The three options for delta management with flat files:
o Full upload
o New status for changed records (ODS object only)
o Additive delta (ODS object and InfoCube)

What is a star schema? In the star schema model, the fact table is surrounded by dimension tables. The fact table is usually very large, containing millions to billions of records; the dimension tables are comparatively small, with a few thousand to a few million records. In practice, the fact table holds transactional data and the dimension tables hold master data. The dimension tables are specific to one fact table, i.e. they are not shared across fact tables. When another fact table needs the same product dimension data, a new dimension table specific to that fact table is needed. This creates data management problems such as master data redundancy, because the very same product is duplicated in several dimension tables instead of being shared from one single master data table. This problem is solved in the extended star schema.

What is the extended star schema? In the extended star schema, the BW variant of the star schema, the dimension table does not contain master data; it is stored externally in the master data tables (texts, attributes, hierarchies). A characteristic in the dimension table points to the relevant master data through an SID table, and the SID table points to the characteristic's attributes, texts and hierarchies. This multi-step navigation adds some overhead when executing a query, but the benefit of this model is that all fact tables (InfoCubes) share common master data tables. Moreover, the SID table concept allows users to implement multi-language and multi-hierarchy OLAP environments, and it also supports slowly changing dimensions.

What is a slowly changing dimension? A dimension that changes over time is called a slowly changing dimension.

What is a fact table? The fact table is the collection of facts and relations, that is, foreign keys to the dimensions; it holds the transactional data.

What is a dimension table? A dimension table is a collection of logically related descriptive attributes, that is, characteristics.


What is modeling? It is the art of designing the database. The design of the DB depends on the schema, and the schema is defined as the representation of the tables and their relationships.

What is an InfoCube? An InfoCube is structured as an (extended) star schema, where a fact table is surrounded by dimension tables that are linked through DIM IDs. Data-wise, the cubes hold aggregated data.

How many fact tables does an InfoCube contain? Two: the E table (compressed data) and the F table (uncompressed requests).

What is the maximum number of dimensions in an InfoCube? 16 (3 SAP-defined and 13 customer-defined).

What is the minimum number of dimensions in an InfoCube? 4 (3 SAP-defined and 1 customer-defined).

What are the 3 SAP-defined dimensions?
1. Data packet dimension (P): contains 3 characteristics: (a) request ID, (b) record type, (c) change run ID.
2. Time dimension (T): contains time characteristics such as 0CALMONTH, 0CALDAY, etc.
3. Unit dimension (U): contains amount- and quantity-related units.

What is the maximum number of key figures? 233.

What is the maximum number of characteristics? 248.

What is the model of the InfoCube? The extended star schema.

What are the data types for a characteristic InfoObject? There are 4 types: 1. CHAR 2. NUMC 3. DATS 4. TIMS.

How do you write a date in BW? YYYYMMDD.


1. What is SAP? SAP: Systems, Applications and Products in Data Processing.

2. What is a transaction code? A transaction code is a four-character command that tells the system the location of a task.

3. What does /n stand for? End the current task and go to a new task.

4. What does /o stand for? Create a new session and go to a new task without closing the prior session.

5. What is BW? The SAP Business Information Warehouse allows you to analyze data from operative SAP applications as well as all other business applications and external data sources such as databases, online services and the Internet.

6. What is an InfoObject? Business evaluation objects (for example, customers and sales) are referred to as InfoObjects in BW. InfoObjects are the smallest components in BW. They are used to structure the information that is needed to create larger BW objects, such as InfoCubes or ODS objects.

7. What is a DataSource? DataSources are flat data structures containing data that logically belongs together. They are responsible for extracting and staging data from various source systems.

8. What is an InfoSource? InfoSources are groups of InfoObjects that belong together from a business point of view. An InfoSource contains transaction data, obtained from the transactions of online transactional processing (OLTP), and master data such as the addresses of customers and organizations, which remain unchanged for longer periods of time. An InfoSource is a quantity of information that logically belongs together, summarized into a single unit. InfoSources contain either transaction data or master data (attributes, texts and hierarchies).

1. What is ERP? ERP is a package with the techniques and concepts for the integrated management of a business as a whole, aiming at effective use of management resources and improved efficiency of the enterprise. Initially ERP was targeted at the manufacturing industry, mainly for planning and managing core business such as production and finance. As ERP packages grew, ERP software came to cover the basic processes of companies from manufacturing to small shops, with the goal of integrating information across the company.

2. What are the different types of ERP? SAP, BAAN, JD Edwards, Oracle Financials, Siebel, PeopleSoft. Among all these, most companies have implemented or are trying to implement SAP because of its advantages over the other ERP packages.

3. What is SAP? SAP is the name of the company, founded in 1972 under the German name Systems, Applications, and Products in Data Processing; it is the leading ERP (Enterprise Resource Planning) software package.

4. Explain the concept of 'Business Content' in the SAP Business Information Warehouse. Business Content is a pre-configured set of role- and task-relevant information models based on consistent metadata in the SAP Business Information Warehouse. Business Content provides selected roles within a company with the information they need to carry out their tasks. These information models essentially contain roles, workbooks, queries, InfoSources, InfoCubes, key figures, characteristics, update rules and extractors for SAP R/3, mySAP.com business applications and other selected applications.


5. Why do you usually choose to implement SAP? There are a number of technical reasons why companies plan to implement SAP: it is highly configurable, handles data securely, minimizes data redundancy, maximizes data consistency, lets you capitalize on economies of scale (e.g. purchasing), and offers tight cross-functional integration.

6. Can BW run without an SAP R/3 implementation? Certainly. You can run BW without an R/3 implementation. You can use the pre-defined Business Content in BW with your non-SAP data: simply map the transfer structures associated with the BW data sources (InfoCubes, ODS tables) to the inbound data files, or use a third-party tool to connect your flat files and other data sources and load the data into BW. Several third-party ETL products, such as Acta, Informatica and DataStage, have been certified to load data into BW.

7. What is IDES? International Demonstration and Education System, a sample application provided for faster learning and implementation.

8. What is WF and its importance? Business Workflow: a tool for automatic control and execution of cross-application processes. It coordinates the persons involved, the work steps required, and the data (business objects) that needs to be processed. The main advantage is a reduction in throughput times and in the cost of managing business processes; transparency and quality are enhanced by its use.

9. What is SAP R/3? A third-generation set of highly integrated software modules that performs common business functions based on multinational leading practice. It handles any enterprise, however diverse its operations, even spread over the world. In the R/3 system, the three servers (presentation, application and database) can be located on different systems.

10. What are the presentation, application and database servers in SAP R/3? The application layer of an R/3 system is made up of the application servers and the message server. Application programs in an R/3 system run on the application servers. The application servers communicate with the presentation components, the database, and also with each other, using the message server. All the data is stored in a centralized server; this server is called the database server.

11. What should be the approach for writing a BDC program? Convert the legacy system data to a flat file, and read the flat file into an internal table. Transfer the data into the SAP system ('SAP data transfer') using either Call Transaction (write the program explicitly) or sessions (sessions are created and processed; on success the data is transferred).

12. Explain Open SQL vs Native SQL. ABAP Native SQL allows you to include database-specific SQL statements in an ABAP program; most ABAP programs containing database-specific SQL statements do not run with different databases. If different databases are involved, use Open SQL. To execute Native SQL in an ABAP program, use the statement EXEC. Open SQL (a subset of standard SQL statements) allows you to access all database tables available in the R/3 system, regardless of the manufacturer. To avoid conflicts between database tables and to keep ABAP programs independent of the database system used, SAP has generated its own set of SQL statements, known as Open SQL. (A minimal sketch contrasting the two follows this list.)

13. What are datasets?
- Sequential files (processed on the application server) are called datasets; they are used for file handling in SAP.

14. What are internal tables, check tables, value tables and transparent tables? Internal table: a standard data-type object that exists only during the runtime of the program. Check table: checking at field level. Value table: checking at domain level; e.g. table SCARR is the check table for CARRID. Transparent table: exists with the same structure in both the dictionary and the database, with exactly the same data and fields.

15. What are the major benefits of reporting with BW over R/3? Would it be sufficient just to web-enable R/3 reports? Performance: heavy reporting alongside regular OLTP transactions can produce a lot of load on both R/3 and the database (CPU, memory, disks, etc.); just look at the load on your system during month end, quarter end or year end, and imagine that occurring even more frequently. Data analysis: BW uses data warehouse and OLAP concepts for storing and analyzing data, whereas R/3 was designed for transaction processing. With a lot of work you can get the same analysis out of R/3, but it is usually easier from BW.

16. How can an ERP such as SAP help a business owner learn more about how the business operates? To use an ERP system, a business person must understand the business processes and how they work together from one functional area to the other. This knowledge gives a much deeper understanding of how the business operates. Using SAP as a tool to learn about ERP systems requires that people understand the business processes and how they integrate.


17. What is the difference between OLAP and data mining? OLAP (online analytical processing) is a reporting tool configured to understand your database schema, its facts and dimensions. By simple point-and-click, a user can run any number of canned or user-designed reports without having to know anything of SQL or the schema; thanks to that prior configuration, the OLAP engine builds and executes the appropriate SQL. Mining builds applications that perform detailed, often algorithmic analyses; this is even more often, and inappropriately, called 'reporting'.

18. What is the 'extended star schema' and how did it emerge? The star schema consists of dimension tables and a fact table. Keeping the master-data-related tables separate, referenced by the characteristics in the dimension tables, is what is termed the extended star schema.

19. Define metadata, master data and transaction data. Metadata: data that describes the structure of data or meta objects; in other words, data about data. Master data: data that remains unchanged over a long period of time and contains information that is always needed in the same way; in BW, characteristics can bear master data (attributes, texts or hierarchies). Transaction data: data relating to day-to-day transactions.

20. Name some drawbacks of SAP. Interfaces are a huge problem; determining where master data resides; expensive; very complex; demands highly trained staff; lengthy implementation time.

21. What is BEx? BEx stands for Business Explorer. BEx enables end users to locate reports, view reports, analyze information and execute queries. The queries in a workbook can be saved to the respective roles in the BEx Browser. BEx has the following components: BEx Browser, BEx Analyzer, BEx Map and BEx Web.

22. What are variables? Variables are parameters of a query, set in the query definition, that are not filled with values until the query is inserted into a workbook. There are different types of variables for different uses: characteristic variables, hierarchies and hierarchy nodes, texts and formulas; and different processing types: user entry/default value, replacement path, SAP exit, customer exit, authorization.

23. What is the AWB, and what is its purpose? AWB stands for Administrator Workbench, the tool for controlling, monitoring and maintaining all the processes connected with data staging and processing in the Business Information Warehouse.

24. What is the significance of the ODS in BIW? An ODS object stores consolidated and cleansed transaction data at document (atomic) level. It describes a consolidated dataset from one or more InfoSources; this dataset can be analyzed with a BEx query or InfoSet query. The data of an ODS object can be updated with a delta update into InfoCubes and/or other ODS objects in the same system or across systems. In contrast to the multi-dimensional storage of InfoCubes, the data in ODS objects is stored in transparent, flat database tables.

25. What are the different types of source systems? SAP R/3 source systems, SAP BW, flat files and external systems.

26. What is an extractor? Extractors are the data-retrieval mechanism in the SAP source system; they fill the extract structure of a DataSource with data from the SAP source system datasets.
The extractor may be able to supply data to more fields than exist in the extract structure.
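To make point 12 above concrete, here is a minimal sketch contrasting the two, using the standard flight-demo table SFLIGHT. Note that Native SQL bypasses the automatic client handling, so the client field must be supplied by hand:

* Open SQL: database-independent, checked against the ABAP Dictionary,
* with automatic client handling.
DATA: lt_flights TYPE STANDARD TABLE OF sflight.
SELECT * FROM sflight
  INTO TABLE lt_flights
  WHERE carrid = 'LH'.

* Native SQL: database-specific statements between EXEC SQL and
* ENDEXEC; the client (MANDT) must be handled explicitly.
DATA: l_price TYPE sflight-price.
EXEC SQL.
  SELECT MAX( PRICE ) INTO :l_price FROM SFLIGHT
    WHERE MANDT = :SY-MANDT AND CARRID = 'LH'
ENDEXEC.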

1) Please describe your experience with BEx (Business Explorer). A) Rate your level of experience with BEx and the rationale for your self-rating. B) How many queries have you developed? C) How many reports have you written? D) How many workbooks have you developed? E) Experience with jump targets (OLTP, use jump target). F) Describe experience with BW-compatible ETL tools (e.g. Ascential). 2) Describe your experience with 3rd-party reporting tools (Crystal Decisions, Business Objects a plus). 3) Describe your experience with the design and implementation of standard and custom InfoCubes. 1. How many InfoCubes have you implemented from start to end by yourself (not with a team)? 2. Of these cubes, how many characteristics (including attributes) did the largest one have? 3. How much customization was done on the InfoCubes you have implemented? 4) Describe your experience with requirements definition/gathering. 5) What experience have you had creating functional and technical specifications? 6) Describe any testing experience you have. 7) Describe your experience with BW extractors. 1. How many standard BW extractors have you implemented? 2. How many custom BW extractors have you implemented? 8) Describe how you have used Excel as a complement to BEx.


A) Describe your level of expertise and the rationale for your self-rating (experience with macros, pivot tables and formatting). 9) Describe experience with ABAP. 10) Describe any hands-on experience with the ASAP methodology. 11) Identify SAP functional areas (SEM, CRM, etc.) you have experience in, and describe that experience.

12) What is partitioning, and what are the benefits of partitioning an InfoCube? A) Partitioning is the method of dividing a table (either column-wise or row-wise) based on the available fields, enabling quick reference to the intended values. By partitioning an InfoCube, reporting performance is enhanced because it is easier to search in smaller tables; table maintenance also becomes easier.

13) What does roll-up do? A) Roll-up fills the aggregates of an InfoCube whenever new data is loaded.

14) What are the inputs for an InfoSet? A) ODS objects and InfoObjects (with master data or texts).

15) What happens internally when BW objects like InfoObjects, InfoCubes or ODS objects are created and activated? A) When such an object is created, BW maintains a saved version of it but does not make it available for use. Once the object is activated, BW creates an active version that is available for use.

16) What is the maximum number of key fields in an ODS object? A) 16.

17) What is the specific advantage of LO extraction over LIS extraction? A) The load performance of LO extraction is better than that of LIS. In LIS, two tables are used for delta management, which is cumbersome; in LO, a single delta queue is used.

18) What is the importance of 0REQUID? A) It is the InfoObject for the request ID. 0REQUID enables BW to distinguish between different data loads.

19) Can you add programs in the scheduler? A) Yes, through event handling.

20) What is the importance of the table ROIDOCPRMS? A) It is the IDoc parameter table in the source system. It contains the details of the data transfer, such as the source system of the data, the data packet size, the maximum number of lines in a data packet, etc. The data packet size can be changed through the control parameters option in SBIW, i.e. the contents of this table can be changed.

21) What is the importance of the start routine in update rules? A) A start routine is a user exit that is executed before the update rules start, to allow more complex computations for a key figure or characteristic. The start routine has no return value; its purpose is to execute preliminary calculations and store them in a global data structure, which can be accessed in the other routines.

22) When is IDoc data transfer used? A) IDocs are used for communication between logical systems (SAP R/3, R/2 and non-SAP systems using ALE) and for communication between an SAP R/3 system and a non-SAP system. In BW, an IDoc is a data container for data exchange between SAP systems, or between SAP systems and external systems, based on an EDI interface. IDocs support a limited record size of 1000 bytes, so IDocs are not used when loading data into the PSA, where the data is more detailed; they are used when records are smaller than 1000 bytes.

23) What is the partitioning characteristic in CO-PA used for? A) For easier parallel search and load of data.

24) What is the advantage of BW reporting on CO-PA data compared with running the queries directly in CO-PA? A) BW has a performance advantage over reporting in R/3.
For a huge amount of data, the R/3 reporting tool is at a serious disadvantage, because R/3 is modeled as an OLTP system and is good for transaction processing rather than analytical processing.

25) What is the function of the BW statistics cube? A) The BW statistics cube contains data on the reporting performance and the data loads of all the InfoCubes in the BW system.

26) When an ODS is in 'overwrite' mode, does uploading the same data again and again create new entries in the change log each time? A) No.

27) What is the function of the 'selective deletion' tab in Manage -> Contents of an InfoCube? A) It allows us to select a particular value of a particular field and delete the matching contents.

28) When we collapse (compress) an InfoCube, is the consolidated data stored in the same InfoCube or in a new one?


A) Data is stored in the same cube.

29) What is the effect of aggregation on performance? Are there any negative effects? A) Aggregation improves performance in reporting.

30) What happens when you load transaction data without loading master data? A) The transaction data gets loaded and the master data fields remain blank.

31) When given a choice between a single InfoCube and multiple InfoCubes under a MultiProvider, what factors does one need to consider before deciding? A) Check whether the InfoCubes are used individually. If they are often used individually, it is better to go for a MultiProvider over several cubes, since a query on an individual cube is faster than on one big cube with a lot of data.

32) How many hierarchy levels can be created for a characteristic InfoObject? A) A maximum of 98 levels.

33) What is the Open Hub Service? A) The Open Hub Service enables you to distribute data from an SAP BW system into external data marts, analytical applications and other applications, ensuring controlled distribution across several systems. The central object for the export of data is the InfoSpoke, in which you define the source object and the target into which the data is transferred. Through the Open Hub Service, SAP BW becomes the hub of an enterprise data warehouse; the distribution of data becomes transparent through central monitoring of the distribution status in the BW system.

34) What is the function of the 'reconstruction' tab in an InfoCube? A) It reconstructs deleted requests in the InfoCube. If a request has been deleted and someone later wants its data records back in the InfoCube, the reconstruction tab can add those records: it goes to the PSA and brings the data back into the InfoCube.

35) What are secondary indexes with respect to InfoCubes? A) Indexes created in addition to the primary index of the InfoCube. When you activate a table in the ABAP Dictionary, an index is created on the primary key fields of the table; further indexes created for the table are called secondary indexes.

36) What is DB Connect and where is it used? A) DB Connect is a database connection facility; it is used to connect third-party databases to BW so that their data can be used.

37) Can we extract hierarchies from R/3 for CO-PA? A) No; there are no hierarchies in CO-PA.

38) Explain 'field name for partitioning' in CO-PA. A) The CO-PA partitioning is used to decrease the package size (e.g. by company code).

39) What is the V3 update method? A) It is a program in the R/3 source system that schedules batch jobs to update the extract structure to the DataSource collectively.

40) Differences between serialized and non-serialized V3 updates.

41) What is the common method of finding the tables used in any R/3 extraction? A) By using transaction LISTSCHEMA we can navigate the tables.

42) Differences between a table view and an InfoSet query. A) An InfoSet query is a query using flat tables.

43) How do you load data from one InfoCube to another InfoCube? A) Through data marts, data can be loaded from one InfoCube to another.

44) What is the significance of setup tables in LO extraction? A) The setup tables hold the historical data that is read, with your selection criteria, during the initialization and full loads of an LO extraction.
45) Difference between extract structure and DataSource. A) In the DataSource we define the data coming from the source system, whereas the extract structure contains the record layout underlying the DataSource, on which extraction rules and transfer rules can be defined. B) The extract structure is a record layout of InfoObjects. C) The extract structure is created in the source system.

46) What happens internally when a delta is initialized?

47) What is the referential integrity mechanism? A) Referential integrity is the property that guarantees that values in one column depend on values in another column; this property is enforced through integrity constraints.

48) What is activation of the extract structure in LO?

49) What is the difference between an Info IDoc and a data IDoc?


50) What is delta management in LO? A) It is a method used in the delta update process, based on the change log in LO.

How do you do performance tuning on an InfoCube?

I have a report that was generating results a couple of days back, but when I try to run it today I get a timeout error. Can someone suggest how I can tune the InfoCube to get the result?

-----------------------------------------------

Your question is very general, and performance is a big, important and difficult topic in BW. First steps to improve performance: go to cube management -> Performance tab, then Create Index and Create Statistics.

On the Rollup tab you can create aggregates. To learn more about aggregates, check http://service.sap.com for the document "Performance Tuning for Queries with Aggregates".

-----------------------------------------------

Try this; I hope it solves your report problem.

The steps are as follows:

1) Turn on BW statistics: RSA1, choose Tools -> BW Statistics for InfoCubes (choose OLAP and WHM for the relevant cubes). 2) Check whether you have an overall query performance problem or a single-query performance problem:

a) Overall query performance problem: use ST03N -> BW system load values to recognize the problem. Use the numbers in the table 'Reporting - InfoCubes: Share of total time (s)' to check whether one of the columns %OLAP, %DB or %Frontend shows a high number for all InfoCubes. You need to run ST03N in expert mode to get these values.

b) Single/specific query performance: again via transaction ST03N.

Use the details view to get the runtime segments.

Possible causes of poor performance:

A) High database runtime
B) High OLAP runtime
C) High frontend runtime

Depending on your analysis:

A) Strategy for high database runtime: check whether an aggregate is suitable (use 'All data' to get the ratio of selected records to transferred records; a high number here indicates that an aggregate could improve query performance).


Check whether the database statistics are up to date for the cube/aggregate; use the RSRV output (use the database checks for statistics and indexes).

Check whether the read mode of the query is unfavourable; the recommended setting is (H).

B) Strategy for high OLAP runtime:

Check whether a high number of cells is transferred to the OLAP processor (use 'All data' to get the value 'No. of Cells'). a) Use the technical information in RSRT to check whether any extra OLAP processing is necessary (stock query, exception aggregation, calculate before aggregation, virtual characteristics/key figures, attributes in calculated key figures, time-dependent currency translation) together with a high number of transferred records. b) Check whether a user exit is involved in the OLAP runtime. c) Check whether large hierarchies are used and whether the entry hierarchy level is as deep as possible; this limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the list-of-values feature on the successor and predecessor columns to see which entry level of the hierarchy is used.

C) Strategy for high frontend runtime: 1) Check whether a very high number of cells and formattings is transferred to the frontend (use 'All data' to get the value 'No. of Cells'), which causes high network and frontend processing runtime. 2) Check whether the frontend PCs meet the recommendations (RAM, CPU MHz). 3) Check whether the bandwidth of the WAN connection is sufficient.

Good Luck

Suggestions for Automating Loads

I believe this is a new function within BW 3.0: the DataSource now has an option "DataSource transfers double data records" on the Processing tab, and there is a flag to "Ignore double data records".

I am not sure of the workaround for earlier versions of BW.

Two Topics: Automating your InfoCube deletes, and using an ODS to track deltas.

TOPIC 1 ------- If you do not compress the current fiscal period's data in your target cube (compression eliminates the request ID), you can automate the deletion of the prior request when you reload your data every day. You accomplish this through configuration of the InfoPackage with which you load the data:

A) Go into InfoPackage maintenance.

B) Click the Data Targets tab.

C) For your selected data target, click the icon which looks like the Greek sigma, under the column labeled "Automatic loading of similar/identical requests in the InfoCube." You can also type DELE in the command field. A popup window entitled "Deleting Request from InfoCube after Update" will appear.

D) On the popup window, you have the option of selecting "Always delete existing requests if delete conditions are found" and "Only for the same selection conditions." If you select these options, then BW will automatically delete (or reverse if you have aggregates) any prior request for the same selection criteria which has not been compressed.


You can configure your InfoPackage in this manner, and then compress the request when you move to a new fiscal period and know that you will not load any more data for the prior fiscal period.

TOPIC 2 ------- It sounds like you may be reloading every day in order to catch changes to the data. You may know this already, but just in case: You can use an ODS object to capture deltas, and then send only the changes to your cube. This eliminates the need to delete the prior request from your cube, and simplifies the use of cube compression and aggregates.

You then would load the current period's full set of data into your ODS every day, and the built-in functionality of the ODS object would detect the differences and send only these on to your target cube. You can read more about this scenario in the white paper at service.sap.com/bw entitled "Data Staging Scenarios."

Finally, using an ODS object as the data staging area eliminates (I think) your issues with the PSA. Instead of having your application read the PSA, have it read from an ODS object instead. Every ODS object has a unique key, so you won't get duplicate records as you can with a PSA. You can also report on ODS data in BEx queries, if you have any need to, which is yet another advantage this method has over the use of PSA.
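For the application-reads-the-ODS idea, here is a minimal sketch, assuming a hypothetical 3.x ODS object ZSALES whose active data, by the usual naming convention, sits in the transparent table /BIC/AZSALES00:

* Read the active table of a hypothetical ODS object ZSALES instead
* of the PSA; /BIC/ZDOCNO is an assumed key field of that ODS.
DATA: lt_sales TYPE STANDARD TABLE OF /bic/azsales00.

SELECT * FROM /bic/azsales00
  INTO TABLE lt_sales
  WHERE /bic/zdocno = '0000004711'.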

We use cube 0COOM_C01, which has a time characteristic of fiscal period. We manually load the current period into this cube daily, but to ensure that we do not have duplicate data in the cube we manually delete the previous day's request before loading the period again. There has to be an easier way to do this; any suggestions on how to automate this process?

Also, the same applies to the PSA. We have an application reading the PSA, and to avoid duplicate records we delete the PSA load before loading again the next day. Unfortunately, the only way I can find to delete a specific request from the PSA is to go into the request, mark it with status NOT OK, and then delete all requests with errors.

You can automate the data deletion process from the InfoPackage. Under the "Data Targets" tab of the InfoPackage there is a checkbox to "delete the entire content of the data target". Setting this checkbox ensures that the data is deleted before the new load.

As for the PSA, there is a feature for deleting data from it. The steps are:
1. Go to the PSA menu in the Administrator Workbench.
2. Navigate to the DataSource for which you want to set the deletion criteria.
3. Highlight the DataSource and right-click; select "Delete PSA Data" from the context menu.
4. On the next screen, maintain the deletion parameters. PSA data can be deleted based on date or days; you can also select successful and/or erroneous requests for deletion.
5. Create a background job with these deletion parameters and schedule it.

Removing Excel Macros After Saving as Excel Workbook

Is there any way to remove the macros from a BEx query after you save it as a native Excel workbook?

In other words, the results are saved in native Excel, but when you right-click for the context menu it still shows the BEx function menu instead of the Excel menus.

For example, suppose I highlight some cells and then right-click: I don't see the standard Excel context menus, like Format Cells; I still see the BEx functions. I know I can use the Excel menus at the top, but I would like to eliminate the BEx functions so that the spreadsheet operates just like a native Excel workbook.

Any ideas?


You can eliminate the BEx menu on right-click: SAP BEx menu / Settings / "OLAP functions with right-click".

Also, you can detach the results from the query: SAP BEx menu / Tools / All queries in workbook / Detach.

I am new to SAP BIW and am interested in knowing what the suite is. I would like more information along the following lines; I also have a set of questions. It would be most helpful if anyone could shed some technical light on them. Thanks in advance.

A) REPORTING : ==============================

1. The BEx Analyzer brings up the following basic questions: a) Is it good enough? b) Can it compete with BO, Cognos, MicroStrategy, Brio, SAP BIW...? If so, how? What are the differences? c) Is it more powerful than Crystal Reports, Excel, Access? d) What are the pros and cons of the tool?

2. Does the *.xls report store the data of the entire query, or does it store the view?

3. Offline processing is not possible, since the OLAP processor on the server side does the processing. Is this true?

4. There is no drill-down (OLAP analysis to the next level) and drill (roll) up feature. Is this true? Is it possible to accomplish this with coding?

5. Can one report be linked to a completely different report, with variables passed from the first report, opened by clicking on a cell - basically drill-through? Does this need coding?

6. Is it possible to drill without altering the report? I.e., I could display a list of my products followed by a number of measures, and then see how the measures change through different restrictions placed higher up the hierarchy - basically drill-by; I could choose to restrict by region, promotion, etc. Does this need coding?

7. The slice-and-dice feature that is available is not flexible (no drag and drop; you have to go to the query pane) when compared to the third-party tools.

8. Can filtering, conditional formatting and alerts, sorting, grouping, etc. be performed? Which of these need coding?

9. Is it possible to conditionally hide a row based on a prompt, without filtering? E.g., if my prompt says show totals, then only the row containing totals should be shown. Does this need coding?

10. The reports generated by the Business Explorer Analyzer are available only in *.xls format. This has an added limitation in that the *.xls document can be easily edited, knowingly or unknowingly, making the report insecure.

11. Can data from a report be used as a source for generating another report?

12. Is SAP BIW aggregate-aware?


13. Is there any SDK for customisation?

14. a) Is it possible to use user-defined variables in reports? b) Should the user-defined variable be defined in the report or in BIW? And are these possible: c) conditional operator (if...else) support in user-defined variables in the reporting layer; d) utilizing a prompt value in a user-defined variable calculation; e) using one user-defined variable in the calculation of another. Which of the above need coding?

15. Is it possible to use prompts? If so, are the following functionalities available: a) cascading prompts; b) prompts independent of filtering; c) prompts capable of displaying one item (Product Name) and passing another (Product Id); d) usage of wildcard search in prompts. Which of the above need coding?

16. Can multiple individual queries be used in a single report?

17. Can the user view and edit the generated query?

B) USING THIRD PARTY "BUSINESS OBJECTS" OLAP TOOL: ===============================================
1. Does this force BO, with its "Connect", to query through the BEx Analyzer, or can it directly access the star schema of SAP BIW?
2. Do the other tools (Cognos, Brio, etc.) behave the same way?

I guess this is a very vast set of questions, but I also know that true technicians love to answer them. Any further information is most welcome.

Reply : Subject : Reporting capabilities of SAP BIW

I don't understand what you are trying to get at here. Are you asking a question (sorry, 19 questions!) or do you just want somebody to type in and send the whole BW manual? I think this discussion group is for troubleshooting and solving problems, not for training. So I suggest you take some BW classes (especially on reporting), and you may also go to the SAP BW website to learn.

I hope this helps. I am not being mean, but this is just not a practical way of getting help. Good luck.

Reply : Subject : Reporting capabilities of SAP BIW

Hi,

I completely agree.

If you are really serious about the answers, attend the SAP training courses TABW10 and TABW20. TABW30 may not add much value to you, judging from your questions.

Reply : Subject : Reporting capabilities of SAP BIW

Hello,


I tried to answer some of your questions, but I don't know what level of education you have in BW. From the first question itself it is clear that you don't know much about SAP BW, so if you don't understand anything in my answers, just forget it and follow Mr. Bala's idea.

Note to others: if anybody finds that an answer of mine is incorrect, or has any alternative answers, please let me know.

Thanks. Answers to your questions, in the order you typed them:
1. a) Yes. b) It's a reporting tool for SAP BIW (the presentation layer). c) ?
2. If you save as *.xls, it contains only the data or report data presented by the query.
3. True.
4. False - it is possible to do drill-down.
5. Yes, possible through RRI (Report-Report Interface); you need to create a link.
6. Yes, you could.
7. ?
8. Yes, and you don't need coding.
9. Yes, it is possible.
10. Actually it uses Excel as a presentation tool, for the user's convenience, so that they can edit the output and store it as an Excel file; the workbook is not a repository. All metadata is stored on the BW server in the BW metadata repository.
11. See the answer to question 5.
12. Yes, certainly.
13. Yes, use BAPIs (another example is ODBO).
14. a) Use the AWB (I believe the transaction is RSZV). b) ? c) No conditional operators, but you can perform calculations like SUM, AVERAGE, MAX and so on. d) Yes. e) Yes.
15. ?
16. Multiple queries can be used at the Excel workbook level; that means you can insert a query in one worksheet and, in the same way, insert another query in another worksheet.
17. It is purely based on user roles. If you give users the access, yes, they can.
18. (B) Third-party tools access BW through the BAPI interface (one of its components is ODBO).

Hello. Based on your questions, I conclude that you have knowledge of data warehousing or are working with other data warehousing software. As far as I am concerned, SAP BIW is more robust than any other data warehousing software available on the market. I would like to tell you one thing: most of your questions are repetitive, or the answers to them are the same. Anyway, I tried to answer your questions; a ? means that either I don't know the answer or I couldn't understand the question. Write me about your current job, age and position.

I hope this helps you to learn something about BW.

Question : Subject : Loading Transactional Data

Good day,

I am trying to load transaction data into the PSA for InfoCube 0HR_PA_OS_1. My problem is that the request runs for 7 hours and then fails with the error "Processing is overdue". Is there any way I can check on the R/3 side whether the job is actually doing something? I have tried, but we are using ALEREMOTE and I don't know which job is mine. Is there any way to link the job I started on BW to the PID on R/3?


Thank you

Reply : Subject : Loading Transactional Data

Try running the extractor checker RSA3 and see if any records are actually extracted on the R/3 side.

Regards,

Reply : Subject : Loading Transactional Data

Hi,

The first thing to do is to use transaction RSA3 in SAP R/3 for the DataSource in question, and execute it to see if the extractor is working in the first place. If you manage to see the output, the error lies more in the transmission of records.

Second, look at the IDoc lists in R/3 (transaction WE07). If there are any application or Basis errors, they will be listed as error IDocs; you can use this to find the error.

Third, execute the extraction using the InfoPackage, log into R/3 using your normal user ID, go to transaction SM50 or SM51 (depending on the number of servers), and monitor the processes for user ALEREMOTE. You will see the process progress. If the read is very slow, then depending on the database there could be a number of simple performance enhancers that Basis can help you with.

Fourth, make sure the network connection between R/3 and BW is OK.

Fifth, if everything is all right and the docs arrive in the PSA, but it takes a long time in the PSA, then it is a pure PSA update issue. Have you checked the short dump overview (ST22) or read the system log (SM21)? These should give you clues.

Hope this helps

Question : Subject : Customizing for extraction

Hello:

How can I find out the name of the table or view behind an R/3 extractor?

Reply : Subject : Customizing for extraction

In the source system, view the contents of table ROOSOURCE. Find the row where OLTPSOURCE = your datasource name. The column EXTRACTOR contains the name of the extractor program. View the program in SE38 or SE37 to see what it does.

You can also try looking up the EXTRACTOR program name in table D010TAB (the program/table cross-reference), although you need to take INCLUDEs from the extractor program into account.

Reply : Subject : Customizing for extraction


NOTE about the message above: if the EXMETHOD column in table ROOSOURCE contains 'V', then the EXTRACTOR column contains the name of the table or view from which the DataSource extracts data. If EXMETHOD contains 'F1', then the EXTRACTOR column contains the name of the extractor program, as described above.
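For reference, here is a minimal ABAP sketch of that lookup, assuming a hypothetical DataSource name ZDS_EXAMPLE (the field names are the ones discussed above; run it as a quick test report in the source system):

  * Read the active header record of the DataSource from ROOSOURCE
  DATA: lv_extractor TYPE roosource-extractor,
        lv_exmethod  TYPE roosource-exmethod.

  SELECT SINGLE extractor exmethod
    FROM roosource
    INTO (lv_extractor, lv_exmethod)
    WHERE oltpsource = 'ZDS_EXAMPLE'   " hypothetical DataSource name
      AND objvers    = 'A'.            " active version

  IF lv_exmethod = 'V'.
    WRITE: / 'Extracts from table/view:', lv_extractor.
  ELSEIF lv_exmethod = 'F1'.
    WRITE: / 'Extracts via function module:', lv_extractor.
  ENDIF.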

Reply : Subject : Customizing for extraction

RSO2: enter the DataSource name and choose Display.

Regards,

Reply : Subject : Customizing for extraction

Thank you very much, everybody!

RSO2 is the perfect answer to my question.

Question : Subject : Standard Infocubes for SD

Hi all,

In the SAP BW standard Business Content, the InfoCubes related to Sales & Distribution refer to LIS as DataSources. Our R/3 OLTP system is not configured for LIS updates, and none of the LIS structures in OLTP are activated for the data.

Now we have two options: 1. Modify the OLTP system to activate LIS and then use the LIS structures as DataSources. 2. Create generic DataSources and custom-defined InfoCubes.

My questions are: 1. How much effort will be required for the activation and configuration of LIS in R/3 OLTP? We are currently on a 4.0B R/3 system, and the system has data for the last 2 years.

2. By opting for the second option I lose the advantage of BW's delta upload capability, but I save on the effort of modifying the R/3 OLTP system.

Any suggestions/comments on both these approaches will be highly appreciated.

Regards,

Reply : Subject : Standard Infocubes for SD

Use the BW-specific LIS information structures S260, S261 and S262. These are BIW-specific information structures, and the only thing you need to do is perform the statistical update in SBIW before pulling the data into BW.

Reply : Subject : Standard Infocubes for SD


Have you considered the new extractors for SD data (DataSources 2LIS_11_VAHDR etc.)?

These extractors do not require LIS to be activated, nor any knowledge of LIS (a big advantage). I'm currently installing them, and find them to be a reliable source that is easy to get running.

Do, however, take some days to work out the logic behind them.

Good luck!

Reply : Subject : Standard Infocubes for SD

If you are using BW 2.0B or higher you can use InfoSource 2LIS_13_VDITM; this does not require any LIS configuration.

You have to activate it using transaction SBIW.

If you need more information, don't hesitate to ask.

Regards

Reply : Subject : Standard Infocubes for SD

BW 2.0B Business Content for SD contains both LIS and standard InfoSources (LO delta). The only purpose of providing the LIS InfoSources is that you could do your first (full) data load using the LIS structures and then start delta loading via the LO delta method. Alternatively, you can ignore the LIS InfoSources entirely and use the LO init/delta InfoSources (names starting with 2LIS...), which is better and easier. Refer to the BW documentation; there is also a white paper on SD extraction procedures.

Thanks

Reply : Subject : Standard Infocubes for SD

Hi,

You do not need to activate any LIS structures.

Use transaction LBWE (Logistics Cockpit). Here you will find the extract structures in the logistics area. You are able (with a developer's key) to enhance the extract structures. Afterwards you should regenerate the DataSources and activate the extract structures.

Via SBIW you can initialise the extract structures per application area with historical data, and check the results via transaction RSA7.

Regards,

Question : Invalid source system name entered

I'm trying to create an ODS object for the first time. I can drag key fields and data fields without any problem, and I have also created InfoCubes in this system without any problem.


I do a check on the ODS object and receive "ODS object ACTCTODS is consistent" - all green lights.

However, when I try to activate it, I receive a red light: "Invalid source system name entered".

How and where do I create a valid source system in an ODS?

For the InfoCube I have one, but that is in the "Source Systems" area, not in the "Data Targets" area where I'm trying to create the ODS.

Thank you for your help.

Reply : Invalid source system name entered

Make sure that a BW source system has been set up properly in the Source Systems tab of the Administrator Workbench. The BW system should be created as a source system of itself. If this has already been created, right-click and do a check on the BW source system (in the Source Systems tab). Ask Basis to correct any errors.

This should resolve the ODS activation problem.

Thanks.

Reply : Invalid source system name entered

Thank you for your reply. I have checked the tab and right-clicked, and everything seems OK in "Source Systems". I can create InfoCubes with the source system, but not an ODS. Any ideas?

Reply : Invalid source system name entered

Here's a possible reason. When an ODS object is activated, BW automatically generates an export datasource with the name 8<name of ODS object>. This datasource has a source system and a target system. Apparently, something is wrong with the source system name, which can be either MYSELF or the name associated with your active client in table T000. If the generation of the export datasource fails, the ODS object cannot be activated.

The following are possible causes of the activation failure of an export datasource:
* authorizations are OK for the ODS but not for the datasource (check transaction SU53)
* the export datasource cannot be generated because of an inconsistent system name

There are some OSS notes on naming inconsistencies in BW systems. As a result of client copies, inconsistencies can occur between:
* RFC destinations (check transaction SM59)
* logical system names (check table TBDLS)
* the logical system your BW system knows itself by (check table T000)
* the logical system connected with the MYSELF system


If your BW is the result of a system or database copy, did you call transaction BDLS?

My best advice is to search OSS with the keyword MYSELF.
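As a quick sanity check along those lines, here is a minimal ABAP sketch (assuming the standard tables T000 and TBDLS; it only prints the names so you can compare them with your MYSELF source system manually):

  DATA: lv_logsys TYPE logsys.

  * Logical system name of the current client
  SELECT SINGLE logsys FROM t000 INTO lv_logsys
    WHERE mandt = sy-mandt.
  WRITE: / 'Logical system of this client:', lv_logsys.

  * Check that this name is also defined in TBDLS
  SELECT SINGLE logsys FROM tbdls INTO lv_logsys
    WHERE logsys = lv_logsys.
  IF sy-subrc <> 0.
    WRITE: / 'Warning: logical system not defined in TBDLS.'.
  ENDIF.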

Best regards,

Details of InfoCubes 0UCSA_C01, 0UCTS_C15

Please can someone give me the details of the InfoCubes 0UCSA_C01 and 0UCTS_C15? These cubes are only available from version 3.0D. Please send me a copy of the characteristics and key figures, with technical names.

This is all I have about it: Technical name: 0UCTS_C15

This InfoCube provides the data basis for analyzing billing-related installations/removals, that is, when the devices were allocated to a utility installation. The data is recorded on a monthly basis.

Technical name: 0UCSA_C01

This InfoCube allows you to analyze the services for which a utilities company has billed. This includes evaluated consumption quantities and the revenue obtained for them. They are classified according to components such as energy, demand and rental price.

You can use the posting date or consumption month as time dimensions for the consumption and revenue data. In addition, you can analyze the data according to various criteria. For example, you can drill down according to attributes of the business partner, premise, contract, rate or according to the industry and regional aspects. You can also visualize consumption and revenue data for divisions other than those offered.

The number of billed contracts provides an important key figure and allows the achieved and outstanding demands to be estimated. You can also store discount and surcharge information relevant to amounts and quantities in separate key figures.

Invoiced billing documents provide the basis for sales statistics. To archive the data after the consumption month, the consumptions and achieved revenues are transferred to the Business Information Warehouse (BW) using a weighting procedure for each consumption month.

Sales Statistics extractors - which one is the right one??

I am currently working on an IS-U/CCS BW project. We are designing a solution based on the Sales Statistics cube 0UCSA_C01. We have to enhance the extract structure with a number of fields and are faced with a choice of two enhancements. The first is RSAP0001, where we could use the function module EXIT_SAPLRSAP_001 for the transaction data. The second is BWESTA01, where we could use function module EXIT_SAPLE71ESTA01_001. It is not obvious to me which enhancement I should use. I have been told that the second enhancement may be the way to go, as the first enhancement effectively repeats what the second one does, which means the data is extracted twice. Obviously, for performance reasons this is very bad.

We are on 2.1C and I did not see the second enhancement. Is it in 3.0B? If so, then I presume the second option is the later one and you should be able to use it.


Also, I do not think the two enhancements are supported at the same time. Could you verify this and let us know?

Yeah, we are on 3.0B now. When we built the initial solution we were on 2.1C, and it was a complete surprise when an SAP consultant told us we were using the wrong enhancement.

I wonder if projects should be made aware of this when they upgrade. We sure as hell would not have picked it up if not shown by somebody in the know. The problem with the new enhancement is that the function module seems to be using different tables as well, so we can't simply cut and paste the code. The plus side is that performance will improve... or so we have been told.

Excellent! This is very good feedback.

I can't find the BWESTA01 enhancement in my system. We are on BW 3.0B and R/3 4.6C with PI 2001.2, but we are not using an industry solution.

I guess this enhancement is in R/3, right? Let me know your PI version and R/3 version.

Try having a look in the Object Navigator (SE80) under development class EE71_R461 to find BWESTA01. We have PI version 2002.1.

Replacement Path in Variable Reporting

What is the exact functionality of the replacement path in variable reporting? What exactly does it replace?

The replacement path processing type is used in a variable when you wish to get a value from the attributes of another characteristic, or from a query. Example: suppose you wish to display the age of a customer in your report. Age is an attribute of Customer and is not part of the cube, but each transaction record in the cube is assigned a customer. If we want to get this value into our report, we create a variable on Customer with a replacement path and use its attribute Age. When you use this variable in your query, you get the age of the customer associated with that transaction data. Here is one more example. Let's assume we are using a text variable in the query description (--> Query Properties), and the description is 'Sales Qty report'.

Suppose we have Fiscal year/Period in our query, with a variable on that characteristic.

Now your requirement is to display the fiscal year/period along with the description of the query. For that you need to create a text variable and use it.

While creating the text variable, choose the processing type 'Replacement Path', the characteristic 'Fiscal year/period', and 'Replace Variable with' set to 'Key' or 'Text'.

Then place this text variable into the query description (from Query Properties).

When the user selects a value for fiscal year/period, the same period will be displayed along with the description.


Regarding Repair Full Requests

What is the exact functionality of a repair full request?

1. A repair full request is used to load data as a full upload when the delta initialization has already been done.

2. It is used to load missing data or specific corrupt data.

3. When you are doing a full load on an InfoProvider for which the delta initialization has already been done, you need to tick the repair checkbox in the Scheduler menu. A repair full request should preferably go to an ODS in overwrite mode only. You will get this option in an InfoCube InfoPackage as well, but you are not supposed to use it there (the data will be added).

*-- Soujanya

Repair full is used when an ODS is on delta and you have to do a full load on it based on certain selection criteria.

Say you have ODS1, which is on delta. Obviously a plain full request to it will not activate, since it is on delta already. Now you find that, say, for a particular month the data has gone bad. You have the option of deleting it via selective deletion.

Then you have to reload the data for this month with the same selections, and you mark this request as repair full. You can mark it as repair full under Scheduler -> Repair Full Request by ticking the option.

A repair full request is nothing but a full load; the only difference is that it is marked as "repair". As you may know, an ODS does not allow full loads if deltas are activated, so to fix an issue in an ODS we might need to do a full load, which is done via a full repair request. If you do a plain full load into a delta-enabled ODS, the activation will fail.

To mark a full load as a repair full request:

In the InfoPackage, go to menu Scheduler -> Repair Full Request and tick the option. It will be enabled only if the InfoPackage has been set to full load in the Update tab.

Delta Update for Generic Data Sources

If you are using BW 2.0B and need to create generic DataSources, the problem is that you cannot update the InfoCube with a delta update; it can only be done with a full update.

If the source table has a timestamp (last modified), you can easily implement a pseudo-delta DataSource with an ABAP query that selects all entries where the timestamp > the time of the last extraction, and then writes the current time into a customer table so that it can be used as the time of the last extraction in the next load.
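A minimal sketch of that pseudo-delta logic in plain ABAP, assuming a hypothetical source table ZSRCTAB with a last-changed timestamp field AEDAT_TS and a hypothetical bookkeeping table ZDELTA_PTR holding the time of the last extraction per DataSource:

  DATA: lv_last_ts TYPE timestamp,
        lv_now_ts  TYPE timestamp,
        lt_data    TYPE TABLE OF zsrctab.

  * Read the time of the last extraction from the bookkeeping table
  SELECT SINGLE last_ts FROM zdelta_ptr INTO lv_last_ts
    WHERE datasource = 'ZDS_PSEUDO_DELTA'.   " hypothetical name

  * Select only the records changed since the last extraction
  SELECT * FROM zsrctab INTO TABLE lt_data
    WHERE aedat_ts > lv_last_ts.

  * Store the current time as the new low-water mark for the next load
  GET TIME STAMP FIELD lv_now_ts.
  UPDATE zdelta_ptr SET last_ts = lv_now_ts
    WHERE datasource = 'ZDS_PSEUDO_DELTA'.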

Genuine delta enabled datasources for transactional data require some hacking.

You can write the data to an RFC queue named BW<client><datasource name> and hack table ROOSOURCE.

In a CRM system you will find generic function modules that are able to extract data from these queues, generate extraction programs, and so forth. Look for SMOX* function modules.

If you can't do that because you're working with a different release and/or application, look at entries in ROOSOURCE with extraction method F1, DataSource type TRAN, and delta type AIM*. The field "Extractor" contains the function module used for extraction. Look at the way the standard DataSource works and copy everything you need.

What are the different delta updates, and when should each be used?

Delta loads bring in any new or changed records since the last upload. This method is used for better loading in less time. Most of the standard SAP DataSources come delta-enabled, but some do not. In that case you can do a full load to the ODS and then a delta from the ODS to the cube. If you create generic DataSources, you have the option of creating a delta on a calendar day, timestamp or numeric pointer field (this can be a document number, etc.).

You'll be able to see the delta changes coming into the delta queue through RSA7 on the R/3 side.

To do a delta, you first have to initialize the delta on the BW side and then set up the delta. The delta mechanism is the same for both master data and transaction data loads.

There are three update modes. Direct delta: with this update mode, the extraction data is transferred with each document posting directly into the BW delta queue. In doing so, each document posting with delta extraction is posted to exactly one LUW in the respective BW delta queue.

Queued delta: with this update mode, the extraction data for the affected application is collected in an extraction queue and can be transferred as usual with the V3 update, by means of an updating collective run, into the BW delta queue. In doing so, up to 10,000 delta extractions of documents per LUW are compressed into the BW delta queue for each DataSource, depending on the application.

Non-serialized V3 update: with this update mode, the extraction data for the application in question is written as before into the update tables with the help of a V3 update module, and is kept there until the data is selected and processed by an updating collective run. However, in contrast to the default serialized V3 update, the data in the updating collective run is read from the update tables without regard to sequence and then transferred to the BW delta queue.

About The Use Of Setup Tables

In LO extraction we delete the setup tables before the first load (init and full update). The data is then extracted from the setup tables without touching the application database.

My questions are: 1. After the full load or init load, is the setup table still used? If so, how? 2. Is the setup table used for direct delta and queued delta?

Setup tables exist to store the historical data so that it can be pulled into BW.

Once you do an init and start filling the setup tables, the delta queue is created in RSA7, which stores the deltas. So once your setup tables are finished, you do a full repair or full load from the DataSource to the target, and then you run the deltas.

Just remember that setup tables are only storage for the records as they existed before records start moving into the delta queue. The practice is to first do an init without data transfer on the DataSource and then start filling the setup tables. Once you have done the init without data transfer, all new records are pulled into the delta queue, and after that you start filling the setup tables.

This can bring in new records that are also going to the delta queue, but after the setup tables are filled, the full loads to the target will be followed by delta loads, which will overwrite them with the new records.


The data for the delta init and full uploads comes from the setup tables.

The system starts capturing all newly created records and modifications in LBWQ, SM13 or RSA7, according to the update method (queued delta, unserialized V3 delta, direct delta). This collection starts once we activate the extraction structure, and these records then come in as delta loads. So there is a chance of having records in these queues that were created before the setup tables were filled; those records would also come into the setup tables when we fill them. That is why we need to clear these queues before we start filling the setup tables.


characteristic.
a. True b. False
ANSWER(S): A
If an info object is created as a characteristic with a reference characteristic, it won't have its own SID and master data tables. The info object will always use the tables of the referenced characteristic.

3. The following statements are not true about navigational attributes.
a. An attribute of an info object cannot be made navigational if the attribute-only flag on the attribute info object has been checked.
b. Navigational attributes can be used to create aggregates.
c. It is possible to make a display attribute navigational in an info cube without deleting all the data from the info cube.
d. Once an attribute is made navigational in an info cube, it is possible to change it back to a display attribute only if the data has been deleted from the info cube.
ANSWER(S): D
All the statements except D are true. It is possible to change a navigational attribute back to a display attribute in an info cube without deleting all the data from the info cube.

4. True or False? It is possible to create a key figure without assigning a currency or unit.
a. True b. False
ANSWER(S): A
Yes, it is possible to create a key figure without assigning a unit if the data type is one of these four: Number, Integer, Date or Time.

5. The following statements are true for compounded info objects.
a. An info cube needs to contain all info objects of the compounded info object if it has been included in the info cube.
b. An info object cannot be included as a compounding object if it is defined as attribute-only.
c. An info object can be included as an attribute and a compounding object simultaneously.
d. The total length of a compounded info object cannot exceed 60 characters.
ANSWER(S): A, B, D
When a compounded info object is included in an info cube, all corresponding info objects are added to the info cube. If an info object is defined as attribute-only, it cannot be included as a compounding object. The total length of the compounding info objects cannot exceed 60 characters.

6. The following statements are true for an info cube.
a. Each characteristic of an info cube should be assigned to at least one dimension.
b. One characteristic can be assigned to more than one dimension.
c. One dimension can have more than one characteristic.
d. More than one characteristic can be assigned to one line item dimension.
ANSWER(S): A, C
Any characteristic in the info cube should be assigned to a dimension. One characteristic cannot be assigned to more than one dimension. One dimension can have more than one characteristic, provided it is not defined as a line item dimension.

7. The following statements are true for info cubes and aggregates.
a. Requests cannot be deleted if info cubes are compressed.
b. A request cannot be deleted from an info cube if that request has been compressed in the aggregates.
c. Deleting a request from the cube will delete the corresponding request from the aggregate, if the aggregate has not been compressed.
d. All of the above.
ANSWER(S): A, C
Once info cubes are compressed, it is no longer possible to delete data based on requests; there won't be request IDs anymore. Requests can be deleted even if the requests in the aggregates have been compressed, but the aggregates will have to be deactivated. Deleting an uncompressed request from an info cube will automatically delete the corresponding request from the aggregate if the aggregate request has not been compressed.

8. The following statements are true regarding ODS request deletion.
a. It is not possible to delete a request from an ODS after the request has been activated.
b. Deleting an (inactive) request will delete all requests that have been loaded into the ODS after this request was loaded.
c. Deleting an active request will delete the request from the change log table.
d. None of the above.
ANSWER(S): C
It is possible to delete requests from an ODS even if the request has been activated; the "before and after image" of the data stored in the change log table is used to delete the request. Deleting a request which has not been activated in the ODS will not delete the requests which were loaded after it. But if the request has been activated, then the requests loaded and activated after it will also get deleted, and the change log entries will be deleted for that request.

9. The following statements are true for aggregates.
a. An aggregate stores the data of an info cube redundantly and persistently in summarized form in the database.
b. An aggregate can be built on characteristics or navigational attributes from the info cube.
c. Aggregates enable queries to access data quickly for reporting.
d. None of the above.
ANSWER(S): A, B, C
Aggregates summarize and store data from an info cube. Characteristics and navigational attributes of an info cube can be used to create aggregates. Since aggregates contain summarized data, the amount of data in an aggregate is much less than in the cube, which makes queries run faster when they access aggregates.

10. True or False? If an info cube has active aggregates built on it, newly loaded requests will not be available for reporting until the rollup has been completed successfully.
a. True b. False
ANSWER(S): A
Newly loaded requests in an info cube with aggregates are not available for reporting until the aggregate rollup has completed successfully. This is to make sure that the cube and the aggregates are consistent for reporting.

11. What is the primary purpose of having multi-dimensional data models?
a. To deliver structured information that the business user can easily navigate, using any possible combination of business terms to show the KPIs.
b. To make it easier for developers to build applications that will be helpful for the business users.
c. To make it easier to store data in the database and avoid redundancy.
d. All of the above.
ANSWER(S): A
The primary purpose of multi-dimensional modelling is to present data to the business users in a way that corresponds to their normal understanding of their business. It also provides a basis for easy access to the data by the OLAP engine.

12. The following statements are true for partitioning.
a. If a cube has been partitioned, the E table of the info cube will be partitioned on time.
b. The F table of the info cube is partitioned on request.
c. The PSA table is partitioned automatically, with several requests on one partition.
d. It is not possible to partition the info cube after data has been loaded, unless all the data is deleted from the cube.
ANSWER(S): A, B, C, D
BW allows partitioning of info cubes based on time. If the info cube is partitioned, the E fact table of the info cube will be partitioned on the time characteristic selected. The F fact table is partitioned on request IDs automatically during the loads. PSA tables are also partitioned during the loads and can accommodate more than one request per partition. For an info cube to be partitioned, all data needs to be removed from the info cube.

13. The following statements are true for the OLAP Cache.
a. Query navigation states and query results are stored in the application server memory.
b. If the same query has been executed by another user, the result sets can be reused if the global cache is active.
c. Reading query results from the OLAP Cache is faster than reading from the database.
d. Changing the query will invalidate the OLAP Cache for that query.
ANSWER(S): A, B, C, D
Query results are stored in the memory of the application server and can be retrieved later by another user running the same query. This makes the query faster, since the results are already calculated and stored in memory. Changing the query invalidates the OLAP Cache for that query.

14. The following statements are true about the communication structure.
a. It contains all the info objects that belong to an info source.
b. All the data is updated into the info cube with this structure.
c. It is dependent on the source system.
d. All of the above.
ANSWER(S): A, B
The communication structure contains all the info objects of the info source, and it is used to update the info cube by temporarily storing the data that needs to be updated to the data target. It does not depend on the source system.

15. The following statements are untrue about ODSs.
a. It is possible to create an ODS without any data fields.
b. An ODS can have a maximum of 16 key fields.
c. Characteristics and key figures can be added as key fields in an ODS.
d. After creating and activating an ODS, an export data source is created automatically.
ANSWER(S): A, C
An ODS cannot be created without any data fields, and it can have a maximum of only 16 key fields. Key figures cannot be included as key fields in an ODS. The export data source is created automatically after an ODS has been created and activated.

BW Quiz – Level 2: Intermediate

1. Identify the statement(s) that is/are true. A change run...
a. Activates the new master data and hierarchy data.
b. Aggregates are realigned and recalculated.
c. Always reads data from the InfoCube to realign aggregates.
d. Aggregates are not affected by a change run.
ANSWER(S): A, B
A change run activates the master data and hierarchy data changes. Before the activation of these changes, all the aggregates that are affected by them are realigned. Realignment is not necessarily done by reading InfoCubes: if the affected data is part of another aggregate that can be used to read data for the realignment, the change run uses that aggregate.

2. Which statement(s) is/are true about MultiProviders?
a. A MultiProvider is a virtual InfoProvider that does not store data.
b. They can contain InfoCubes, ODSs, info objects and InfoSets.
c. More than one info provider is required to build a MultiProvider.
d. It is similar to joining the data tables.
ANSWER(S): A, B
MultiProviders are virtual InfoProviders that do not store any data. Basic InfoCubes, ODSs, InfoSets or info objects can be used to build a MultiProvider. A MultiProvider can even be built on a single InfoProvider.

3. The structure of the PSA table created for an info source will be...
a. Exactly the same structure as the transfer structure.
b. Similar to the transfer rules.
c. Structured like the communication structure.
d. The same as the transfer structure, plus four more fields at the beginning.
ANSWER(S): D
The structure of a PSA table has four initial fields: request ID, packet number, partition value and record number. The remaining fields are exactly like the transfer structure.

4. In BW, special characters are not permitted unless they have been defined using this transaction:
a. RRMX b. RSKC c. RSA15 d. RRBS
ANSWER(S): B
RSKC is the transaction used to enter the permitted characters in BW.

5. Select the true statement(s) about info sources:
a. One info source can have more than one source system assigned to it.
b. One info source can have more than one data source assigned to it, provided the data sources are in different source systems.
c. The communication structure is a part of an info source.
d. None of the above.
ANSWER(S): A, C
Info sources can be assigned to multiple source systems. Also, info sources can have multiple data sources within the same source system. The communication structure is a part of the info source.

6. Select the statement(s) that is/are true about the data sources in a BW system:
a. If the hide field indicator is set in a data source, this field will not be transferred to BW even after replicating the data source.
b. A field in a data source won't be usable unless the selection field indicator has been set in the data source.
c. A field in an info package will not be visible for filtering unless the selection field has been checked in the data source.
d. All of the above.
ANSWER(S): A, C
If the hide field indicator is checked in a data source, that field will not be transferred to the BW system from the source system, even after replication. If the selection field indicator is not checked, that field won't be available for filtering in the info package.

7. Select the statement(s) which is/are true about the 'Control parameters for data transfer from the Source System':
a. The table used to store the control parameters is ROIDOCPRMS.
b. The field max lines is the maximum number of records in a packet.
c. Max size is the maximum number of records that can be transferred to BW.
d. All of the above.
ANSWER(S): A, B
ROIDOCPRMS is the table in the BW source system that is used to store the parameters for transferring data to BW. Max size is the size in KB which is used to calculate the number of records in each packet; max lines is the maximum number of records in each packet.

8. The indicator 'Do not condense requests into one request when activation takes place' during ODS activation applies to the condensation of multiple requests into one request when storing them in the active table of the ODS.
a. True b. False
ANSWER(S): B
This indicator is used to make sure that the change log data is not compressed into one request when activating multiple requests at the same time. If the requests were combined into one request in the change log table, individual deletion would not be possible.

9. Select the statement(s) which is/are not true related to flat file uploads:
a. CSV and ASCII files can be uploaded.
b. The table used to store the flat file load parameters is RSADMINC.
c. The transaction for setting parameters for flat file upload is RSCUSTV7.
d. None of the above.
ANSWER(S): C
The transaction for setting flat file upload parameters is RSCUSTV6.

10. Which statement(s) is/are true regarding navigational attributes vs. dimensional attributes?
a. Dimensional attributes have a performance advantage over navigational attributes for queries.
b. Change history will be available if an attribute is defined as navigational.
c. History of changes is available if an attribute is included as a characteristic in the cube.
d. All of the above.
ANSWER(S): A, C
Dimensional attributes have a performance advantage when running queries, since the number of table joins is smaller compared to navigational attributes. For navigational attributes, the history of changes is not available; for dimensional attributes, the InfoCube holds the change history.

11. When a dimension is created as a line item dimension in a cube, the dimension IDs will be the same as the SIDs.
a. True b. False
ANSWER(S): A
When a dimension is created as a line item dimension, the SIDs of the characteristic are stored directly in the fact tables and are used as dimension IDs. The dimension table is a view over the SID table and the fact table.

12. Select the true statement(s) related to the start routine in the update rules:
a. All records in the data packet can be accessed.
b. Variables declared in the global area are available in the individual routines.
c. A return code greater than 0 will abort the whole packet.
d. None of the above.
ANSWER(S): A, B, C
In the start routine, all records are available for processing. Variables declared in the global area can be used in the individual routines. A return code greater than 0 aborts the processing of all records in the packet.

13. If a characteristic value has been entered in the InfoCube-specific properties of an InfoCube, only this value can be loaded into the cube for that characteristic.
a. True b. False
ANSWER(S): A
If a constant is entered in the InfoCube-specific properties, only that value is allowed in the InfoCube for that characteristic. The value is fixed in the update rules, and it is not possible to change it in the update rules for that characteristic.

14. After any changes have been made to an InfoSet, it needs to be adjusted using transaction RSISET.
a. True b. False
ANSWER(S): A
After making any type of change to an InfoSet, it needs to be adjusted using transaction RSISET.

15. Select the true statement(s) about read modes in BW:
a. The read mode determines how the OLAP processor retrieves data during query execution and navigation.
b. Three different types of read modes are available.
c. The read mode can be set only at the individual query level.
d. None of the above.
ANSWER(S): A, B
The read mode determines how the OLAP processor retrieves data during query execution and navigation. Three types of read modes are available: 1. read data when expanding hierarchies; 2. read data during navigation; 3. read all data at once. The read mode can be set at the info provider level and at the query level.

BW Quiz – Level 3: Advanced

1. Select the correct statements about the steps executed by a change run.
a. The steps activate the new master data and hierarchy data changes.
b. All aggregates are realigned and recalculated.
c. Aggregates containing navigational attributes are realigned and recalculated for the master data changes.
d. The steps delete the 'A' (active) records for which 'M' (modified) records exist from the master data tables, and make all modified records active.
e. All of the above.
ANSWERS: A, C, D
Master data and hierarchy data changes are activated, and all the aggregates with navigational attributes affected by the changes are realigned. The change run deletes all active records for which modified records exist in the master data 'P' table and makes all modified records active.

2. Key figures that are set to exception aggregation MIN or MAX in an aggregate cause the aggregate to be completely rebuilt for each change run alignment.
a. True b. False
ANSWER: A
If an aggregate contains key figures built with MIN or MAX exception aggregation, the change run is forced to recreate the aggregate during the alignment process.

3. If special characters are not defined in transaction RSKC in BW, then:
a. These characters cannot be loaded into BW at all.
b. These characters can only be loaded into text fields.
c. These characters can be loaded into attributes and texts.
d. BW won't be able to generate the SIDs for these characters, so the fields for which SIDs are generated cannot be loaded.
e. None of the above.
ANSWERS: B, D
Unless specified using transaction RSKC, special characters cannot be loaded into BW fields for which SIDs need to be generated. It is, however, possible to load these characters into text fields.

4. A change run updates the 'E' table of the aggregates while doing the alignment for changes in the master data.
a. True b. False
ANSWER: B
A change run doesn't update the 'E' table of the aggregates when aligning them for master data changes. The alignment is done by inserting rows with the necessary negative and positive key figure values into the 'F' table.

5. Select the correct statements related to the control parameters for a data transfer in table ROIDOCPRMS in the BW source system.
a. The field MAXSIZE is the maximum number of records which can be transferred to BW in a single packet.
b. The field MAXSIZE is the size in KB which is used to calculate the number of records per data packet.
c. MAXLINES is the maximum number of records which can be transferred to BW per data load.
d. If the number of data records per packet exceeds the MAXLINES value, the extraction process fails.
ANSWER: B
In table ROIDOCPRMS, MAXSIZE is the size in kilobytes used to calculate the number of records in each data packet to be transferred to BW. If the calculated number of records exceeds MAXLINES, the packet size in terms of the number of records is capped at the MAXLINES value.

6. Identify the differences between an InfoSet and a MultiProvider.
a. Both MultiProviders and InfoSets can contain all the info providers in BW.
b. Queries built on MultiProviders use a 'union' and queries on InfoSets use a 'join' to retrieve data from the different info providers.
c. Both MultiProviders and InfoSets hold no data of their own; the data is accessed from the basic info providers used in these objects.
d. None of the above.
ANSWERS: B, C
A MultiProvider can be built on basic InfoCubes, ODSs, info objects or InfoSets; an InfoSet can contain only ODSs or info objects. MultiProviders use the 'union' operation, while InfoSets use a 'join'. Both objects are logical definitions which don't store any data.

7. Select the correct statements about the OLAP Cache Monitor in BW.
a. The transaction for the OLAP Cache Monitor is RSRCACHE.
b. If the persistent mode is inactive, then the cache is inactive and query results will not be cached in memory.
c. A 'read flag' is set in the Cache Monitor when data is read from the cache.
d. When new data is loaded into the info provider on which a query is built, the cache for that query is invalidated.
e. All of the above.
ANSWERS: A, C, D
Transaction RSRCACHE is the Cache Monitor. The OLAP Cache is active unless the 'cache inactive' flag is set; the persistent mode merely specifies the action to be taken when the cache memory is exhausted.

8. Select the correct statements about ODS settings.
a. Performance of the ODS activation improves when the BEx reporting flag is switched off.
b. Overwriting a data record is not allowed if the 'unique data records' flag is set.
c. Data targets are updated from the ODS regardless of the ODS activation status.
d. All of the above.
ANSWERS: A, B
If the BEx reporting flag is switched off, the SIDs do not have to be determined when activating the ODS, which improves performance. If the 'unique data records' flag is set, it is not possible to load records into the ODS for which the key combination already exists. Data targets can be updated only after the ODS activation has taken place successfully.

9. It is not possible to activate an ODS which contains a request from a full load and a delta initialization load of the same data source.
a. True b. False
ANSWER: B
It is possible to activate an ODS which contains delta and full loads from the same InfoSource, provided the full load was done with the repair flag set in the InfoPackage.

10. Select the correct statements regarding the data deletion settings in an InfoPackage.
a. It is possible to set an InfoPackage to delete all the data in an InfoCube during the loads.
b. Only uncompressed data can be set to be deleted from the cube in an InfoPackage during the loads.
c. Deletion settings can be made only for basic InfoCubes.
d. Data deletion settings in an InfoPackage are possible only for full loads.
e. All of the above.
ANSWERS: A, C
All data can be set to be deleted from a basic cube during full loads and delta loads, and data can be deleted depending on various conditions. Data can only be deleted from basic cubes, not from an ODS.

11. Select the correct statements about parallel processing with MultiProviders.
a. MultiProvider queries create one process per info provider involved and are processed in parallel by default.
b. It is not possible to make MultiProvider queries run sequentially.
c. MultiProvider queries create a parent process which provides a synchronization point to collect the overall result from the sub-processes.
d. Parallel processing is always faster than sequential processing for MultiProviders.
e. All of the above.
ANSWERS: A, C
MultiProvider queries create processes to run on the individual info providers involved; these processes run in parallel by default. The parent process provides a synchronization point to collect the overall result from the sub-processes. Parallel processing may at times be slower than sequential processing, for example when the data volume is high.

12. Select the correct statements about the ALPHA conversion routine in BW.
a. An ALPHA conversion routine is assigned to a characteristic info object automatically when it is created.
b. An ALPHA conversion routine is used to convert characteristic values from 'external to internal' values only.
c. Conversion is done on alphabetic and numeric input values.
d. An ALPHA conversion routine removes the spaces on the right side of numeric values and right-aligns them.
e. The left side of the numeric input values is filled with zeros.
ANSWERS: A, D, E
The ALPHA conversion routine is assigned to characteristic info objects when they are created; it needs to be deleted manually if it is not required. The conversion is applied from the 'external to internal' format and vice versa. For the 'external to internal' direction, the values are right-aligned and the space on the left side is padded with zeros.

13. Select the correct statements related to navigational attributes.
a. It is better to avoid using navigational attributes from a query performance point of view.
b. If a navigational attribute is used in an aggregate, the aggregate needs to be adjusted every time there is a change in the values of this attribute.
c. An attribute included as a characteristic in the InfoCube has the same effect as being used as a navigational attribute in the cube.
d. A navigational attribute can be made a display attribute without removing data from the InfoCube.
e. None of the above.
ANSWERS: A, B, D
Queries using navigational attributes are slower, since additional tables have to be joined with the fact tables to get the desired results. If there are changes to the values of navigational attributes, the aggregates using them have to be readjusted, which is done by a change run. A navigational attribute can be made display-only without removing the data from the cube.

14. A 'check for referential integrity' is only possible for info sources with flexible updating.
a. True b. False
ANSWER: A
A referential integrity check is only possible for info sources with flexible updating.

15. Select the correct statements about physical partitioning in BW.
a. New partitions on the F table of an InfoCube are created during data loads to the InfoCube.
b. The E fact table is created when activating an InfoCube, with a number of partitions corresponding to the partition value range.
c. If a cube is not partitioned before being populated with data, it is not possible to partition the cube without removing all the data.
d. PSA table partitions can contain more than one request.
e. All of the above.
ANSWER: E
The F fact table is partitioned on request, and its partitions are created during data loads. The E fact table of the InfoCube is created with the number of partitions specified in the partitioning range. Once data has been loaded into the cube, partitioning based on the time characteristic is not possible without removing the data. PSA table partitions can contain more than one request.
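To illustrate Level 2, question 12, here is a minimal sketch of a BW 3.x-style start routine in the update rules. The communication structure /BIC/CS8ZSALES and the field /BIC/ZSTATUS are hypothetical, and the exact generated FORM signature varies by release:

  * Global area: declarations here are visible in all individual routines
  DATA: g_packets TYPE i.

  FORM startroutine
    TABLES   data_package STRUCTURE /bic/cs8zsales  " hypothetical
    CHANGING abort        LIKE sy-subrc.

  * All records of the data packet are accessible here
    DELETE data_package WHERE /bic/zstatus = 'X'.   " drop flagged records

  * Global variables keep their values across routine calls
    g_packets = g_packets + 1.

  * A value of ABORT greater than 0 would abort the whole packet
    abort = 0.
  ENDFORM.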

Question: 1. What kind of tools are available to monitor the overall Query Performance? Answers: o BW Statistics o BW Workload Analysis in ST03N (Use Export Mode!) o Content of Table RSDDSTAT Question: 2. Do I have to do something to enable such tools? Answer: o Yes, you need to turn on the BW Statistics: RSA1, choose Tools -> BW statistics for InfoCubes (Choose OLAP and WHM for your relevant Cubes) Question: 3. What kind of tools are available to analyse a specific query in detail? Answers: o Transaction RSRT o Transaction RSRTRACE Question: 4. Do I have a overall query performance problem? Answers: o Use ST03N -> BW System load values to recognize the problem. Use the number given in table 'Reporting - InfoCubes:Share of total time (s)' to check if one of the columns %OLAP, %DB, %Frontend shows a high number in all InfoCubes. o You need to run ST03N in expert mode to get these values Question:

Page 92: BW - Prepared

92 | P a g e B W I V I E W

5. What can I do if the database proportion is high for all queries? Answers: Check: o If the database statistic strategy is set up properly for your DB platform (above all for the BW specific tables) o If database parameter set up accords with SAP Notes and SAP Services (EarlyWatch) o If Buffers, I/O, CPU, memory on the database server are exhausted? o If Cube compression is used regularly o If Database partitioning is used (not available on all DB platforms) Question: 6. What can I do if the OLAP proportion is high for all queries? Answers: Check: o If the CPUs on the application server are exhausted o If the SAP R/3 memory set up is done properly (use TX ST02 to find bottlenecks) o If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT, Customizing default) Question: 7. What can I do if the client proportion is high for all queries? Answer: o Check whether most of your clients are connected via a WAN Connection and the amount of data which is transferred is rather high. Question: 8. Where can I get specific runtime information for one query? Answers: o Again you can use ST03N -> BW System Load o Depending on the time frame you select, you get historical data or current data. o To get to a specific query you need to drill down using the InfoCube name o Use Aggregation Query to get more runtime information about a single query. Use tab All data to get to the details. (DB, OLAP, and Frontend time, plus Select/ Transferred records, plus number of cells and formats) Question: 9. What kind of query performance problems can I recognize using ST03N values for a specific query? Answers: (Use Details to get the runtime segments) o High Database Runtime o High OLAP Runtime o High Frontend Runtime Question: 10. What can I do if a query has a high database runtime? Answers: o Check if an aggregate is suitable (use All data to get values "selected records to transferred records", a high number here would be an indicator for query performance improvement using an aggregate) o Check if database statistics are update to data for the Cube/Aggregate, use TX RSRV output (use database check for statistics and indexes) o Check if the read mode of the query is unfavourable - Recommended (H) Question: 11. What can I do if a query has a high OLAP runtime?


Question 11. What can I do if a query has a high OLAP runtime?
Answers:
o Check whether a high number of cells is transferred to the OLAP processor (use "All data" to get the value "No. of Cells").
o Use the RSRT technical information to check whether any extra OLAP processing is necessary (stock query, exception aggregation, calculate before aggregation, virtual characteristics/key figures, attributes in calculated key figures, time-dependent currency translation) together with a high number of transferred records.
o Check whether a user exit is involved in the OLAP runtime.
o Check whether large hierarchies are used and the entry hierarchy level is as deep as possible; this limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Values feature on the successor and predecessor columns to see which entry level of the hierarchy is used. Also check whether a proper index exists on the inclusion table.

Question 12. What can I do if a query has a high frontend runtime?
Answers:
o Check whether a very high number of cells and formattings is transferred to the frontend (use "All data" to get the value "No. of Cells"), which causes high network and frontend processing runtime.
o Check whether the frontend PCs are within the recommendations (RAM, CPU MHz).
o Check whether the bandwidth of the WAN connection is sufficient.


Metadata Search (Developer Functionality):

1. It is possible to search BI metadata (such as InfoCubes, InfoObjects, queries and Web templates) using the TREX search engine. This search is integrated into the Metadata Repository, the Data Warehousing Workbench and, to some degree, into the object editors. With the simple search, a search for one or all object types is performed in technical names and in texts.

2. During the text search, lower and upper case are ignored, so an object is also found when the case in the text differs from that in the search term. With the advanced search, you can also search in attributes. These attributes are specific to each object type. Beyond that, the search can be restricted for all object types according to the person who last changed the object and according to the time of the change.

3. For example, you can search all queries that were changed in the last month and that include both the term "overview" in the text and the characteristic customer in the definition. Further functions include searching in the delivered (A) version, fuzzy search, and the option of linking search terms with "AND" and "OR".

4. Because the advanced search described above offers more extensive options for searching in metadata, the function "Generation of Documents for Metadata" in the administration of document management (transaction RSODADMIN) was deleted. You have to schedule (delta) indexing of metadata as a regular job (transaction RSODADMIN).

Effects on Customizing:
· Installation of the TREX search engine
· Creation of an RFC destination for the TREX search engine
· Entering the RFC destination into table RSODADMIN_INT
· Determining the relevant object types
· Initial indexing of metadata
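The search semantics described above (case-insensitive matching, restriction by object type or last-changed user, AND/OR linking) can be sketched as follows. This is an illustration only; the class and function names are assumptions and this is not the TREX API.

# Illustrative sketch only -- not the TREX API. It mimics the search semantics
# described above: case-insensitive matching on technical name and text,
# optional restriction by object type / last-changed user, and AND/OR linking.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MetaObject:
    obj_type: str        # e.g. "QUERY", "CUBE", "IOBJ"
    tech_name: str
    text: str
    changed_by: str
    changed_at: datetime

def matches(obj: MetaObject, terms: list, mode: str = "AND") -> bool:
    """Case-insensitive search in technical name and text."""
    haystack = f"{obj.tech_name} {obj.text}".lower()
    hits = [t.lower() in haystack for t in terms]
    return all(hits) if mode == "AND" else any(hits)

def search(repo: list, terms: list, mode: str = "AND",
           obj_type: str = None, changed_by: str = None) -> list:
    result = [o for o in repo if matches(o, terms, mode)]
    if obj_type:
        result = [o for o in result if o.obj_type == obj_type]
    if changed_by:
        result = [o for o in result if o.changed_by == changed_by]
    return result

repo = [MetaObject("QUERY", "ZSALES_OVERVIEW", "Sales Overview by Customer",
                   "DEVUSER", datetime(2007, 3, 1))]
print([o.tech_name for o in search(repo, ["overview", "customer"],
                                   mode="AND", obj_type="QUERY")])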

Remote Activation of DataSources (Developer Functionality):

1. When activating Business Content in BI, you can activate DataSources remotely from the BI system. This activation is subject to an authorization check: you need role SAP_RO_BCTRA, and authorization object S_RO_BCTRA is checked. The authorization is valid for all DataSources of a source system. When the objects are collected, the system checks the authorizations remotely and issues a warning if you lack authorization to activate the DataSources.

2. In BI, if you trigger the transfer of the Business Content in the active version, the results of the authorization check are based on the cache. If you lack the necessary authorization for activation, the system issues a warning for the DataSources, and BW issues an error for the corresponding source-system-dependent objects (transformations, transfer rules, transfer structure, InfoPackage, process chain, process variant). In this case, you can use Customizing for the extractors to manually transfer the required DataSources in the source system from the Business Content, replicate them in the BI system, and then transfer the corresponding source-system-dependent objects from the Business Content. If you have the necessary authorizations for activation, the DataSources in the source system are transferred to the active version and replicated in the BI system. The source-system-dependent objects are activated in the BI system.


3. The source systems and/or BI systems must have at least BI Service API SAP NetWeaver 2004s; otherwise remote activation is not supported. In that case, you have to activate the DataSources in the source system manually and then replicate them to the BI system.

Copy Process Chains (Developer Functionality):

You find this function in the Process Chain menu and use it to copy the process chain you have selected, along with its references to process variants, and save it under a new name and description.

InfoObjects in Hierarchies (Data Modeling):

1. Up to Release SAP NetWeaver 2004s, it was not possible to use InfoObjects longer than 32 characters in hierarchies. Such InfoObjects could not be used as a hierarchy basic characteristic, and it was not possible to copy characteristic values for such InfoObjects as foreign characteristic nodes into existing hierarchies. As of SAP NetWeaver 2004s, characteristics of any length can be used in hierarchies.

2. To load hierarchies, the PSA transfer method has to be selected (which is recommended for loading data anyway). With the IDoc transfer method, it is still the case that only hierarchies whose characteristic values are 32 characters or shorter can be loaded.

Parallelized Deletion of Requests in DataStore Objects (Data Management):

You can now delete active requests in a DataStore object in parallel. Up to now, the requests were deleted serially within one LUW; the deletion can now be processed by package and in parallel.

Object-Specific Setting of the Runtime Parameters of DataStore Objects (Data Management):

You can now set the runtime parameters of DataStore objects per object and then transport them into connected systems. The following parameters can be maintained:
· Package size for activation
· Package size for SID determination
· Maximum wait time before a process is designated lost
· Type of processing: serial, parallel (batch), parallel (dialog)
· Number of processes to be used
· Server/server group to be used
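As a way to picture this, the parameters listed above can be modelled as a transportable, per-object settings record. This is a sketch only: all names and default values below are assumptions for illustration, not SAP values.

# Illustrative sketch only: modelling the per-object DataStore runtime
# parameters listed above as a transportable settings record. All names
# and defaults here are assumptions for illustration, not SAP values.
from dataclasses import dataclass, asdict
from enum import Enum

class ProcessingType(Enum):
    SERIAL = "serial"
    PARALLEL_BATCH = "parallel (batch)"
    PARALLEL_DIALOG = "parallel (dialog)"

@dataclass
class DsoRuntimeParams:
    activation_package_size: int = 20_000
    sid_package_size: int = 20_000
    max_wait_seconds: int = 300          # before a process is designated lost
    processing: ProcessingType = ProcessingType.PARALLEL_BATCH
    num_processes: int = 3
    server_group: str = ""               # empty = any server

params = DsoRuntimeParams(activation_package_size=50_000, num_processes=5)
print(asdict(params))  # the record that would be maintained per object and transported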

Enhanced Monitor for Request Processing in DataStore Objects (Data Management):

1. For the request operations executed on DataStore objects (activation, rollback and so on), there is now a separate, detailed monitor. In previous releases, request-changing operations were displayed in the extraction monitor; when the same operation was executed multiple times, it was very difficult to assign the messages to the respective operations.

2. To allow simpler error analysis and to expose optimization potential when configuring the runtime parameters, as of release SAP NetWeaver 2004s all messages relevant to DataStore objects are displayed in their own monitor.


Write-Optimized DataStore Object (Data Management):

1. Up to now it was necessary to activate the data loaded into a DataStore object to make it visible to reporting or to be able to update it to further InfoProviders. As of SAP NetWeaver 2004s, a new type of DataStore object is introduced: the write-optimized DataStore object.

2. The objective of the new object type is to save data as efficiently as possible in order to be able to process it further as quickly as possible, without the additional effort of generating SIDs, aggregation and data-record-based delta. Data that is loaded into write-optimized DataStore objects is available immediately for further processing. The activation step that was necessary up to now is no longer required.

3. The loaded data is not aggregated. If two data records with the same logical key are extracted from the source, both records are saved in the DataStore object. During loading, for reasons of efficiency, no SID values are determined for the loaded characteristics. The data is still available for reporting; however, in comparison to standard DataStore objects, you can expect lower performance, because the necessary SID values have to be determined at query runtime.
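The behavioural difference can be sketched in a few lines of plain Python. This is an illustration of the contrast described above (overwrite-by-key vs. append-everything), not a model of the actual table layout, which in the write-optimized case uses a technical key.

# Illustrative sketch only: contrasting the load behaviour of a standard
# DataStore object (overwrite by logical key on activation) with a
# write-optimized one (append everything, no aggregation, no activation).

def load_standard_dso(active_table: dict, records: list) -> dict:
    """Standard DSO: the logical key identifies the row; activation
    overwrites the existing record with the newest version."""
    for rec in records:
        key = (rec["doc_no"], rec["item"])   # logical key
        active_table[key] = rec              # overwrite semantics
    return active_table

def load_write_optimized_dso(table: list, records: list) -> list:
    """Write-optimized DSO: records are appended as they arrive;
    duplicates of the same logical key are all kept."""
    table.extend(records)                    # no aggregation, no activation
    return table

recs = [{"doc_no": "100", "item": 10, "qty": 5},
        {"doc_no": "100", "item": 10, "qty": 8}]   # same logical key twice

print(len(load_standard_dso({}, recs)))         # 1 -> last version wins
print(len(load_write_optimized_dso([], recs)))  # 2 -> both records kept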

Deleting from the Change Log (Data Management):

The process type Deletion of Requests from the Change Log supports the deletion of change log files. You select DataStore objects to determine the selection of requests; the system supports multiple selections, and you select the objects in a dialog box. The process type supports the deletion of requests from any number of change logs.

Using InfoCubes in InfoSets (Data Modeling):

1. You can now include InfoCubes in an InfoSet and use them in a join. InfoCubes are handled logically in InfoSets like DataStore objects; this is also true for time dependencies. In an InfoCube, data that is valid for different dates can be read.

2. For performance reasons, you cannot define an InfoCube as the right operand of a left outer join. SAP does not generally support more than two InfoCubes in an InfoSet.

Pseudo Time Dependency of DataStore Objects and InfoCubes in InfoSets (Data Modeling):

In BI, only master data can be defined as a time-dependent data source; two additional fields/attributes are added to the characteristic. DataStore objects and InfoCubes that are used as InfoProviders in an InfoSet cannot be defined as time-dependent. As of SAP NetWeaver 2004s, you can specify a date or use a time characteristic with DataStore objects and InfoCubes to describe the validity of a record. These InfoProviders are then interpreted as time-dependent data sources.

Left Outer: Include Filter Value in On-Condition (Data Modeling):


1. The global properties in InfoSet maintenance have been enhanced by the setting Left Outer: Include Filter Value in On-Condition. This indicator controls how a condition on a field of a left-outer table is converted in the SQL statement, which affects the query results:
· If the indicator is set, the condition/restriction is included in the on-condition of the SQL statement. In this case the condition is evaluated before the join.
· If the indicator is not set, the condition/restriction is included in the where-condition. In this case the condition is only evaluated after the join.
The indicator is not set by default.
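Why the two placements give different results can be shown with a small simulation. This is plain Python standing in for the generated SQL, purely for illustration: with the filter in the on-condition, unmatched left-table rows survive with a NULL match; with the filter in the where-condition, they are dropped after the join.

# Minimal sketch (plain Python, not the generated SQL): why putting a filter
# on the right-hand table into the ON-condition vs. the WHERE-condition
# changes the result of a LEFT OUTER JOIN.

customers = [{"id": 1, "name": "A"}, {"id": 2, "name": "B"}]
orders    = [{"cust": 1, "year": 2006}, {"cust": 1, "year": 2007},
             {"cust": 2, "year": 2006}]

def left_outer(filter_in_on: bool, year: int):
    rows = []
    for c in customers:
        # ON-condition: the filter is applied while matching (before the join),
        # so unmatched customers still appear with a NULL (None) order.
        cond = lambda o: o["cust"] == c["id"] and (not filter_in_on or o["year"] == year)
        matches = [o for o in orders if cond(o)] or [None]
        rows += [(c["name"], o) for o in matches]
    if not filter_in_on:
        # WHERE-condition: the filter is applied after the join, so rows with
        # NULL (unmatched) or non-matching orders are dropped entirely.
        rows = [(n, o) for n, o in rows if o is not None and o["year"] == year]
    return rows

print(left_outer(filter_in_on=True,  year=2007))  # [('A', {...2007}), ('B', None)]
print(left_outer(filter_in_on=False, year=2007))  # [('A', {...2007})] -- 'B' disappears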

Key Date Derivation from Time Characteristics (Data Modeling):

Key dates can be derived from the time characteristics 0CALWEEK, 0CALMONTH, 0CALQUARTER, 0CALYEAR, 0FISCPER and 0FISCYEAR. It was previously possible to specify the first day, the last day, or a fixed offset for key date derivation. As of SAP NetWeaver 2004s, you can also use a key date derivation type to define the key date.
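A minimal sketch of the three basic derivation options for 0CALMONTH (first day, last day, fixed offset into the period); illustration only, the clamping of the offset inside the month is an assumed detail.

# Minimal sketch: deriving a key date from a 0CALMONTH value (YYYYMM) as
# first day, last day, or a fixed day offset into the period -- the three
# derivation options mentioned above. Pure illustration, not SAP code.
import calendar
from datetime import date, timedelta

def key_date_from_calmonth(calmonth: str, mode: str = "first", offset: int = 0) -> date:
    year, month = int(calmonth[:4]), int(calmonth[4:])
    first = date(year, month, 1)
    last = date(year, month, calendar.monthrange(year, month)[1])
    if mode == "first":
        return first
    if mode == "last":
        return last
    if mode == "offset":
        return min(first + timedelta(days=offset), last)  # clamp inside the month
    raise ValueError(mode)

print(key_date_from_calmonth("200702", "first"))             # 2007-02-01
print(key_date_from_calmonth("200702", "last"))              # 2007-02-28
print(key_date_from_calmonth("200702", "offset", offset=14)) # 2007-02-15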

Repartitioning of InfoCubes and DataStore Objects (Data Management):

With SAP NetWeaver 2004s, the repartitioning of InfoCubes and DataStore objects that are already filled is supported on the database. With partitioning, the runtime of read and modifying accesses to InfoCubes and DataStore objects can be decreased. Using repartitioning, non-partitioned InfoCubes and DataStore objects can be partitioned, or the partitioning schema of already partitioned InfoCubes and DataStore objects can be adapted.
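Conceptually, range partitioning assigns each fact row to a partition by its time characteristic, and repartitioning recomputes that assignment with a new set of ranges. A simplified sketch (not the database DDL that BW generates):

# Illustrative sketch: how range partitioning on 0CALMONTH assigns fact rows
# to partitions, and how repartitioning = recomputing that assignment with a
# new set of ranges. Simplified Python, not the database DDL that BW generates.
import bisect

def make_partitions(boundaries: list) -> list:
    """boundaries: sorted upper bounds, e.g. ['200612', '200712'] gives
    three partitions: <=200612, <=200712, and an overflow partition."""
    return [[] for _ in range(len(boundaries) + 1)]

def assign(partitions: list, boundaries: list, row: dict) -> None:
    idx = bisect.bisect_left(boundaries, row["0CALMONTH"])
    partitions[idx].append(row)

boundaries = ["200612", "200712"]
parts = make_partitions(boundaries)
for row in [{"0CALMONTH": "200603", "amount": 10},
            {"0CALMONTH": "200702", "amount": 20},
            {"0CALMONTH": "200801", "amount": 30}]:
    assign(parts, boundaries, row)
print([len(p) for p in parts])  # [1, 1, 1] -> one row per partition range

# Repartitioning: choose new boundaries and redistribute the existing rows.
new_boundaries = ["200612", "200706", "200712"]
new_parts = make_partitions(new_boundaries)
for p in parts:
    for row in p:
        assign(new_parts, new_boundaries, row)
print([len(p) for p in new_parts])  # [1, 1, 0, 1]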

Remodeling InfoProviders (Data Modeling):

1. As of SAP NetWeaver 2004s, you can change the structure of InfoCubes into which you have already loaded data, without losing the data. You have the following remodeling options:

2. For characteristics:
· Inserting, or replacing characteristics with: constants, an attribute of an InfoObject within the same dimension, the value of another InfoObject within the same dimension, or a customer exit (for user-specific coding)
· Deleting

3. For key figures:
· Inserting: constants or a customer exit (for user-specific coding)
· Replacing key figures with: a customer exit (for user-specific coding)
· Deleting

4. SAP NetWeaver 2004s does not support the remodeling of InfoObjects or DataStore objects; this is planned for future releases. Before you start remodeling, make sure:
(A) You have stopped any process chains that run periodically and affect the corresponding InfoProvider. Do not restart these process chains until remodeling is finished.
(B) There is enough available tablespace on the database.

5. After remodeling, check which BI objects connected to the InfoProvider (transformation rules, MultiProviders, queries and so on) have been deactivated. You have to reactivate these objects manually.


Parallel Processing for Aggregates (Performance):

1. The change run, rollup, condensing and checking of multiple aggregates can be executed in parallel. Parallelization takes place per aggregate. The parallel processes always run in the background, even when the main process is executed in dialog.

2. This can considerably decrease the execution time of these processes. You can determine the degree of parallelization, the server on which the processes are to run, and with which priority.

3. If no setting is made, a maximum of three processes are executed in parallel. This setting can be adjusted per process type (change run, rollup, condensing of aggregates and checks). Within process chains, the setting can be overridden for every one of the processes listed above. Parallelization of the change run according to SAP Note 534630 is obsolete and is no longer supported.
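The parallelization pattern itself is straightforward; a minimal sketch follows, with the default degree of 3 matching the behaviour when no setting is made. Illustration only: BW runs these as background work processes, not Python threads, and rollup_aggregate is a placeholder.

# Minimal sketch of the parallelization pattern described above: processing
# several aggregates concurrently with a configurable degree of parallelism
# (default 3, matching the behaviour when no setting is made). Illustration
# only -- BW runs these as background work processes, not Python threads.
from concurrent.futures import ThreadPoolExecutor

def rollup_aggregate(aggregate: str) -> str:
    # Placeholder for the real rollup of one aggregate.
    return f"{aggregate}: rolled up"

def rollup_all(aggregates: list, max_parallel: int = 3) -> list:
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(rollup_aggregate, aggregates))

print(rollup_all(["100001", "100002", "100003", "100004"]))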

Multiple Change Runs (Performance):

1. You can start multiple change runs simultaneously. The prerequisite is that the lists of master data and hierarchies to be activated are different and that the changes affect different InfoCubes. After a change run, all affected aggregates are condensed automatically.

2. If a change run terminates, the same change run must be started again, with the same parameterization (the same list of characteristics and hierarchies). SAP Note 583202 is obsolete.

Partitioning Optional for Aggregates (Performance):

1. Up to now, the aggregate fact tables were partitioned whenever the associated InfoCube was partitioned and the partitioning characteristic was in the aggregate. Now it is possible to suppress partitioning for individual aggregates. If aggregates do not contain much data, very small partitions can result, which affects read performance; aggregates with very little data should therefore not be partitioned.

2. Aggregates that are not to be partitioned have to be activated and filled again after the associated property has been set.

MOLAP Store (Deleted) (Performance):

Previously you were able to create aggregates either on the basis of a ROLAP store or on the basis of a MOLAP store. The MOLAP store was a platform-specific means of optimizing query performance: it used Microsoft Analysis Services and was therefore only available on the Microsoft SQL Server database platform. Because the HPA indexes available with SAP NetWeaver 2004s are a platform-independent alternative to ROLAP aggregates, with high performance and low administrative costs, the MOLAP store is no longer supported.

Data Transformation (Data Management):


1. A transformation has a graphical user interface and replaces the transfer rules and update rules, together with the functionality of the data transfer process (DTP). Transformations are generally used to transform an input format into an output format. A transformation consists of rules; a rule defines how the data content of a target field is determined. Various rule types are available, such as direct transfer, currency translation, unit of measure conversion, routine, and read from master data.

2. Block transformations can be realized using data-package-based rule types such as the start routine. If the output format has key fields, the defined aggregation behavior is taken into account when the transformation writes to the output format. Using a transformation, every (data) source can be converted into the format of the target with a single transformation (one-step procedure). An InfoSource is only required for complex transformations (multistep procedures) that cannot be performed in a one-step procedure.

3. The following functional limitations currently apply:
· You cannot use hierarchies as the source or target of a transformation.
· You cannot use master data as the source of a transformation.
· You cannot use a template to create a transformation.
· No documentation has yet been created in the Metadata Repository for transformations.
· In the transformation there is no check for referential integrity, the InfoObject transfer routines are not considered, and routines cannot be created using the return table.

Quantity Conversion:

As of SAP NetWeaver 2004s you can create quantity conversion types using transaction RSUOM. The business rules of the conversion are established in the quantity conversion type. The conversion type is a combination of different parameters (conversion factors, source and target units of measure) that determine how the conversion is performed. In terms of functionality, quantity conversion is structured similarly to currency translation. Quantity conversion allows you to convert key figures with units that have different units of measure in the source system into a uniform unit of measure in the BI system when you update them into InfoCubes.
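The essence of a quantity conversion type, a set of factors from source units into a uniform target unit applied to key figures at update time, can be sketched as follows. The factors shown are generic examples, not customizing content.

# Minimal sketch of a quantity conversion type: a set of conversion factors
# from source units into a uniform target unit, applied to key figures at
# update time. Factors shown are generic examples, not customizing content.

class QuantityConversionType:
    def __init__(self, target_unit: str, factors: dict):
        # factors: source unit -> multiplier into the target unit
        self.target_unit = target_unit
        self.factors = factors

    def convert(self, value: float, source_unit: str) -> float:
        if source_unit == self.target_unit:
            return value
        try:
            return value * self.factors[source_unit]
        except KeyError:
            raise ValueError(f"no conversion factor {source_unit} -> {self.target_unit}")

# Convert everything to kilograms before updating the InfoCube.
to_kg = QuantityConversionType("KG", {"G": 0.001, "TO": 1000.0, "LB": 0.45359237})
print(to_kg.convert(500, "G"))  # 0.5
print(to_kg.convert(2, "TO"))   # 2000.0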


Data Transfer Process:

You use the data transfer process (DTP) to transfer data within BI from one persistent object to another, in accordance with certain transformations and filters. In this respect, it replaces the InfoPackage, which now only loads data into the entry layer of BI (the PSA), as well as the data mart interface. The data transfer process makes the transfer processes in the data warehousing layer more transparent, and optimized parallel processing improves its performance (the data transfer process determines the processing mode). You can use the data transfer process to separate delta processes for different targets, and you can use filter options between the persistent objects on various levels; for example, you can use filters between a DataStore object and an InfoCube. Data transfer processes are used for standard data transfer, for real-time data acquisition, and for accessing data directly. The data transfer process is available as a process type in process chain maintenance and is intended to be used in process chains.

ETL Error Handling:

The data transfer process supports you in handling data records with errors, including error handling for DataStore objects. As was previously the case with InfoPackages, you can determine how the system responds if errors occur. At runtime, the incorrect data records are sorted out and can be written to an error stack (a request-based database table). After the error has been resolved, you can update the data from the error stack on to the target. It is easier to restart failed load processes if the data is written to a temporary store after each processing step, because this allows you to determine the processing step in which the error occurred. You can display the data records in the error stack from the monitor for the data transfer process request, or in the temporary storage for the processing step (if filled). In data transfer process maintenance, you determine which processing steps you want to store temporarily.
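The error-stack idea reduces to a simple pattern: divert failing records instead of aborting the load, then post them to the target after correction. A minimal sketch, where the validation rule is an arbitrary example:

# Minimal sketch of the error-stack idea: during a DTP load, records that
# fail a check are diverted to a request-based error stack instead of
# aborting the load; after correction they can be posted to the target.
# The validation rule here is an arbitrary example.

def load_with_error_stack(records: list, target: list, error_stack: list) -> None:
    for rec in records:
        if rec.get("qty", 0) >= 0:      # example check; real DTPs apply
            target.append(rec)          # transformations and integrity checks
        else:
            error_stack.append(rec)     # kept for later correction

target, error_stack = [], []
load_with_error_stack(
    [{"doc": "1", "qty": 5}, {"doc": "2", "qty": -3}], target, error_stack)
print(len(target), len(error_stack))    # 1 1

# After resolving the error, update the target from the error stack.
for rec in error_stack:
    rec["qty"] = abs(rec["qty"])        # the "correction"
load_with_error_stack(error_stack, target, [])
print(len(target))                      # 2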
InfoPackages:

InfoPackages now only load data into the input layer of BI, the Persistent Staging Area (PSA); further distribution of the data within BI is done by data transfer processes. This has led to the following changes:
· New tab page Extraction -- The Extraction tab page includes the settings for adapter and data format that were made for the DataSource. For data transfer from files, the External Data tab page is obsolete; the settings are made in DataSource maintenance.
· Tab page Processing -- The information on how the data is updated is obsolete, because further processing of the data is always controlled by data transfer processes.
· Tab page Updating -- On the Updating tab page, you can set the update mode to the PSA depending on the settings in the DataSource. In the data transfer process, you now determine how the update from the PSA to other targets is performed; here you have the option of separating the delta transfer for various targets.
For real-time data acquisition with the Service API, you create special InfoPackages in which you determine how the requests are handled by the daemon (for example, after which time interval a request for real-time data acquisition should be closed and a new one opened). For real-time data acquisition with Web services (push), you also create special InfoPackages to set certain parameters for real-time data acquisition, such as sizes and time limits for requests.

PSA:

The persistent staging area (PSA), the entry layer for data in BI, has been changed in SAP NetWeaver 2004s. Previously, the PSA table was part of the transfer structure, and you managed it in the Administrator Workbench in its own object tree. Now you manage the PSA table for the entry layer from the DataSource: the PSA table is generated when you activate the DataSource. In an object tree in the Data Warehousing Workbench, you choose the context menu option Manage to display a DataSource in PSA table management, where you can display or delete data. Alternatively, you can access PSA maintenance from the load process monitor. The PSA tree is therefore obsolete.

Real-Time Data Acquisition:

Real-time data acquisition supports tactical decision making. You use it if you want to transfer data to BI at frequent intervals (every hour or minute) and access this data in reporting frequently or regularly (several times a day, at least). In terms of data acquisition, it supports operational reporting by allowing you to send data to the delta queue or PSA table in real time. You use a daemon to transfer DataStore objects that have been released for reporting to the ODS layer at frequent, regular intervals. The data is stored persistently in BI. You can use real-time data acquisition for DataSources in SAP source systems that have been released for real time, and for data that is transferred into BI using the Web service (push).

A daemon controls the transfer of data into the PSA table and its further posting into the DataStore object. In BI, InfoPackages are created for real-time data acquisition; these are scheduled using an assigned daemon and are executed at regular intervals. With certain data transfer processes for real-time data acquisition, the daemon takes over the further posting of data from the PSA to DataStore objects. As soon as data is successfully posted to the DataStore object, it is available for reporting; refresh the query display to see the up-to-date data. In the query, a time stamp shows the age of the data. The monitor for real-time data acquisition displays the available daemons and their status. Under the relevant DataSource, the system displays the InfoPackages and data transfer processes with requests that are assigned to each daemon. You can use the monitor to execute various functions for the daemon, DataSource, InfoPackage, data transfer process, and requests.
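The daemon pattern described above amounts to a loop that wakes at a fixed interval and moves data along the staging path. A minimal sketch, where the interval and queue structures are illustrative assumptions:

# Minimal sketch of the daemon pattern described above: a loop that wakes
# at a fixed interval, moves newly arrived records from the delta queue
# into the PSA, and posts them on to the DataStore object. Interval and
# queue structures are illustrative assumptions.
import time
from collections import deque

delta_queue = deque([{"doc": "1"}, {"doc": "2"}])   # records pushed by the source
psa, dso = [], []

def daemon_cycle() -> None:
    while delta_queue:                  # 1) delta queue -> PSA
        psa.append(delta_queue.popleft())
    while psa:                          # 2) PSA -> DataStore object
        dso.append(psa.pop(0))          #    (available for reporting once posted)

def run_daemon(cycles: int, interval_seconds: float = 60.0) -> None:
    for _ in range(cycles):
        daemon_cycle()
        time.sleep(interval_seconds)    # wake up again after the interval

daemon_cycle()
print(len(dso))  # 2 -> records are reportable; a query shows their time stamp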


Archiving Request Administration Data:

You can now archive the log and administration data of requests. This allows you to improve the performance of the load monitor and the monitor for load processes, and to free up tablespace on the database. The archiving concept for request administration data is based on the SAP NetWeaver data archiving concept: the archiving object BWREQARCH contains the information about which database tables are used for archiving and which programs you can run (write program, delete program, reload program). You execute these programs in transaction SARA (archive administration for an archiving object). In addition, in the Administration functional area of the Data Warehousing Workbench, in the archive management for requests, you can manage archive runs for requests and execute various functions for them. After an upgrade, use BI background management or transaction SE38 to execute report RSSTATMAN_CHECK_CONVERT_DTA and report RSSTATMAN_CHECK_CONVERT_PSA for all objects (InfoProviders and PSA tables). Execute these reports at least once so that the available request information for the existing objects is written to the new table for quick access and is prepared for archiving. Check that the reports have successfully converted your BI objects; only perform archiving runs for request administration data after you have executed the reports.

Flexible process paths based on multi-value decisions:

The workflow and decision process types support the event Process ends with complex status. When you use these process types, you can control the process chain on the basis of multi-value decisions: the process does not have to end simply successfully or with errors. For example, the day of the week can be used to decide that the process was successful and to determine how the process chain is processed further. With the workflow option, the user makes this decision; with the decision process type, the final status of the process, and therefore the decision, is determined on the basis of conditions, which are stored as formulas.

Evaluating the output of system commands:

You use this function to decide whether a system command process is successful or has errors, based on whether the output of the command includes a character string that you have defined. This allows you to check, for example, whether a particular file exists in a directory before you load data from it. If the file is not in the directory, the load process can be repeated at predetermined intervals.
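A minimal sketch of evaluating a system command's output: run the command, then decide success or failure depending on whether a defined character string appears in its output. The command and search string below are examples only.

# Minimal sketch of evaluating a system command's output: run the command,
# then decide success or failure depending on whether a defined character
# string appears in its output. The command and search string are examples.
import subprocess

def command_succeeds(command: list, expected_substring: str) -> bool:
    result = subprocess.run(command, capture_output=True, text=True)
    return expected_substring in result.stdout

# Example: check that a source file is present before starting a load.
if command_succeeds(["ls", "/tmp"], "sales_delta.csv"):
    print("file found -- start the load process")
else:
    print("file missing -- retry at the next scheduled interval")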

Repairing and repeating process chains:

You use this function to repair processes that have terminated: you execute the same instance again, or repeat it (execute a new instance of the process) if this is supported by the process type. You call this function in the log view, in the context menu of the process that has errors. You can restart a terminated process in the log view of process chain maintenance when this is possible for the process type. If the process cannot be repaired or repeated after termination, the corresponding entry is missing from the context menu in the log view of process chain maintenance; in this case, you can start the subsequent processes, for which a corresponding entry can be found in their context menus.

Executing process chains synchronously:

You use this function to schedule and execute a process chain in dialog instead of in the background; the processes in the chain are processed serially using a dialog process. With synchronous execution, you can debug process chains or simulate a process chain run.

Error handling in process chains:


You use this function in the attribute maintenance of a process chain to classify all incorrect processes of the chain as successful with regard to the overall status of the run, provided you have scheduled a successor process Upon Errors or Always. This function is relevant if you are using metachains: it allows you to continue processing a metachain despite errors in its subchains, if the successor of the subchain is scheduled Upon Success.

Determining the user that executes the process chain:

You use this function in the attribute maintenance of a process chain to determine which user executes the process chain. In the default setting, this is the BI background user.

Display mode in process chain maintenance:

When you access process chain maintenance, the process chain display appears. The process chain is not locked and the transport connection is not called. In the process chain display, you can schedule the chain without locking it.

Checking the number of background processes available for a process chain:

During the check, the system calculates the number of parallel processes according to the structure of the tree and compares the result with the number of background processes on the selected server (or the total number across all available servers if no server is specified in the attributes of the process chain). If the number of parallel processes is greater than the number of available background processes, the system highlights every level of the process chain where the number of processes is too high and produces a warning (see the sketch at the end of this section).

Open Hub / Data Transfer Process Integration:

As of SAP NetWeaver 2004s SPS 6, the open hub destination has its own maintenance interface and can be connected to the data transfer process as an independent object. As a result, all data transfer process services are available for the open hub destination, and you can now select an open hub destination as the target of a data transfer process. In this way, the data is transformed as with all other BI objects. In addition to the InfoCube, InfoObject and DataStore object, you can also use the DataSource and InfoSource as a template for the field definitions of the open hub destination. The open hub destination now has its own tree in the Data Warehousing Workbench under Modeling; this tree is structured by InfoAreas. The open hub service with the InfoSpoke that was provided until now can still be used, but we recommend that new objects be defined with the new technology.
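The sketch referenced in the background-process check above: estimating the maximum number of parallel processes a chain can demand by walking its tree structure. This is a simplifying model for illustration (it assumes all successors of a node start in parallel once it finishes), not BW's actual check.

# Sketch referenced above: estimating the maximum number of parallel
# processes a process chain can demand, by walking its tree structure.
# A chain is modelled as a node whose successors all start in parallel
# once it finishes -- a simplifying assumption for illustration.

def max_parallel(chain: dict) -> int:
    """chain: {'name': ..., 'next': [subchains]}. The widest set of
    branches that can run at the same time bounds the demand for
    background processes."""
    successors = chain.get("next", [])
    if not successors:
        return 1
    # All successors start together, so their demands add up;
    # the node itself runs alone before them.
    return max(1, sum(max_parallel(s) for s in successors))

chain = {"name": "start", "next": [
    {"name": "load_A", "next": [{"name": "rollup_A"}]},
    {"name": "load_B"},
    {"name": "load_C"},
]}

available_background_processes = 2
demand = max_parallel(chain)
print(demand)  # 3 -> exceeds the 2 available processes; BW would warn here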