Channel: SAP BW Powered by SAP HANA

HANA Optimized BW Content – ASUG Webcast Part 1


SAP’s Brian Wood and Natascha Marenfield gave this ASUG webcast last month.

1fig.png

 

Figure 1: Source: SAP

 

Business Content is part of the BW offering.  It gives value immediately and “shortens time to value”.

 

It can be customized to fit customer-specific requirements.

 

However, Business Content has grown along with the application suite over the years, and data models such as those for ECC Finance and Controlling are not harmonized.

 

SAP wants to build models that are harmonized across applications.

 

They want to provide a baseline for your development

 

Going forward with HANA, the goal is the ability to add real-time or near real-time data to analysis within BW.

 

It provides a framework for real-time and historical data analysis

 

It includes best practices for mixed scenarios – BW InfoProviders & custom HANA views

 

Parts of the Business Suite are not transparent; Business Content implements the semantics of the suite and provides that transparency.

 

Business Content documentation is available on the help.sap.com website.

 

Smart Data Access allows you to see data in another system without loading it into BW; it can be used within BW on HANA.

2fig.png

Figure 2: Source: SAP

 

Figure 2 shows the goal of new business content: to provide a baseline for benefits for both business users and IT

 

Business users get a baseline that allows for flexible and real-time analytics as well as traditional historical reporting

 

It provides a higher level of detail in HANA

 

It has the ability to create complex new key figures

 

Predictive library, text analysis library, business function libraries are not in a typical relational database management system

 

It combines departmental data with enterprise data as well

 

BW workspaces give business users ability to upload their data and join that with enterprise data

 

It provides patterns for sustainable architecture – LSA updated for BW on HANA

 

HANA analysis processes in BW allow you to leverage the predictive library.

3fig.png

Figure 3: Source: SAP

 

What are the main considerations for design? The new Business Content is optimized for and follows LSA++ for BW on HANA.

 

The new content provides a higher level of detail, such as line items.

 

It contains some mixed scenarios with both HANA and BW capabilities, so customers get an idea of how to use the technology for their content area.

 

It provides some optimized transformations with push-down to the HANA engine.

 

It offers more flexibility in data acquisition and reporting

 

SAP wants to use consolidated InfoObjects, which have been available in Business Content for a while, although the older content areas have not included them.

 

SAP took those InfoObjects into the new model to harmonize master data.

 

The new content is only visible if you switch on a special business function; it has its own namespace, /IMO/.

4fig.png

Figure 4: Source: SAP

 

Figure 4 shows patterns for mixed scenarios and how to take advantage of HANA database underneath

 

A customer runs BW on HANA and also uses HANA for real-time reporting on key figures loaded into HANA via SLT; BW can then give them the benefit of real-time reporting data.

 

An example is a line-item point-of-sale system used to analyze shopping baskets during campaigns.

 

5fig.png

Figure 5: Source: SAP

 

Figure 5 shows the architecture for consuming HANA data: a VirtualProvider exposes a HANA analytic view so that it can be used in a query.

 

Security is handled by BW

 

SAP said data is passed through the BW world in a transparent way.

 

6fig.png

Figure 6: Source: SAP

 

For “last and past”, the idea is that the customer loads historical data into the BW system (in the finance area) but wants real-time data as well. The concept is that the customer loads data into HANA for real-time access, while historical data is loaded via an extractor on the BW side.

7fig.png

Figure 7: Source: SAP

 

A MultiProvider is used to union the data with the Inventory Cube or DSO.

8fig.png

Figure 8: Source: SAP

 

Figure 8 shows “Power of HANA for flexible calculation of complex key figures“

9fig.png

Figure 9: Source: SAP

 

You have your data in BW on HANA in different DSOs, for example.

 

From the DSO you can generate an analytic view on the schema side.

 

Data stays in BW schema

 

Additional data model is generated from definition of DSO

 

Output is consumed via a Virtual Provider

 

10fig.png

Figure 10: Source: SAP

 

SAP started with Sales and Distribution, the old SD area, focusing on core business processes.

 

This is delivered with BI Content 7.37 SP4 and 7.47 SP4.

 

In 7.57, SAP takes advantage of new BW 7.40 features such as the CompositeProvider.

11fig.png

Figure 11: Source: SAP

 

In the last SP, SAP added parts of Accounts Payable, Finance, Accounts Receivable, and two areas in Controlling.

 

To be continued..

 

Related:

Add these ASUG BW / HANA sessions to your Annual Conference Agenda


If you are an ASUG member, consider registering for the following upcoming webcasts:


HANA Optimized BW Content – ASUG Webcast Part 2


SAP’s Brian Wood and Natascha Marenfield gave this ASUG webcast last month.  Part 1 is here HANA Optimized BW Content – ASUG Webcast Part 1

1fig.png

Figure 1: Source: SAP

 

Figure 1 provides more detail on which patterns are used in which areas.

 

For this, SAP delivers only a few queries, as most customers build their own.

2fig.png

Figure 2: Source: SAP

 

Data flows of new content are shown in Figure 2 for reference.

3fig.png

Figure 3: Source: SAP

 

Figure 3 shows 5 areas for purchasing

4fig.png

Figure 4: Source: SAP

 

Last and past: historical Accounts Receivable data is loaded to BW, and the customer can load the latest AR items in real time to HANA; these are merged to provide current outstanding and overdue analysis.

5fig.png

Figure 5: Source: SAP

 

Figure 5 is an example of “last and past”

6fig.png

Figure 6: Source: SAP

 

With LSA++, cubes are no longer needed, as you can query directly at the DSO level in the Business Content.

7fig.png

Figure 7: Source: SAP

 

Figure 7 shows what the previous figures illustrate: reporting is based on DSOs instead of cubes.

8fig.png

Figure 8: Source: SAP

 

Figure 8 shows SD business content based on cubes.

9fig.png

Figure 9: Source: SAP

 

Figure 9 shows that only source-specific transformations are performed, with the MultiProviders on top.

10fig.png

Figure 10: Source: SAP

 

Business content is a real value-add.  It is extended to take advantage of new HANA-based capabilities

 

For reporting, you go against the DSOs instead of InfoCubes.

 

Now it has persistent InfoProviders stored in the database; since storing the data again in cubes is redundant, the number of layers is reduced.

 

Links:

SAP Help – BI Content 7.47 / BI Content 7.37 SP 07 Documentation:BI Content & BI Content Extensions - SAP Library

 

Related:

Join us in May for ASUG Annual Conference   - Pre-Conference SAP BusinessObjects BI4.1 with SAP BW on HANA and ERP Hands-on – Everything You Need in One Day June 2nd - Register at: ASUG Preconference Seminars

 



Share your Story: Call for Sessions for ASUG at SAP d-code (former TechEd)

Share your knowledge with others and submit a proposal to speak at SAP d-code. Selected proposals will be part of the ASUG and SAP d-code: Partners in Education program, providing attendees with interactive learning experiences with fellow customers.


View the education tracks planned this year.  If selected, you will receive a complimentary registration for the conference, and it will give you valuable professional exposure.


Follow this link to create a speaker account where you can formally submit your proposal, review important deadlines, and find other general information about SAP d-code.


The deadline to submit your abstract is May 25. If you have any questions, please e-mail sapdcodespeaker.info@sap.com

Estimation for BW on HANA Migration (Landscape Migration)

BW Pre and Post Upgrade and Migration Tasks


BW Pre and Post Upgrade and Migration Tasks:

 

Normally, when there is a BW system upgrade or a migration to the HANA database from another DB (xDB), we need to perform some pre- and post-upgrade and migration steps manually. For this, we need to go to each of the transactions and execute them, providing the required selection inputs.

Instead of executing these tasks manually, SAP has delivered task lists that automate most of the BW tasks required for a system upgrade and migration.

The steps below describe how to implement and execute the upgrade and migration task lists.

 

Creation of the task lists.

 

First, check for the upgrade and migration task lists (SAP_BW_BEFORE UPGRADE, SAP_BW_AFTER UPGRADE, SAP_BW_BEFORE MIGRATION, SAP_BW_AFTER MIGRATION) in transaction STC01. If these task lists do not exist, proceed as follows to implement them.

 

Please check SAP Note 1734333, which contains the code required to create the task lists that facilitate a BW system upgrade and migration. Also check the validity and prerequisites in the SAP Note. The task list creation code files are attached below.

 

By implementing the following manual steps, four task lists will be created: SAP_BW_BEFORE UPGRADE, SAP_BW_AFTER UPGRADE, SAP_BW_BEFORE MIGRATION, SAP_BW_AFTER MIGRATION. Please be aware that these task lists are intended to ease some of the manual tasks required by users. They do not fully replace the execution of ALL tasks required during the different phases of an upgrade or migration; as such, they should be used in combination with the available upgrade guides.

 

Manual steps in SAP Note 1734333 to create the task lists:

 

1) Download the attached file ZNOTE_1734333_POST_70X.txt or ZNOTE_1734333_POST_73X.txt or ZNOTE_1734333_POST_74X.txt based on the BW release.

2) Create a report 'ZNOTE_1734333_POST' in transaction SE38 (ABAP Editor). In the attributes pop-up, enter a description and choose the type 'Executable program'. Save the program as a local temporary object.

 

Capture1.PNG

 

3) Copy and paste the code from the file into the ABAP editor. Save and activate the report.

 

4) Execute the report. The program will automatically create several objects (package, function group, error messages, task lists) and will prompt you for a transport request. Please use the same transport request as for the rest of the manual activities and the corrections of the note. You do not need to transport the report you created, only the objects generated by the report.

First execute the report in TESTRUN mode to check for any missing prerequisites or errors, then execute it in UPDATE mode. This report will create the task lists for upgrade and migration.

 

Capture2.PNG

 

5) Go to STC01 and check for the task lists.

 

Capture3.PNG

 

Execution of Task lists

 

SAP_BW_BEFORE_UPGRADE

 

1) Go to transaction STC01, choose the task list, and display it.

 

Capture_before_upgrade.PNG

 

Documentation is available for each task in a task list. Click the Capture5.PNG button for detailed documentation on the task.

 

 

Capture4.PNG

 

2) Click the edit button to add or delete tasks; you can also change the order in which the tasks are executed. Before changing the order, check the dependencies of the tasks in the documentation of each task.

 

Capture6.PNG

 

3) Click the execute button. Then fill in the required parameters.

 

 

Capture7.PNG

 

Fill the parameters

 

Capture_param.PNG

 

4) You have options to execute the tasks in the foreground or in the background.

 

Capture8.PNG

 

5) If you uncheck a mandatory task to skip it, an information message is shown, as below.

 

Capture_mandate.PNG

 

SAP_BW_BEFORE UPGRADE should be executed when the ASU toolbox prompts the user for action from within Software Update Manager.

The complete execution of the SAP_BW_BEFORE UPGRADE task list is split into two phases. When the task list is initially triggered, all tasks listed prior to the 'Confirm all end users are currently locked' task are executed. The remaining tasks require that no further end-user action take place on the system and must therefore be completed when the 'user lock' phase of the upgrade is reached.

As such, the administrator must return to the SAP_BW_BEFORE UPGRADE task list run and manually confirm the 'Confirm all end users are currently locked' task so that the remaining tasks in the SAP_BW_BEFORE UPGRADE task list can run to completion.

 

Capture_lock.PNG

 

SAP_BW_AFTER UPGRADE

 

SAP_BW_AFTER UPGRADE should be executed when the ASU toolbox prompts the user for action from within Software Update Manager following the upgrade.

 

Capture9.PNG

 

SAP_BW_BEFORE MIGRATION

 

SAP_BW_BEFORE MIGRATION should be executed at the beginning of the migration, that is, prior to the export.

 

Capture_before_mig.PNG

SAP_BW_AFTER MIGRATION

 

SAP_BW_AFTER MIGRATION should be executed at the same point at which one would execute the report RS_BW_POST_MIGRATION, and it can be used to replace that report altogether.

 

Capture_after_mig.PNG

 

Information from SAP Note 1734333:

The utilization of the task lists is only meaningful if you plan on doing an upgrade without a migration to SAP HANA, or a migration to SAP HANA without an upgrade. If, however, you are planning to perform an upgrade along with a DB migration to SAP HANA, then the tool of choice should be the Database Migration Option of Software Update Manager, which is currently in its pilot phase.

 

Thanks

What do you want in 2014?


We know your time is valuable, and that is why we want to hear from you about what you want in 2014!

 

Please take one minute of your day and complete a quick customer survey to gauge your opinion on targeted customer webinars for the coming year. We would like to find out where your focus will be and what is important to you in terms of data warehousing and analytics.

 

We will then use your responses to analyze opinions about different customer offerings in 2014. Click here to take the survey.

 

Thanks for helping us serve you better!

BW on HANA Migration: Table config. for Index is not valid


We encountered issues with column view generation for InfoObjects after a BW on HANA migration (SAP BW 7.31 SP8, HANA DB revision 68).

 

Users were not able to use the F4 help to select the required input when executing a report.

Blog.jpg

We also couldn't choose the input from the InfoObject table.

Blog.jpg

 

We encountered the issue for almost all InfoObjects.

 

There is a temporary manual workaround: create the column views using program RSDDB_LOGINDEX_CREATE for a single InfoObject.
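For many InfoObjects, the workaround report can also be launched from a small wrapper; this is a hedged sketch only, since the report's selection parameters are release-dependent and therefore intentionally not hard-coded here:

```abap
* Hedged sketch: launch the workaround report and fill its
* selection screen manually. RSDDB_LOGINDEX_CREATE's parameter
* names vary by release, so none are supplied here.
REPORT zrun_logindex_workaround.

SUBMIT rsddb_logindex_create VIA SELECTION-SCREEN AND RETURN.
```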

Blog.jpg

After creating the column view using the above program, we can see it.

Blog.jpg

We had run RS_BW_POST_MIGRATION as part of the post-migration activities, with all the HANA database-specific tasks, yet we still had the table configuration issue.


To fix the issue, we had to implement correction note 1869218 ('Column view generation for InfoObject') to re-generate the column views.

 

After implementing the note, execute the report RS_BW_POST_MIGRATION; the column views will then be created for all InfoObjects.

Blog.jpg

This fixed the issue, and we could see that all the column views were generated.


Thanks !!!

Performance tuning for Process Chain in BW on HANA environment


Performance tuning for Process Chain in BW on HANA environment

This post covers performance improvement of a process chain after migration to a BW on HANA environment. The process chain for the Analysis Process Designer (APD) was taking longer to execute. During the investigation, it was noticed that SAP has released Note 1224318.

The members of each dimension are determined in the "Values in Master Data Table" read mode.  As a result, the posted values are not used for filtering.  The number of members may be higher than before. This logic also applies for navigation attributes.  Only the values in the master data ID (SID) table are taken into account. The entries in the attribute SID table are ignored.  This parameter value may increase performance. However, the number of read members may increase accordingly.

AFTER MIGRATION to BW ON HANA: APD is taking longer time than usual

After the migration to SAP BW on HANA, the process chain was taking longer than usual. Before the migration, the process chain took roughly 18 minutes to complete; after the migration, it took around 50 minutes.

 

After analyzing the issue, it appeared that the SQL sent to HANA used an inappropriate engine due to the hint (OLAP_PARALLEL_AGGREGATION).

As per Note 1224318, the RSADMIN parameter MDX_JOIN_CUBE_DIME=A needs to be set.

Use transaction SE38 to run report SAP_RSADMIN_MAINTAIN.

 

RSADMIN.PNG

After the migration and after maintaining the RSADMIN parameter as per SAP Note 1224318, the process chain again took 18 minutes to complete.
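To double-check the setting, the RSADMIN entry can also be read directly; this is a minimal verification sketch, assuming the standard RSADMIN layout with the fields OBJECT and VALUE:

```abap
* Minimal verification sketch: read the RSADMIN entry set via
* SAP_RSADMIN_MAINTAIN (assumes the standard OBJECT/VALUE layout).
REPORT zcheck_rsadmin_param.

DATA lv_value TYPE rsadmin-value.

SELECT SINGLE value FROM rsadmin
  INTO lv_value
  WHERE object = 'MDX_JOIN_CUBE_DIME'.

IF sy-subrc = 0.
  WRITE: / 'MDX_JOIN_CUBE_DIME =', lv_value.
ELSE.
  WRITE: / 'Parameter MDX_JOIN_CUBE_DIME is not set'.
ENDIF.
```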

InfoObject modeling in BW 7.4 (more than 60 characters)


In the BW Releases before 7.4, the maximum length of a characteristic value is limited to 60 characters. As of Release 7.4 SPS2, up to 250 characters are possible. To achieve this, the domain RSCHAVL was changed from CHAR 60 to SSTRING 1333. As a result, data elements that use the domain RSCHAVL are "deep" types in an ABAP context.
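A short sketch of what this change means in ABAP terms (illustrative only, not from the original text): values longer than 60 characters now fit, but any structure containing RSCHAVL becomes a deep type.

```abap
* Illustrative sketch of the RSCHAVL change in BW 7.4.
REPORT zdemo_rschavl.

* RSCHAVL is SSTRING 1333 as of BW 7.4 SPS2, so characteristic
* values beyond the old 60-character limit are possible:
DATA lv_chavl TYPE rschavl.
lv_chavl = 'A characteristic value that is considerably longer than' &
           ' the old sixty-character CHAR 60 limit now fits here'.

* Any structure containing an RSCHAVL field is now a deep type in
* ABAP, so flat, char-like handling of such work areas (e.g.
* offset/length access across the whole structure) no longer works.
WRITE: / 'Length:', strlen( lv_chavl ).
```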

 

Texts with a length of up to 1,333 characters are possible for characteristic values. For this, the structure RSTXTSMXL was created which is a "deep" type in an ABAP context. In the internal method interfaces and function module interfaces that handle the texts of characteristic values, the type RSTXTSML was replaced with RSTXTSMXL. However, the RSRTXTSML structure remains unchanged and is required for the description of metadata.


Example creation of ZVENDOR:

Observe that the attribute Legal Status now allows up to 250 characters.

pic1.jpg


For long texts of more than 60 characters, select Long text; this enables the 'Long text is XL' option, which you should enable.

pic1.jpg

Text Table

pic1.jpg


Domain RSCHAVL was changed from CHAR 60 to SSTRING 1333.

pic1.jpg

Structure RSTXTSML was replaced with RSTXTSMXL. However, the RSRTXTSML structure remains unchanged and is required for the description of metadata.

 

RSTXTSML


pic1.jpg


RSTXTSMXL



pic1.jpg






Creating an Open ODS View (field-based modeling) for a source of type “database table or view”:

There are two ways of creating an Open ODS view in the Data Warehousing Workbench in the BW system, and you also have the option of creating Open ODS views in the BW Modeling tools in Eclipse.
Procedure:
Create in the InfoProvider Tree:
In the InfoProvider tree in the Data Warehousing Workbench, select the InfoArea that you want to assign the new Open ODS view to and choose Create Open ODS View in the InfoArea's context menu. The dialog box for creating Open ODS views appears.
pic1.jpg
Under View Name, enter a technical name for the Open ODS view. 
Enter a description for the Open ODS view under Long Description.
Select the semantics (facts, master data, and texts). 
Select the type (DataSource (BW), database table or view, Virtual Table Using HANA Smart Data Access).
For sources of type database table or view:
  a. Under DB Object Schema, select the schema where the table or view that you want to use as the source object for the Open ODS view is located. You can choose from the schemas of the SAP HANA database that the BW system runs on.
  b. A DB Connect source system is used to access the data in the source. The schema is specified in the configuration of the DB Connect source system. If a source system has already been defined for the DB object schema, this system is displayed in the Source System field. If there is no source system for the schema, it is created when the Open ODS view is created. In this case, the Source System field displays a suggested name that you can overwrite; the proposal is derived from the schema name.
  c. In the DB Object Name field, select a table or view as the source object of the Open ODS view.
pic2.jpg
  pic3.jpg
In the Open ODS view maintenance screen, you can generate the structure of the Open ODS view on the semantics tab by assigning the fields of the source object to field categories (example: for facts, assign to the categories characteristic key, characteristic, key figure and others).
To perform an automatic assignment of source fields to field categories of the Open ODS view, select the Create Proposal button. The system generates the proposal based on the source type. Alternatively, you can drag source fields into the field categories of the Open ODS view.
pic4.jpg
General Field Properties:
Field Name, Source Field, Long Description and Global Name.
The global field name is the name used as the InfoObject name in the InfoProvider of the Open ODS view. The global name is initially determined from the ODP name and the name of the Open ODS view field.  
Properties for Fields of Type "Characteristic" :
Authorization Relevance, Display, Query Exec-Filter Val and Association.
Using associations on fields of type "characteristic", you can link an Open ODS view to master data and texts of other Open ODS views or InfoObjects. This makes it possible to inherit properties - such as the global name, the authorization relevance or reporting properties of the associated object - and use texts and navigation attributes of the associated objects in the InfoProvider of the Open ODS view. Initially there are no associations.
Properties for Fields of Type "Key Figure":
Aggregation, Currency/Unit, Reporting Properties.
Using associations on fields of type "key figure", you can link an Open ODS view with InfoObjects. This allows you, for example, to use formulas and variables (defined on the associated InfoObject) in queries on the Open ODS view and inherit properties (such as the global name, aggregation or reporting properties) from the associated object. Initially there are no associations.
pic5.jpg
Display data:
pic6.jpg
The Open ODS view can be used in EDW modeling. Click the Generate Dataflow icon; it will create a standard DSO, transformations, and a DTP.
pic7.jpg
pic8.jpg

Master Data load performance issues after HANA migration


Master Data load performance issues after HANA migration

After the BW on HANA migration, we encountered serious performance problems with master data loads. For example, the 0MAT_SALES data load, which used to complete in less than 30 minutes, was taking around 12 hours after the migration for almost the same number of records.

In a BW system on the SAP HANA database, you may observe poor load performance during the update of master data attributes, even though block operations are configured for the DB operations. The performance of FOR ALL ENTRIES statements in the SAP HANA database is poor in time-independent master data updates, because the FAE hint is not created with an equi-join.
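To illustrate the pattern in question (not SAP's actual generated code), a FOR ALL ENTRIES read against an attribute table looks roughly like this; the table and field names assume the standard /BI0/P naming for 0MAT_SALES and are meant as an example only:

```abap
* Illustration of the FOR ALL ENTRIES (FAE) pattern whose HANA
* execution plan was poor before the notes were applied.
* /BI0/PMAT_SALES and MAT_SALES assume standard BW naming and are
* used here for illustration only.
REPORT zdemo_fae_pattern.

DATA: lt_keys TYPE STANDARD TABLE OF /bi0/pmat_sales,
      lt_attr TYPE STANDARD TABLE OF /bi0/pmat_sales.

* FAE on an empty driver table would select ALL rows - guard it.
IF lt_keys IS NOT INITIAL.
  SELECT * FROM /bi0/pmat_sales
    INTO TABLE lt_attr
    FOR ALL ENTRIES IN lt_keys
    WHERE mat_sales = lt_keys-mat_sales.
ENDIF.
```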

 

 

After the analysis, SAP Notes 1919804 and 1891981 were implemented with the help of the Basis team. The implementation process used to fix the issue is explained below with some screenshots.

After the SAP Notes have been implemented successfully, call transaction SE91 and create the new message number 012 in message class RSDMD: "DB views missing for InfoObject &1. Array-insert cannot be enabled." Enter a long text for the new message.

Enter the following text in the Procedure section: "Reactivate the InfoObject to recreate the DB views in order to enable Array-Insert for better load performance."

 

RSDMD.PNG

 

Display.JPG

 

 

Message.JPG

Message1.JPG

 

 

 

In the screenshot below you can see the time the master data load has taken before the migration.

 

Before Migration.JPG

 

After Migration to BW ON HANA: The DTP ran for 12 hours.

 

After Migration.JPG

 

 

 

After implementing the notes, the DTP for 0MAT_SALES from ZMAT_SALES_VENDOR_ATTR completed in 18 minutes.

 

After Note Implementation.JPG

Benefits of #BWonHANA


There are many materials out there on SCN discussing the benefits of having BW run on HANA as its database. There is even a dedicated space, SAP NetWeaver BW Powered by SAP HANA. In this post I'd like to look at these benefits from a pure BW point of view. My motivation is to have arguments for my clients when they consider migrating their BW systems to HANA. The assumption here is that no other change or optimization is done while migrating from the current DB to HANA DB.

I hope I have captured the topics discussed below correctly. However, my knowledge of HANA and BW on HANA is only theoretical at this time, so I appreciate your comments and/or corrections to my findings.

 

1. New in-memory DB

Once you migrate your BW system from the current DB to HANA DB, you essentially get a new in-memory database and all its features right away. This means that, without any re-implementation of your existing data flows, you can use the power of the in-memory HANA engine.

As HANA is an in-memory DB, aggregates, indexes, and other materialized views on the data are in most cases no longer needed in the BW system. This means administration and maintenance of the whole BW system are easier.

HANA's in-memory nature dramatically speeds up resource-intensive DB operations such as data loading and DSO activation, and I/O operations are faster. Similarly, no roll-ups on cubes are needed after a cube is loaded, and Attribute Change Runs (ACR) after master data changes are no longer necessary.

 

2. Data Flows/Transformations

There is no need to migrate your BW 3.x-style data flows to 7.x style to run the BW system on HANA DB; note that 7.x data flows are mandatory for HANA-optimized InfoProviders only. Regarding existing transformations, certain parts of the standard data loading process in BW are accelerated by HANA. In particular, BW 7.4 runs a standard transformation differently than older releases do: the system pushes the transformation processing down to the HANA database. However, this is only valid for transformations in which no custom ABAP routines are used.
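As a hypothetical illustration of that restriction: a transformation whose mapping is implemented in an ABAP routine like the one below cannot be pushed down and runs on the application server, whereas the same mapping expressed as a direct assignment or formula can be executed in HANA (the method and field names here are invented, not from the original text):

```abap
* Hypothetical ABAP field routine inside a BW transformation.
* The presence of any such custom routine prevents HANA push-down,
* so the whole transformation is processed on the application server.
METHOD compute_0calmonth.
  " Derive the calendar month (YYYYMM) from the posting date (YYYYMMDD)
  result = source_fields-pstng_date(6).
ENDMETHOD.
```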

 

3. InfoProviders

By running BW on HANA you get the following InfoProvider types. These are not new types of InfoProviders, but they are optimized for use on HANA.


HANA-optimized DSO - notice that even though "HANA-optimized DSO" is a new term, it has already become obsolete. Earlier, a DSO could be converted into this type of DSO after migration to HANA. This is no longer the case: as of NetWeaver BW 7.30 Support Package 10, HANA-optimized activation is supported for all Standard DataStore Objects by default, so no conversion of Standard DSOs is needed.

With respect to different Support Pack there are following architectures of DSO:

  1. As of BW 7.30 SP05-09: Change Log of DSO is provided by HANA’s Calculation view. This means there is no persistency of data. This speeds up data activation and SID creation.
  2. As of BW 7.30 SP10: There is database table for Change log. By this we gain performance while loading the data from the DSO to further InfoProvs as less resource and memory consumption is achieved.

More information can be found here: DataStore Objects in SAP NetWeaver BW on SAP HANA

 

HANA-optimized InfoCubes - Classic BW InfoCubes have two fact tables (F - the normal one and E - the compressed one) and several dimension tables, depending on the cube setup. HANA-optimized cubes are flat: there are no dimension tables and only one F table for facts. This means InfoCubes running on HANA gain faster data loads, their data model is simplified, remodeling is easier (e.g. when adding/removing characteristics or key figures), and no changes to the cubes are needed after migration to HANA.

Within BW on HANA, cubes become even less relevant from a data storage perspective. If there is no business logic between the DSO and cube layers, there is no need for a cube layer; reports can run directly on top of the DSOs. Of course, this needs to be assessed by checking the data flows one by one. Where it applies, the data model gets simplified. Be aware that there are still cases where cubes are needed, to name a few: non-cumulative key figures in a cube, external write access to a cube, and Integrated Planning.

More information can be found here: InfoProviders in SAP NetWeaver BW powered by SAP HANA

 

4. New InfoProviders as of BW 7.3

A bunch of new InfoProv types were introduced in BW version 7.3. Let’s see how they are supported while BW runs on HANA.


Semantic Partitioned Object (SPO) – An SPO is used to store very large volumes of data, partitioned by a business object. There are two cases, depending on whether the SPO is based on a DSO or on a cube. In the case of a cube, it automatically becomes HANA-optimized. In the case of a DSO, you may want to convert the SPO to HANA-optimized; see note 1685280 - SPO: Data conversion for SAP HANA-optimized partitions.


CompositeProvider – Enables the combination of InfoProviders of type cube, DSO, and Analytic Index (e.g. from BWA or the Analysis Process Designer (APD)) via UNION, INNER JOIN, and LEFT OUTER JOIN. Such a scenario runs faster in BW on HANA, as the UNION/JOIN operations are executed in HANA and not on the application server.


HybridProvider – Used for modeling near real-time data access. It is a combination of two InfoProviders: one for historic data (e.g. a cube) and one for current real-time data (e.g. a DSO loaded via a Real-Time Data Acquisition (RDA) DataSource). The same rules apply as mentioned above for cubes and DSOs: a cube is automatically HANA-optimized, and a DSO stays standard, as it was before the HANA migration.


VirtualProvider – VirtualProviders based on a Data Transfer Process (DTP), BAPIs, or a function module are used, for example, to reconcile data loaded into BW via a normal staging data flow against the source system. Such a VirtualProvider runs in a BW on HANA environment as well.

Another case in connection with VirtualProviders is a reference to a HANA model, whereby a HANA model (e.g. an analytic or calculation view) is exposed as a BW InfoProvider.


TransientProvider – As it has no persistent BW metadata (nothing is visible in BW's Data Warehouse Workbench), there is nothing for HANA to optimize. A TransientProvider is used to consume a HANA model (analytic or calculation view) that is published into BW (transaction RSDD_HM_PUBLISH). So if you have scenarios with TransientProviders, they should work in BW on HANA as well.


Analytic Index (AI) – A data container in BWA that stores data in a simple star schema format, as facts and characteristics (dimensions) with attributes. The data for an AI is prepared by the Analysis Process Designer (APD).

Moreover, when connecting an AI to a TransientProvider, a HANA model can be published in BW as an AI, and a TransientProvider is then generated on this AI. In scenarios where data changes very frequently, the AI is adjusted automatically when the HANA model changes.


Snapshot Index (SI) – If a BEx query is marked as an InfoProvider in BWA, an index called a Query Snapshot Index (QSI) is created. Such an SI for a query as InfoProvider, as well as an SI for a VirtualProvider, is still supported in BW on HANA.

 

5. Process Chains

A few process types are obsolete in a BW system running on HANA: Attribute Change Runs (ACR), aggregate roll-ups, cube roll-ups, and cube index deletion/creation before/after a load. Existing chains containing these processes will run without errors; the processes are simply not executed. However, cleaning them up after the migration to HANA is advised.

 

6. Queries

BEx queries stay as they are, and no change is needed. At query runtime, HANA leverages the column store and in-memory calculations as the engine for query acceleration. The data is not replicated (as it is with aggregates or BWA) – the query runs directly against the primary data persistence.

Therefore, queries should run at least as fast as before the HANA migration with BWA, and better runtimes are of course anticipated without any changes to the queries themselves.

 

7. Planning

When it comes to SAP BW planning applications, they traditionally run on BW’s application server. With HANA in place, planning functions run in-memory. Therefore, with no change to planning models or planning processes, a performance boost is expected in BW on HANA in areas such as dis/aggregation, copy, delete, set value, re-post operations, FOX formulas, conversions, revaluation etc.

 

8. Authorization

Authorization and all activities related to user access are managed by the BW application. Therefore nothing changes here during the migration to the HANA DB. All authorization concepts used before remain valid and in use. Going forward, if you also use purely native HANA objects (e.g. HANA models: attribute/analytic/calculation views), these are managed by HANA privileges. These are less granular than BW authorizations, so if you need complex authorization you should consume HANA models via BW InfoProviders such as a TransientProvider or VirtualProvider.

Note that authorizations must already be using BW 7.x technology prior to the DB migration to HANA.

 

 

Other sources of information on BW on HANA:

SAP BW Powered by HANA – What’s In It For The Business User

Migration to SAP NetWeaver Business Warehouse on SAP HANA – Best Practice Update 2014

SAP NetWeaver BW Powered by SAP HANA

3 major reasons to migrate to BW 74 on HANA

SAP NetWeaver® Business Warehouse: Powered by SAP HANA™

SAP NetWeaver BW Powered by HANA

Some OSS Notes and Errors with solutions useful for a BW on HANA Project


Hi Friends,

 

I thought of sharing some errors with their solutions, and OSS Notes useful for a BW on HANA system.

 

  1. While trying to maintain some master data using 'Maintain Master Data' option in BW, we encountered the following error:

     1.png

Solution: The issue was resolved using the following SAP note:

1988201 - dump DYN_CALL_METH_CLASS_NOT_FOUND when starting a Webdynpro Application

 

The Basis team had to activate the following 4 components in 'WebDynpro component / Interface':

FPM_MESSAGE_MANAGER

FPM_IDR_COMPONENT

WDR_MESSAGE_AREA

WDR_SELECT_OPTIONS


   b.  The statistics server was impacted by limited memory resources, as a result of which no information about the HANA server was available in the Alerts section.


2.png

Solution: 1929538 - HANA Statistics Server - Out of Memory


USEFUL OSS NOTES:

1953493 - RSHDB: RSDU_TABLE_CONSISTENCY NW7.30 SP12

1947480 - SAP HANA: Possible data loss during reconversion of DataStores

1909457 - HANADB: Reconversion terminates in truncate step

1891529 - SPO: Reconversion of SAP HANA-optimized DSO partitions

1849498 - SAP HANA: Reconversion of SAP HANA-optimized DataStores

1849497 - SAP HANA: Optimizing standard DataStore objects

1735198 - Regenerate calc view for an SAP HANA-optimized DSO

1764251 - Documentation- Importing BW Models in SAP HANA Modeler

1776186 - SAP HANA BW - Scale out: routing to right indexserver

1682992 - BW-HANA: Query on MultiProvider with "Standard" InfoCube

1656582 - Query terminations - InfoCubes, DSOs, master data in HANA DB

1955508 - Tables of BW objects are not compressed

1908133 - Landscape redistribution - force split rules

1953628 - Size of the DYNPSOURCE table (unused records)

1908075 - BW on SAP HANA SP06: Landscape redistribution

1756099 - RSHDB: Consistency check for tables (7.30 SP9)

1825665 - BW corrections for SAP HANA DB in BW 7.30 SP10

1819123 - BW on SAP HANA SP5: landscape redistribution


I will try to regularly update this blog with more issues/solutions and OSS notes.


BR

Prabhith

Real-Time Data with #BWonHANA


Here is a quick summary and a list of some useful pointers for customers requiring real-time data within BW. In general, there are three options:

  1. real-time (0 sec*) w/o any replication via transient or virtual infoproviders and / or infoobjects,
  2. real-time (1-5 sec*) via SLT replication,
  3. real-time (1-5 min*) via real-time data acquisition (RDA) enabled extractors.

Marc Hartz's presentation on the options covers details on all these approaches. Let me add a few comments:

  • Approach 1. allows reporting on any SQL-accessible table or view in HANA, regardless of whether that table or view sits in the same schema as all the other BW tables, in any other schema of the same HANA instance, or in any other DB server connected via smart data access (SDA). In that sense, it complements the table replication approach via SLT in 2.
  • Approach 2. has three flavours, namely:
    a. replicating into a DB table sitting in the same HANA instance that sits under BW, or
    b. replicating into an ODQ (operational delta queue) which serves as a PSA, or
    c. replicating via a web service; with recent versions of SLT and the availability of flavour b. this one should be mostly obsolete.
    Flavour a. combines with approach 1., while b. and c. combine with RDA, i.e. approach 3., which has existed since BW 7.0.
  • Real-time data acquisition (RDA) is supported by a number of standard extractors, e.g. see OSS notes:

Credits go to C. Dressler for compiling most of the details. You can follow me on Twitter via @tfxz. This blog has been cross-published here.

 

* This is the latency between the data being created and the data being visible in a report or dashboard.

HANA Optimized BW Content – ASUG Webcast Part 1


SAP’s Brian Wood and Natascha Marenfield gave this ASUG webcast last month.

1fig.png

 

Figure 1: Source: SAP

 

Business Content is part of the BW offering.  It gives value immediately and “shortens time to value”.

 

It can be customized in one way or another

 

However, it has grown with the application suite over the years – data models like ECC/Finance/Controlling aren’t harmonized with each other.

 

SAP wants to build models that are harmonized across applications

 

They want to provide a baseline for your development

 

Going forward with HANA, there is the ability to add real-time or near real-time data to analysis within BW

 

It provides a framework for real-time and historical data analysis

 

It includes best practices for mixed scenarios – BW InfoProviders & custom HANA views

 

Parts of the Business Suite are not transparent – Business Content implements the semantics of the Suite and provides transparency

 

Business Content documentation is on the help.sap.com web site

 

Smart Data Access allows you to see data in another system without loading it to BW – it can be used within BW on HANA

2fig.png

Figure 2: Source: SAP

 

Figure 2 shows the goal of new business content: to provide a baseline for benefits for both business users and IT

 

Business users get a baseline that allows for flexible and real-time analytics as well as traditional historical reporting

 

It provides a higher level of detail in HANA

 

It has the ability to create complex new key figures

 

Predictive library, text analysis library, business function libraries are not in a typical relational database management system

 

It combines departmental data with enterprise data as well

 

BW workspaces give business users the ability to upload their data and join it with enterprise data

 

It provides patterns for sustainable architecture – LSA updated for BW on HANA

 

HANA analysis processes in BW allow you to leverage the predictive library

3fig.png

Figure 3: Source: SAP

 

What are the main considerations for the design? The new business content is HANA-optimized and follows LSA++ for BW on HANA

 

New content provides a higher level of detail like line items

 

It contains some mixed scenarios with both HANA and BW capabilities – customers get an idea of how to use the technology for their content area

 

It provides some optimized transformations and push down to HANA engine

 

It offers more flexibility in data acquisition and reporting

 

They want to use consolidated InfoObjects, which have been around in business content for a while but which the older content areas haven‘t included

 

They took those InfoObjects into this model for harmonizing master data

 

The content is only visible if you turn on a special business function – it has its own namespace, /IMO/

4fig.png

Figure 4: Source: SAP

 

Figure 4 shows patterns for mixed scenarios and how to take advantage of HANA database underneath

 

A customer has BW on HANA and also uses HANA for real-time reporting on key figures loaded to HANA via SLT – BW can give them the benefit of real-time reporting data

 

An example is a line-item point-of-sale system used to analyze shopping baskets during campaigns

 

5fig.png

Figure 5: Source: SAP

 

Figure 5 shows the architecture for consumption of HANA models: a VirtualProvider exposes the HANA analytic view so it can be used in a query

 

Security is handled by BW

 

SAP said it passes data through the BW world in a transparent way

 

6fig.png

Figure 6: Source: SAP

 

For “last and past” the idea is that the customer has uploaded historical data into the BW system in the finance area and also wants real-time data. The customer loads current data into HANA for real-time access, while historical data comes via the extractor on the BW side

7fig.png

Figure 7: Source: SAP

 

A MultiProvider unions the data with the Inventory Cube or DSO

8fig.png

Figure 8: Source: SAP

 

Figure 8 shows “Power of HANA for flexible calculation of complex key figures“

9fig.png

Figure 9: Source: SAP

 

You have your data in BW on HANA in different DSOs, for example

 

From the DSO you can generate an analytic view on the schema side

 

Data stays in BW schema

 

Additional data model is generated from definition of DSO

 

Output is consumed via a Virtual Provider

 

10fig.png

Figure 10: Source: SAP

 

SAP started with Sales and Distribution, reworking the old SD area and focusing on core business processes

 

This is delivered with BI Content 737 SP4 and 747 SP4

 

In 757, SAP takes advantage of new BW 7.40 features – like the CompositeProvider

11fig.png

Figure 11: Source: SAP

 

In the last SP, SAP added parts of AP, Finance, AR, and two areas in Controlling

 

To be continued..

 

Related:

Add these ASUG BW / HANA sessions to your Annual Conference Agenda


If you are an ASUG member, consider registering for the following upcoming webcasts:

HANA Optimized BW Content – ASUG Webcast Part 2


SAP’s Brian Wood and Natascha Marenfield gave this ASUG webcast last month.  Part 1 is here HANA Optimized BW Content – ASUG Webcast Part 1

1fig.png

Figure 1: Source: SAP

 

Figure 1 provides more detail on which patterns are used in which areas

 

For this, SAP delivers only a few queries, as most customers build their own queries

2fig.png

Figure 2: Source: SAP

 

Data flows of new content are shown in Figure 2 for reference.

3fig.png

Figure 3: Source: SAP

 

Figure 3 shows 5 areas for purchasing

4fig.png

Figure 4: Source: SAP

 

Last and past – historical Accounts Receivable data is loaded to BW, and the customer can load the latest AR items in real time to HANA; these are merged to provide current outstanding and overdue analysis

5fig.png

Figure 5: Source: SAP

 

Figure 5 is an example of “last and past”

6fig.png

Figure 6: Source: SAP

 

With LSA++, cubes are not needed anymore as you can query directly at the DSO level in the business content

7fig.png

Figure 7: Source: SAP

 

Figure 7 illustrates what the previous figures show – reporting is based on DSOs instead of cubes.

8fig.png

Figure 8: Source: SAP

 

Figure 8 shows SD business content based on cubes.

9fig.png

Figure 9: Source: SAP

 

Figure 9 shows that only source-specific transformations are done, with the MultiProviders on top

10fig.png

Figure 10: Source: SAP

 

Business content is a real value-add.  It is extended to take advantage of new HANA-based capabilities

 

For reporting you go against the DSOs instead of InfoCubes

 

Reporting now uses persisted InfoProviders stored directly in the database; since also storing the data in cubes is redundant, the number of layers is reduced

 

Links:

SAP Help – BI Content 7.47 / BI Content 7.37 SP 07 Documentation: BI Content & BI Content Extensions - SAP Library

 

Related:

Join us in May for ASUG Annual Conference   - Pre-Conference SAP BusinessObjects BI4.1 with SAP BW on HANA and ERP Hands-on – Everything You Need in One Day June 2nd - Register at: ASUG Preconference Seminars

 

Add these ASUG BW / HANA sessions to your Annual Conference Agenda

If you are an ASUG member, consider registering for the following upcoming webcasts:


Share your Story: Call for Sessions for ASUG at SAP d-code (former TechEd)

Share your knowledge with others and submit a proposal to speak at SAP d-code. Selected proposals will be part of the ASUG and SAP d-code: Partners in Education program, providing attendees with interactive learning experiences with fellow customers.


View the education tracks planned this year.  If selected, you will receive a complimentary registration for the conference and it will give you valuable professional exposure.


Follow this link to create a speaker account where you can formally submit your proposal, review important deadlines, and find other general information about SAP d-code.


The deadline to submit your abstract is May 25. If you have any questions, please e-mail sapdcodespeaker.info@sap.com


BW on HANA: "rsa1 vs. SAP HANA Studio"


Hi,

 

 

with BW on HANA, there are the options to stay more on the "classic" BW-development-path (development and modelling with mainly rsa1 and related transactions, use classic SAP extractors) or to move the development of BW applications and reporting more to the "SAP HANA Studio" path (extract data to HANA DB by replication, build data models there).

 

This post nicely illustrates different architectural overviews:

 

SAP BW on HANA  - Architecture options based on source systems

 

 

The decision there on what architecture is recommended was based on the question of which source systems are used (e.g. standard extractors to SAP systems, which we activate/get "for free" in rsa1, suggest continuing on the more "classic" path).

 

 

I would be interested in more documents that look at the architecture decision from other points of view; it would be nice if you can provide them (links):

- Is there any roadmap from SAP that gives detail about the support of the different architectures? Is any of the paths pushed more _with good arguments_ (e.g. no sales slogans that "HANA is the future and HANA Studio is simply innovative" ;-) )?

- BO 4.1 reporting on SAP BW on SAP HANA: for the different tools (I know, it will be hard to generalize, since tools also are connected to HANA differently, ODBC/JDBC/universes.../type of data in reports, ...), are there significant speed benefits when using one tool with a direct HANA connection/universe than using BICS connection or the other way round (BICS was preferred for SAP BW without HANA...)?

- Any documents on how development will become more integrated between the two paths (authorizations on InfoProviders vs. analytic views directly in BW/rsa1)?

- Any other arguments for more "rsa1" or more "SAP HANA Studio" modelling?

- ...

 

 

Best Regards and thanks for your help!

Go Hybrid - SAP HANA Live & SAP BW Data Integration!


SAP HANA Live together with SAP BW provides the best-of-breed suite of analytical services and enables you to drive your business in real-time on the next generation of platform, SAP HANA.

Whether you need: consolidated, de-normalized reporting on data collected from several SAP / non-SAP applications and for data governance OR

real-time reporting on data from one single SAP Business Suite component.
Whether you need: aggregated reporting on historical data that might not be online in the underlying SAP Business System OR

bringing reporting within ERP to the next level.

Analytics services provided on the SAP HANA platform will meet and exceed your requirements:


shl+bw_overview.jpg

 

Now let us have a closer look at the data perspective!

When a BW system is running on SAP HANA database, the BW data is stored in a special schema known as the BW-managed schema and is exposed via InfoProviders (e.g. DataStore Objects, InfoObjects etc.). In other SAP HANA schema, data can be stored in SAP HANA tables and accessed via HANA views. You can make data available from any SAP HANA database schema of your choice in BW. You can also make BW data (data from the BW-managed schema in the SAP HANA database) available in a different SAP HANA schema. To do so you can use virtual access methods and data replication methods.

In this blog we will focus on integrating data from the SAP HANA Live schema with corresponding data in BW using the example of ERP – Sales & Distribution. And with that we will also answer the following core questions:

 

  • Will SAP HANA support all my operational, tactical, and strategic reporting requirements?
  • How can I get the reporting functionality I need by combining the strengths of SAP HANA Live and SAP BW on HANA?

 

Overview of Scenarios:

 

Scenario A:
Transactional data provisioning
via HANA Live Query View in BEx Query

(Consumption of SAP HANA Live query views in BW via ODP Transient Provider)

 

Scenario B:
Transactional data provisioning
via HANA Live Reuse View enriched by BW master data
(Consumption of SAP HANA Live reuse views in BW adding BW master data features using Composite Provider)

 

Scenario C:
BEx Query
with key figures based on historical/plan BW data and most current SAP HANA Live data
(Consumption of SAP HANA Live views and BW DataStore Object by CompositeProvider in BW)

 

Scenario D:

Transactional and master data via SAP HANA Live consumed by BW
(Consumption of SAP HANA Live transactional and master data views by Open ODS Views in BW)

 

Other possible scenarios:

  • Historic BW data virtually accessed via HANA Calculation View which combines HANA Live and BW data (Custom built HANA
    Calculation View combines data from HANA Live view and BW generated HANA views)
  • Loading of data into BW using Reuse Layer of SAP HANA Live as data source (Extract data from HANA Live reuse views into BW)


The following guide gives an overview of the different SAP HANA Live & SAP BW data integration scenarios and shows how to implement them. The focus is on integrating data from the SAP HANA Live schema with corresponding data in BW using the example of ERP – Sales & Distribution.

 

https://scn.sap.com/docs/DOC-55312

What’s the Difference Between a Classic #SAPBW and #BWonHANA?


This is yet another question that I get from all angles, partners, customers but even colleagues. BW has been the spearhead SAP application to run on HANA. Actually, it is also one of the top drivers for HANA revenue. We've created the picture in figure 1 to describe - on a high level - what has happened. I believe that this not only tells a story on BW's evolution but underlines the overall HANA strategy of becoming not only a super-fast DBMS but an overall, compelling and powerful platform.


Fig. 1: High level comparison between a classic BW and the two versions of BW-on-HANA.

Classic BW

Classic BW (7.3ff) follows the classic architecture with a central DBMS server with one or more application servers attached. The latter communicate with the DBMS in SQL via the DBSL layer. Features and functions of BW - the red boxes in the left-most picture of fig. 1 - are (mostly) implemented in ABAP on the application server.

BW 7.3 on HANA

At SAPPHIRE Madrid in November 2011, BW 7.3 was the first version to be released on HANA as a DBMS. There, the focus was (a) to enable HANA as a DBMS underneath BW and (b) to provide a few dedicated and extremely valuable performance improvements by pushing the run-time (!) of certain BW features down to the HANA server. The latter is shown in the centre of fig. 1 by moving some of the red boxes from the application server into the HANA server. As the BW features and functions are still parameterised, defined and orchestrated from within the BW code on the application server, they are still represented as striped boxes in the application server. Actually, customers and their users do not notice a difference in usage other than better performance. Examples are: faster query processing, planning performance (PAK), DSO activation. Frequently, these features have been implemented in HANA using specialised HANA engines (most prominently the calculation and planning engines) or libraries that go well beyond a SQL scope. The latter are core components of the HANA platform and are accessed via proprietary, optimised protocols.

BW 7.4 on HANA

The next step in the evolution of BW has been the 7.4 release on HANA. Beyond additional functions being pushed down into HANA, there have been a number of features (pictured as dark blue boxes in fig. 1) that extend the classic BW scope and allow you to do things that were not possible before: the HANA analysis process (e.g. using PAL or R) and the reworked modeling environment with new Eclipse-based UIs that smoothly integrate with (native) HANA modeling UIs and concepts, leading also to a reduced set of InfoProvider types necessary to create the data warehouse. Especially the latter have triggered comments like

  • "This is not BW."
  • "Unbelievable but BW has been completely renewed."
  • "7.4 doesn't do justice to the product! You should have given it a different name!"

It is especially those dark blue boxes that surprise many, both inside and outside SAP. It is the essence that makes dual approaches, like within the HANA EDW, possible, which, in turn, leads to a simplified environment for a customer.

 

This blog has been cross-published here. You can follow me on Twitter via @tfxz.

BW on HANA - Performance Comparison of Different Exception Aggregations


This article compares the performance of three different ways of doing a simple exception aggregation in a BW on HANA scenario.  The goal is to see what design will give best performance for a BEx query that uses exception aggregation.

 

Introduction

A performance problem can be experienced in BW when a large amount of exception aggregation has to be done at query run-time.  Before BW 7.3, exception aggregation happened on the application server during the OLAP part of query execution.  It was not done on the database layer.  This meant that a potentially large volume of data had to make the journey from the database to the application server.  With BW 7.3 (and BWA 7.2), or with BW on HANA, it became possible to "push down" some of these exception aggregations to the database layer.

 

The performance benefit of pushing down these exception aggregations can be considerable.  This push down is well documented (see chapter 4 of this document) and takes only a switch in RSRT to implement.  In RSRT you make this setting:

rsrt setting.png

By making this setting, the system will attempt to perform the exception aggregation in the database/BWA layer, but depending on query complexity this may not always be possible.

 

But could performance be improved further?

 

If we consider a mixed scenario in a BW on HANA landscape, then we have more options available to us, so perhaps the answer is yes.  With HANA as the database there is the possibility to build custom HANA models and so push down the entire query calculation, leaving the query itself as more or less a "pass through".  This is only possible if the query and the exception aggregation are fairly simple.  As queries get more complex, BEx may be the better option, and in general you don't want to mix complex BEx with complex HANA models in one report.  For the sake of this experiment imagine performance is the only driver and the query has only simple exception aggregation.

 

Data Flow for Test Queries

So let's consider three queries that can be used to give the same exception aggregation output and see which performs best as the number of records increases:

dataflow.png

The different queries/test cases are named like this:

 

1) "Vanilla"

A standard BW cube with a BEx query on top.  Exception aggregation is defined in the query as normal and executed in the OLAP layer at runtime.  I've called this vanilla as it is the default situation.

 

2) "Exc.Aggr."

This is the same BW cube as case 1, with a different query on top.  The query has its RSRT setting "Operations in BWA/HANA" = "6 Exception Aggregation".  In this case, exception aggregation is still defined in the BEx query, but it is executed in the database layer (or on BWA if that were the scenario).

 

3) "Custom"

This uses a custom HANA model.  The same BW cube is used as per cases 1 and 2, but here we use a system-generated HANA Analytic View on top of the cube.  Above that, a custom Calculation View is used to do the equivalent of an exception aggregation, in fact all the required report calculation work is done here, not in the query.  A Virtual Cube above that lets BW see the custom HANA model, and lastly a BEx query that does pretty much nothing sits on the very top.

 

Test Scenario

Let's consider a very simple case where the exception aggregation is used to count occurrences.  Consider the scenario where you have measurements of man hours of effort associated with line items on a document.  Perhaps these are measurements of effort on help-desk tickets, or efforts to process financial documents.  Exception aggregation can be used to count documents and so you can get averages at a higher level, for example average man hours of effort per document across different company codes.  Here is some sample data from the cube YYCUB02 (see data flow diagram above):

base data.png

The above sample data is used to give the following report output, where exception aggregation is used to count the number of documents:

report.png
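To make the mechanics concrete, here is a minimal Python sketch of what this exception aggregation computes (the field names and sample values are invented for this sketch, not taken from the actual cube): the key figure is summed normally, the document count is a distinct count over the reference characteristic, and dividing the two gives the average effort per document per company code:

```python
from collections import defaultdict

# Illustrative line items: (company_code, document, man_hours).
rows = [
    ("1000", "D1", 2.0),
    ("1000", "D1", 3.0),
    ("1000", "D2", 4.0),
    ("2000", "D3", 1.0),
    ("2000", "D3", 5.0),
]

totals = defaultdict(float)   # plain SUM aggregation of the key figure
docs = defaultdict(set)       # distinct documents per company code

for company, document, hours in rows:
    totals[company] += hours
    docs[company].add(document)

# Counting distinct documents is the exception aggregation; dividing the
# summed key figure by it yields average man hours per document.
avg_per_doc = {c: totals[c] / len(docs[c]) for c in totals}
print(avg_per_doc)  # {'1000': 4.5, '2000': 6.0}
```

The same distinct-count logic is what either the OLAP layer (case 1), the database push-down (case 2), or a custom calculation view (case 3) ends up executing.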

Generating Test Data

To generate the test data, a small data set was flat-filed into the cube, then a self-load was used to generate more random records based on the initial set.  This self-load was then repeated to double up the volumes with each load, with measurements running from 1m to 100m rows.

 

Gathering Test Results

To gather the test results, each query was run with an increasing number of records in the cube.  Performance measurements were taken from RSRT (RSRT -> Execute + Debug -> Display Statistics):

rsrt stats.png

These raw performance measurements from RSRT were then grouped into DB and OLAP time using the groupings defined in help page Aggregation of Query Runtime Statistics.  Since RSRT was used in all cases the Frontend time can be regarded as constant and was ignored.

 

Test Results

Comparing the 3 scenarios with increasing records produced these charts:

results.png

The Exc.Aggr. and Custom scenarios both perform much better than the Vanilla scenario, giving a 95% drop in runtime.  We can zoom in to see how these two scenarios are made up, by separating their OLAP time and DB time:

results2.png

The above shows that the OLAP time for both these scenarios is very low, as we'd expect, since the query work is now being done in the database layer rather than in the OLAP layer.  The difference lies in the DB time, where the Custom model outperforms the Exc.Aggr. model.

 

Excursion into Big O Notation

If you've not come across it before, Big O Notation is a way to describe how a process or algorithm behaves when one of its inputs changes.  You'd expect a report to get slower as the number of records it has to process increases, but how much slower could it be?  Big O Notation can be used to describe the broad categories of how much slower something gets.

 

This chart shows the main Big O curves (the below chart stolen from http://apelbaum.wordpress.com/2011/05/05/big-o/):

bigo.png

In the chart above, as the number of records increases on the x-axis, the runtime on the y-axis also changes.  The names of each curve are the Big O Notations.  Looking back at our test results, the query execution times can be seen to form a straight line, so we can say that they are all O(n).  It is true that the Vanilla scenario is a worse O(n) than the other scenarios, in that its slope is steeper, but they are still all categorised as O(n).

 

Real World Examples of Big O

O(log n) - is what you'd see using an ABAP BINARY SEARCH, for example in a transformation.  O(log n) is a good place to be if volumes get high.

O(n) - is what we see in our query runtime results.

O(n2) and worse - is what you'd see if there are nested loops in a transformation.  You may be familiar with performance being fine in a development box with limited data, and then suddenly in a test environment performance becomes very bad.  A nested loop in a transformation can cause this O(n2) pattern.
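To illustrate the O(n) vs. O(log n) contrast mentioned above, here is a small Python sketch (the table contents are invented for illustration) comparing a linear scan with a binary search - the same idea as an ABAP READ TABLE with and without BINARY SEARCH:

```python
import bisect

# A sorted lookup table, as required before an ABAP BINARY SEARCH.
table = list(range(0, 1_000_000, 2))  # 500,000 sorted even numbers

def linear_lookup(key):
    # O(n): worst case scans the whole table, like READ TABLE without BINARY SEARCH.
    for i, value in enumerate(table):
        if value == key:
            return i
    return -1

def binary_lookup(key):
    # O(log n): halves the search space each step, like READ TABLE ... BINARY SEARCH.
    i = bisect.bisect_left(table, key)
    return i if i < len(table) and table[i] == key else -1

# Both find the same row; the binary version inspects ~19 elements
# instead of up to 500,000.
print(linear_lookup(999_998), binary_lookup(999_998))  # 499999 499999
```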

 

When I first carried out these tests, my results looked like this:

resultserr.png

This looked like the Vanilla case was showing O(log n), but that didn't make any sense!  How could increasing the records cause performance to stabilise?  On further investigation this turned out to be an error in my design of the test.  The random number generator was only generating values up to 7 digits, or 10 million values.  As the number of records ramped up to 100 million, the random generator was tending to repeat document numbers rather than create new ones.  The amount of processing done on the OLAP layer was then becoming more constant.  Lesson learned - always sanity check the results!
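The saturation effect described above is easy to reproduce. This toy Python sketch uses an invented, much smaller pool of 1,000 document numbers (standing in for the 7-digit, 10-million-value space) to show the distinct document count plateauing as the row count grows:

```python
import random

random.seed(42)  # deterministic for illustration

# The flawed generator could only produce a fixed pool of document numbers;
# a pool of 1,000 keeps the demo fast.
SPACE = 1_000

counts = {}
for n_rows in (100, 1_000, 10_000, 100_000):
    docs = {random.randrange(SPACE) for _ in range(n_rows)}
    counts[n_rows] = len(docs)
    print(f"{n_rows:>7} rows -> {counts[n_rows]:>5} distinct documents")

# Once every possible document number has appeared, adding rows no longer
# adds documents - so the OLAP-side counting work stops growing with n.
```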

 

Conclusion

Pushing down exception aggregation using the RSRT setting gives a huge improvement for virtually no effort.  In this simple test case, a hand-crafted custom HANA model did perform a little better, but that would need to be weighed against the additional effort to build and maintain it.

How to create Composite provider on top of Multiprovider and SPO (Semantic Partitioned Object) in SAP BI7.3



Hi All,

 

Could you please let me know how to create a CompositeProvider on top of a MultiProvider and an SPO (Semantically Partitioned Object, built based on an InfoCube) in SAP BI 7.3.

 

 

 

Regards,

Raghavendra
