
Back to the Future

Posted by David Hull Mar 30, 2012

Back in the mid-1990s, I was responsible for supporting SAP R/3 running on another vendor’s relational database. This was back when R/3 was the only client-server product SAP offered. R/3 was the product that customers ran their businesses on, and anything they needed to do with that data was done directly in the R/3 system. Performance tuning was a sport, and it wasn’t always clear whether we were winning.

 

I enjoyed it though, because I geeked out over knowing the most obscure tuning parameters that could only be accessed from command-line interfaces and SQL shells. Were they effective? Sure, to some extent. But when I heard Curt Monash use the term “bottleneck whack-a-mole,” I instantly related, because that’s what we were doing – jumping from one problem to the next, with no end in sight.

 

Needless to say, reporting was slow. Customers complained about this, and rightfully so. To address these issues, SAP, like many other vendors, released a data warehouse called BW. In its first incarnation it was designed to take over the CO-PA and LIS reporting functions from R/3. SAP stated at the time that it would continue to support customers using these R/3 reporting mechanisms, but that new development would be focused on BW, because data warehousing was the future. And the rest of the industry agreed, because other vendors were doing the same thing.

 

BW also ran on third-party relational databases, and it brought about a whole new world of performance challenges. To address these issues, most data warehouse vendors (including SAP) introduced “optimizations”: summarizing your data so that there was less of it to report on; aggregations, or materialized views, which are pre-populated views of your data; pre-calculated queries; OLAP caching; and the list goes on. Basically, anything you could do to the data and the database to make reports run faster was fair game.

 

As a customer though, we noticed that we were getting further and further from the source. With each hop along the way – extracting, transforming, loading, summarizing, aggregating, materializing, caching – the latency of the data increased. Not only that, but it was not surprising how many times I saw smart people realize that the summarizations and translations they had implemented had inadvertently changed the meaning of the data, so that it no longer reflected what was actually in the source system, and that they had to start over again.

 

And still performance suffered, because BW stored data in a relational database, and relational databases just didn’t do a great job of anticipating how users would want to manipulate and view their data. Relational databases were not built for reporting, so if you couldn’t anticipate it, you couldn’t tune for it.

 

SAP then came out with BWA, an in-memory query accelerator. The idea was that, even after all those steps were taken to make queries perform, you could then load your data into memory, which would dramatically speed up your queries. And speed up queries it did. But although BWA was great at increasing performance for some reporting scenarios, it did not address all of the fundamental performance problems inherent in BW running on an RDBMS.

 

And this pretty much brings us up to where we are today. Many SAP ERP customers run SAP BW for reporting, and many of those use BWA. Everyone agrees things could be better – but how?

 

Other vendors would tell you that the answer to improving performance is (no surprise here) … more layers of caching and acceleration!

 

I’m sure you’re familiar with the choice of appliances or engineered solutions offered by other database vendors to address performance issues. Look under the covers, and you’ll find they’re sticking with the models we’ve had for decades, adding incrementally in areas where they can cache. Whether it’s SSD for data caching, spreading your load across additional database servers or storage servers, or opting for yet another in-memory accelerator for analytics, it’s all about the same things SAP customers have been doing for years.

 

Do they work? Sure, to some degree. There are certainly benefits to these methods; there are almost always positive results from identifying bottlenecks and coming up with point solutions to address them. But that’s really how we wound up here in the first place – incrementally “fixing” issues with band-aids, workarounds or duct tape.

 

It’s sort of like a rumor mill, isn’t it? As the old adage goes, a story makes its way from person to person to person until eventually it doesn’t even resemble its original intent. The same thing applies to data management with analytical systems – it’s remarkable how often the end result doesn’t resemble the original data. All of these hops that data must take are points at which the data can be changed, latency added, and users frustrated – all because the RDBMS at the beginning didn’t perform well enough.

 

Well, enough already.

 

SAP’s vision for providing a solution was to say: stop the insanity! Many of the existing solutions on the market have not been “fixes”; they’ve been workarounds, only delaying the problem until business volumes increase again. It’s time to fundamentally re-think the solution instead of incrementally adding to the problem. It’s time to use innovative thinking to take advantage of the latest advances in computing – processor, memory and networking technologies – and come up with new, compelling solutions to customer challenges.

 

And that’s where SAP HANA comes in. HANA was designed from the ground up to run entirely in memory across clusters of affordable servers, and it can very effectively handle analytics directly on your transactional data. That’s right: much like where we started, keep your data in your transactional system, but on a platform that can handle analytical requirements directly on that data and still perform well – in fact, perform much better than our myriad layers do today.

 

Latency becomes zero because data doesn’t have to move from one database to another, and no time is required for aggregations, summarizations, materializations, etc.

 

No more making business decisions based on rumors, because the data no longer needs to be transformed.

 

And you save money because you don’t need twelve additional layers of hardware and caching just to allow you to understand what your data means.

 

It seems simple, logical, and sensible, doesn’t it? Shouldn’t it be?

 

It is!

 

Not only is it that simple, customers are taking advantage of it today. With over 250 HANA customers to date, many of them in production, it is a real solution that provides real benefits for real-time business results. And by SAPPHIRE NOW, you will be able to replace the RDBMS running under SAP BW with SAP HANA, and begin enjoying the benefits of going Back to the Future.

If I could sum up “success” in the consumer packaged goods, or CPG, industry in one word, that word would be SPEED — the speed at which the best companies take advantage of the latest consumer trends and defend their market share against the competition. Quickly changing consumer tastes are just one example. Coffee is good for you, then it’s not, then it is again. Consumers want zero trans-fat foods, no-fat cookies and no-sugar sweets, and then decide to forget it and go back to the “good stuff.” On top of that, companies operate in a fierce marketplace with the need to monitor and respond to competitive promotions and pricing changes, and constantly deliver new product innovations and product line extensions. Few industries exist in such a dynamic environment.


To succeed in any business you must have a good product. But in consumer packaged goods, it is absolutely vital you keep a close eye on consumers and the competition, as well as your own business, to stay on top of your results today and ensure you are positioned to win the battle for the consumer’s business tomorrow, next week and next month.

Many people have been talking about the “velocity” aspect of “Big Data” and how it might help businesses, but if you are wondering who is actually doing it, look to consumer packaged goods companies. They are early innovators who have jumped on the opportunity to use Big Data and analytics tools to accelerate the speed of their businesses.


The War in the Grocery Store

Big Data — and specifically the ability to analyze vast amounts of data instantly — is a key weapon in helping consumer goods companies win the battle for retail shelf space and ultimately customer sales. Accurate sales forecasts from distributors and retailers, as well as actual point-of-sale (POS) data, are critical in helping determine production volumes and distribution, and fine-tuning pricing and promotion strategy. Consumers can be very price-sensitive and switching costs are virtually zero. If there is incentive for a consumer to switch to another brand — because your price is too high or your product is not in stock — the consumer may discover they prefer the other brand and continue to buy that brand in the future, resulting in lost sales for months, if not years, into the future. The better informed a CPG company is with immediate and up-to-date POS information from every retail outlet, the better positioned they are to crank up production to avoid stock-outs or run additional promotions or incentives in regions with retail chains where volumes are dropping. In addition, a company may find itself in that luxurious situation where it can increase prices and/or reallocate production to improve profit margins in areas where customers are placing strong demand on their product and sales are doing well.


One company that is already delivering unheard-of “speed of thought” insight is Nongfu Spring, the largest bottled water company in China. Nongfu used to take two days to collect and produce reports on a very large volume of POS data from all of its retailers. These reports were then used by executives to make their key business decisions. Unfortunately, two days is a lifetime in this very competitive market, and Nongfu was looking for a way to provide reports on its business at the speed of its business – essentially up-to-date analytics on every transaction as it happens. By working with SAP and leveraging new innovations like SAP HANA, Nongfu was able to dramatically reduce the lag time in producing insight from the retail data it collects. Preparing and loading the data, which used to take a day, now happens in real time. Reports and queries run 200 to 300 times faster. For example, one business process that took 24 hours to complete now finishes in 37 seconds.

Nongfu’s executives and sales force now have the immediate insight to make smarter decisions in response to the dynamics of the market. This has allowed them to do a much better job of serving their retailers with the right inventory, and their consumers can be assured of finding their favorite products on the shelf. To learn more about Nongfu, check out this video.

Colgate-Palmolive has also been navigating the fast-paced CPG market – in its case for more than 200 years, now operating in more than 200 countries. And like Nongfu, Colgate-Palmolive has seen an opportunity to partner with SAP to redefine the speed and accuracy with which it can provide its business users better analytics based on very large volumes of data. A good example of where these solutions could improve results and customer satisfaction is helping Colgate-Palmolive collaborate with its retailers to do more effective promotion planning. Recently, by leveraging SAP HANA, Colgate-Palmolive was able to improve processing times on key reports from 77 minutes to 13 seconds, putting sales and profitability information into the hands of sales executives far faster than before. Now, when salespeople go into meetings with retailers, they have up-to-the-minute information on how Colgate-Palmolive products are faring in those stores, and they can adjust pricing and promotions to respond to the demands of their retailers and customers while also ensuring they are tracking to their internal business goals and financial metrics. To learn more about Colgate-Palmolive, you can watch a video here.


Vision, Powered by Innovation, Reaps the Biggest Rewards
So how did Nongfu and Colgate-Palmolive make these significant shifts in speed? Technology such as in-memory computing and analytics software certainly plays an important role. But so does the vision of their executives in understanding how to apply these innovations to the business and in prioritizing which business areas will reap the biggest rewards from the investment in these new solutions. Furthermore, once you are in a position to provide the line-of-business organizations throughout the company with these breakthroughs – like instant access to accurate sales data – business leaders must also be prepared to empower end users to explore data, synthesize information and make decisions, such as distribution or production scheduling, to dramatically increase the speed and effectiveness of the business. In this way, Big Data can potentially change not only the technology landscape of a company but also its business model, and ultimately its results.


This blog is also posted on Forbes.

In the last year I have been talking to many customers about HANA and about deploying analytics on iPad and Android devices. Some executives have taken a look at HANA, and we are assisting them in defining visions and business cases for HANA deployments. Other executives are very excited about HANA and its potential, and are asking a lot of questions before broaching the HANA topic with their leadership. I started collecting all these questions and thought it best to share some of them: a HANA 101 document for customers that are starting to think about HANA.

 

A lot of our discussions get sidetracked on what HANA is – Will HANA replace BW? Other questions range from: What exactly is big data? Why do I need HANA? What are the business reasons for thinking about HANA? When should a company start to think about moving to HANA? Is HANA bleeding-edge technology? (I don’t want to get burned by new concepts.) And a whole lot of others. So I have begun writing a four-part series titled “Tips & Tricks for a New HANA Installation,” and my intention is to take the reader through a logical path of discovery by answering some of the common questions I face on a weekly basis.

 

On the big data question – when is my data size too big to handle? – I’ll give you a short story. Last year an SAP colleague of mine from Palo Alto insisted that I meet someone from Facebook. In that meeting I learned what ‘big data’ truly means to companies like Facebook, Google and Twitter. While it was humbling on one side, it was also truly eye-opening to find that these companies are not struggling with 50- and 100-terabyte data warehouses; they dwell in 1-to-5-exabyte data containers. I realized that the game changes completely at these volumes, and talking to these folks was a real education in query optimization over very large data sets. What I learned there makes our HANA deployments a little simpler. It also points right back to ensuring that we understand the business vision and expectations prior to deploying the HANA solution.

 

So the next time you are worried that you have 40 or 200 billion records, don’t think you are living at the edge of the ‘big-data’ world – just at the edge of your own perception. See the attached first part of the Tips & Tricks for a new HANA implementation.

 

Find more:

'SAP In Memory HANA' Group in LinkedIn: http://spr.ly/LinkedInSAPHANA

SDN 'Tips & Tricks for a New HANA Initiative': https://cw.sdn.sap.com/cw/docs/DOC-148065?uploadSuccess=true

BI Strategy Compliance Poll: http://lnkd.in/ga8tZH

Hello everybody,

BW powered by HANA has now been in Ramp-Up for over 20 weeks, and the results coming back from many of these early adopters have been very favorable. In fact, our customers have experienced a performance boost for their BW enterprise data warehouse in the areas of data loads, query performance and planning capabilities. They’ve also seen simplified and faster data modeling and remodeling.

 

In this blog I would like to take the opportunity to summarize the major areas of performance improvement with SAP NetWeaver BW on HANA and give some examples from Ramp-Up (RU) and Proof of Concept (POC) customers.

 

Faster activation of data via HANA optimized DataStore Objects

DataStore objects are used to create consistent delta information from various sources. In the traditional RDBMS-based architecture, the delta calculation is performed by the application server and requires data reads from the RDBMS, so round trips to the application server are needed. In the BW on HANA approach, the delta calculation is performed entirely in the HANA in-memory database, and round trips to the application server are no longer required. Based on this architectural change, we measured up to 10 times faster activation in our labs.

These excellent results were also confirmed by the majority of our POC and Ramp-Up customers, who reported 5 to 12 times faster activation on average and, in some scenarios, greater than 30 times faster activation.

 

Faster data loads to HANA optimized InfoCubes

Traditional InfoCubes are tailored to an RDBMS and consist of two fact tables (an f-table with all the details and an e-table with compressed data) plus the related dimension tables. HANA-optimized InfoCubes are flat structures without dimension tables and e-tables. Therefore we can provide roughly 5 times faster loads and greatly simplified data modeling and remodeling. For example, one of our RU customers in the consumer products industry saw 5 to 7 times faster loads to HANA-optimized InfoCubes. A POC customer also reported great results on change management performance after structural changes (adding or deleting fields): before moving to BW on HANA, the process of making structural changes plus the related data realignment and rebuilding of BWA indexes took around 7 hours; with BW on HANA this was reduced to less than one minute.

 

Excellent query performance

With BW on top of HANA we can provide fast query access to all DataStore objects and InfoCubes, as all are based on in-memory data and columnar storage. Data access is therefore much faster than with the traditional RDBMS approach. Query performance gets a further boost because OLAP calculations are pushed down to the HANA in-memory calculation engine. This is supported for InfoCubes as well as DSOs (with SID generation turned on), and it enables the same kind of excellent query performance whether we report on InfoCubes or DSOs.

Overall, the query performance results reported back by our early adopters of BW on HANA correspond to BWA query performance, i.e. roughly 10 to 100 times faster than an RDBMS. In addition, there is no replication of data, and indexes on InfoCubes and InfoObjects are no longer required. RU customers have reported 20 to 30 times faster queries; in one example an RU customer saw an increase of 70 to 100 times, and in another, greater than 400 times. This supercharged performance enabled our customers to run scenarios they simply could not run before.

 

BW Integrated Planning powered by HANA

Traditional planning runs all the planning functions in the application server and requires data reads from the database server. In-memory planning runs planning functions such as aggregation, disaggregation and conversions in the SAP HANA platform, and can thereby execute planning functions up to 10 times faster. Our early-adopter RU customers have just started leveraging the in-memory planning functions, and I am confident that we will be able to provide more details on this at a later point in time.

 

Overall, the early feedback from our many BW on HANA early adopters has been very positive, both on the process of migrating to BW on HANA and on the related performance boost for BW.

According to industry analysts, most organizations are ill-prepared to address the technical or management challenges posed by big data. As a direct result, few will be able to effectively exploit this trend for competitive advantage.

 

For SAP customers this finding is nothing to be afraid of! The average company can immediately start to extend its competitive advantage by being able to easily manage and analyze huge amounts of valuable data. How can they do this? By leveraging the power of SAP HANA with easy-to-deploy solutions.

 

A lot of companies have tons of data in their ERP system, which documents and supports the running of their business. As the business grows and time goes by, the data keeps mounting.

 

They need to increase their efforts to manage these huge amounts of data – they start aggregating their data, extracting it into business warehousing solutions, accepting that analysis is done on out-of-date data and that even closing processes slow down.

 

What if you could change this in a matter of weeks? SAP just released new rapid-deployment solutions for ERP that take advantage of SAP HANA. Like all rapid-deployment solutions, they are fixed-scope solutions with a fixed-price service offering that can be implemented with low risk within a few weeks.

 

Currently there are three solutions available:

  • SAP ERP rapid-deployment solution for profitability with SAP HANA
  • SAP ERP for accelerated finance and controlling with SAP HANA
  • SAP ERP rapid-deployment solution for operational reporting with SAP HANA

 

If you have huge amounts of data in your company, these solutions will change your ERP world immediately. They will for example:

  • Facilitate reporting on real-time data
  • Allow end-users to do flexible analysis
  • Enable a deeper insight into ERP data as data aggregation can be avoided
  • Accelerate your month-end closing processes

 

In the next blogs, I will give you more insight into each of these HANA-based rapid-deployment solutions for SAP ERP.

PUIG is an international fragrance, cosmetics, and fashion company based in Barcelona, Spain, and a global trendsetter in translating fashion brands’ images into the world of perfume. It is critical for PUIG to be able to rapidly and easily analyze the huge volumes of data the organization compiles, to hear what customers and partners are saying, and to reflect that feedback in its products and services.

 

For some time PUIG has been looking for the right combination of technology to tap into this data as transactions unfold in real time and to have the most up-to-the-minute analysis of that information. Their wait has ended.

 

We’re delighted to report that our HANA strategic technology partner IBM has recently announced that PUIG is its first customer in Spain to go live with Business Warehouse on SAP HANA on the IBM platform – and PUIG is thrilled with the impressive results.

 

Achieved in collaboration with PUIG, SAP, Accenture and IBM, this project represents an important milestone that not only allows PUIG to run their data queries much faster, but also allows them to integrate new data sources into their analytics which previously could not be loaded due to system performance issues. PUIG is now seeing performance improvements of up to 500X, coupled with outstanding compression rates versus their previous SAP BW implementation. With the maximum performance that SAP HANA provides, completely new scenarios have been opened up for PUIG’s Analytics Team, who can focus even more on new analytical views to increase the added value that BW on SAP HANA provides to the company’s management.

 

These results in performance, scalability and high availability at PUIG have validated what IBM is seeing on a daily basis from many of their same customers who are deploying joint infrastructure solutions for SAP HANA.

Authored by Michaela Degbeon


In this blog I will provide a brief summary of a new capability in BW 7.3 called BW Workspaces, a data modeling environment for the business user.

 

A BW Workspace is a highly flexible but safe place for BW data modeling. It is maintained and controlled by IT and can be used by local departments to react quickly to new and changing business requirements. A BW Workspace is tightly connected to the powerful EDW capabilities of integrated and consolidated data, while at the same time offering a rich environment for creating ad-hoc data models in a straightforward way. These can be models that connect central EDW data with local or external data, such as flat-file content. Furthermore, the whole thing runs on BW powered by SAP HANA, so BW Workspaces perfectly combine integration, independence and speed. HANA enables super-fast data combination and query execution for BW Workspace data models, since the processing is executed directly in the HANA engine, and BW Workspaces run completely in-memory.

 

Want to know more? 

There are three easy steps to understanding a simple BW Workspace end-to-end scenario.

  1. Firstly, a BW Workspace needs to be defined in the BW backend – a task that is normally done by a central IT team. The definition allows IT to control size, memory consumption and authorized access. In addition, it defines what part of the EDW data can be consumed by the BW Workspace: sales data, product data or whatever central data is intended to be used for further modeling. Good to know: the data itself is not copied from the central EDW to the BW Workspace but logically exposed, so no redundant data replication is necessary.
  2. Secondly, the BW Workspace modeling and enrichment is done via the BW Workspace Designer tool. Key users can plug external local data, e.g. from flat files, into the central data. LOB- or department-specific data models can easily be crafted in a smooth and intuitive manner. Technically, you combine local data models with central models via JOIN and UNION operations and build CompositeProvider definitions that are processed completely in-memory. A wizard guides users step by step through the procedure of merging central and local models, defining the CompositeProvider and creating queries. The Workspace Designer itself runs in a browser, so no extra software needs to be installed on the office PC.
  3. Lastly, regarding data consumption, there are multiple options available: you can go with the SAP BusinessObjects Analysis clients or with the SAP BEx Query Designer. Furthermore, any BEx queries on top of a CompositeProvider can be consumed via the standard BW interfaces MDX and BICS using all certified BI clients.


Have you become even more curious and want to try it out yourself? You can find further details and exercises at the following link: http://scn.sap.com

It’s now ten months since we showed SAP HANA live on the SAPPHIRE NOW show floor for the first time. Our test drive area at the 2011 Orlando conference showcased BW on HANA, new accelerated applications, and the winning application ideas from our HANA InnoJam competition. Those of you who joined us at the event saw HANA take its first steps. If you join us in 2012 you will see HANA take great strides.

 

Customers

The highlight for HANA at the 2012 event will be customers.  HANA went into general availability about four weeks after last year’s event. This year we are inviting many of our first customers to speak about their experience with HANA and how it is impacting their business.

 

Expanded Presence

The biggest change for this year’s conference is that HANA is being showcased across the entire show floor. You will see industry, line of business and mobile applications running on HANA. We will be discussing HANA and the cloud. We will be showing many analytic aspects of HANA such as its role in data warehousing, BI and analytic applications. And, as we did at SAP TechEd and SAPPHIRE NOW Madrid, we will again showcase partner hardware, discuss the technology, and show innovation on the HANA platform in our Database & Technology campus.

 

Conversations and Content

We are also expanding the business conversation around HANA. For example, we (and our customers) will be talking about how HANA can not only save you from Big Data, but also allow you to benefit from it. We will showcase the role of HANA in creating and running a real time business. And, as in 2011, we will be bringing our A players to present at both SAPPHIRE NOW and the co-located ASUG annual conference. Expect dozens of presentations, panel discussions and microforum sessions to feature HANA experts from around the world.

 

Whichever aspect of HANA you are interested in, please join us. You can find event information as well as a registration page at www.sapandasug.com. And if there are other HANA topics you would really like us to cover at the conference please suggest them by commenting below.

 

See you in Orlando!

What a great day! A day made possible by some Silicon Valley startups and a small but ambitious team of cross-functional SAP employees. With more than 26 startup companies in attendance, VC members from SAP and other funds, and a few members of the press, the day was a fast-paced exploration of innovations around Big Data. The solutions varied as much as the presenters and their styles. The “ShamWOW” presenter award, if there were one, would have gone to the CEO, CTO and founder of LongJump, Pankaj Malviya, who delivered his presentation with infomercial pace, tension and passion. I am teasing him, but I loved it – he nailed his overview and the unique nature of LongJump.

 

The day was filled with presentations, from the startups and SAP. Aiaz Kazi @aiazkazi presented to the crowd – “How to engage SAP Marketing.” He never said it would be easy, but he did commit himself, Amit Sinha @tweetsinha and myself @leatherman_SAP to making it as easy as possible. Vishal @vsikka also restated SAP’s policy that HANA is for everyone—friend or competitor. In general, there was a solid focus on making HANA available as a development platform for startups. Nino Marakovic from SAP Ventures expressed his interest—because of HANA—in investing in earlier rounds of startups. It was an interesting day all around.

 

As you might know, SAP is celebrating its 40th anniversary this year, and I think it still has the passion of a startup. That passion was on full display when around 300 employees showed up to meet the startups at the networking event. Each employee was given 6 million euros to “fund” the startups they thought were the best. 300+ VCs roaming around a room with funds in hand is the fodder of many CEOs’ dreams – sadly, the notes had no cash value. It was a fast and furious session for the startups to engage the SAP employees; the startup CEOs and CIOs were bombarded with questions ranging from the very technical to how many Twitter followers they had. Congratulations to Zettaset for winning the “Most Likely to Succeed” prize, awarded by the SAP employees during the networking session.

 

The Big Data event was our first SAP Startup Forum, with the intent of exposing SAP to the most innovative and disruptive startup companies in Silicon Valley. The next forum will be held June 8, 2012, on the topics of mobile and big data analytics.

 

A special thanks to Kaustav Mitra for taking an idea to execution with amazing results, in a timeframe worthy of Jerry Reed’s “East Bound And Down” (http://www.youtube.com/watch?v=y0F2aYivCmU) – or, as we have come to call it, “HANA Time.”

I’d like to discuss the prerequisites that are required to be able to move to SAP NetWeaver BW Powered by SAP HANA. There are two main areas that we need to look into – the SAP NetWeaver BW side and the SAP HANA database side. Let’s start with what you need on the BW side to get ready to move to SAP NetWeaver BW Powered by SAP HANA.


The minimum release required for SAP NetWeaver BW is SAP NetWeaver BW 7.3 SP5. Customers in ramp-up now are mostly already using SP6; this support package, however, is only available to ramp-up customers. So if you plan to wait for general availability (GA), which will come in Q2 2012, you can also wait for SP7, which should become available at the time of GA.

Furthermore, the upgrade to SAP NetWeaver BW 7.3 – independently of a SAP HANA migration – already requires switching from the old role-based BW authorizations to the new analysis authorizations that were introduced with SAP NetWeaver BW 7.0. Even though it was possible to keep using the old authorization concept with BW 7.0, this is no longer possible: you need to migrate to analysis authorizations before the upgrade to SAP NetWeaver BW 7.3.

BW 7.3 doesn’t require a Unicode system. However, if you want to use SAP HANA as the underlying database of your BW system, you need a SAP NetWeaver BW Unicode system. The good thing is that you don’t need to convert to Unicode before the database migration: during the migration of your currently used database (Oracle, IBM DB2, etc.) to SAP HANA, you can easily perform the Unicode conversion as well.


Keep in mind that your SAP NetWeaver BW system needs to be installed on an application server separate from the SAP HANA database server. Furthermore, the SAP NetWeaver ABAP stack that the BW system runs on has to be separated from the SAP NetWeaver Java stack; dual-stack installations are no longer supported with SAP NetWeaver BW running on HANA. If you use a BW Accelerator today, it will become obsolete after the migration to BW on HANA. Aggregate tables will also be eliminated with the SAP HANA database migration, as the entire system is now stored in memory.

The conversion of InfoCubes and DataStore objects (DSOs) to HANA-optimized InfoCubes and DSOs is a separate, optional step after the migration of your SAP NetWeaver BW system to SAP NetWeaver BW Powered by SAP HANA is done. After the database migration, everything from a BW perspective works the same way as before; however, all the data is stored in memory in the underlying SAP HANA database, and thus reporting and data access become incredibly fast. If you want to take even more advantage of the in-memory capabilities, such as the optimized data structures introduced with HANA-optimized InfoCubes and DSOs, you can convert your InfoCubes and DSOs to these new types of InfoProviders once the technical SAP HANA database migration is done. Another blog in the coming weeks will discuss this in more detail. The important thing to mention from a migration perspective is that you need to migrate any 3.x dataflow to a 7.x dataflow if you intend to convert the corresponding InfoCubes or DSOs to HANA-optimized InfoCubes or DSOs. If an InfoCube or DSO in your SAP NetWeaver BW Powered by SAP HANA system still uses a 3.x dataflow with transfer and update rules, you will not be able to convert this InfoProvider to a HANA-optimized InfoProvider.


Let’s now quickly look into the prerequisites for the technical database migration part of your SAP NetWeaver BW Powered by SAP HANA migration. The SAP HANA release required for this migration is SAP HANA 1.0 SP3. Make sure you order SAP HANA hardware only from our certified hardware partners and that the sizing is done according to your environment. You can find all SAP-certified HANA hardware configurations in the product availability matrix on the service portal. All SAP-certified hardware partners will provide you the SAP HANA hardware with the SAP HANA database system already installed.

Keep in mind that the database migration of an existing SAP NetWeaver BW system to BW Powered by HANA must be performed by a certified OS/DB migration consultant for full supportability.


Last but not least you should also evaluate all SAP NetWeaver BW Add-Ons that you intend to use after the migration. Not all Add-Ons for BW are yet supported for BW Powered by SAP HANA even though they might be supported for SAP NetWeaver BW 7.3 running on a relational database.

Today's organisations are mired in the myth that planning is a process that needs to happen on a periodic basis. There are examples where this is really relevant – certain types of financial planning, for instance. If you are a publicly listed company, it may be necessary to create a financial plan for the year which is reported as a forecast to shareholders and the wider market. This is normally updated to the market quarterly – allowing companies to push for sales at quarter end and report earnings.

 

This does, however, drive some interesting behaviours. For instance, many companies discount heavily at quarter-end and even more heavily at year-end. This drives customer behaviour: customers learn to spend during these high-discount periods. This in turn drives demand for heavier discounts and skews earnings to these periods – tying vendors more heavily into the cycles of periodic planning and reporting. Imagine having to report the position one month before year end! Taking this into supply and distribution: how do companies keep stock levels low on the basis of forecast sales – and of what types of product, and of course, what part numbers? You want to ask questions like: what are the SKUs I am most likely to run out of next month? What is my fulfilment risk?

 

If we take this to the business I work in – professional services – we have a similar challenge. We have a financial plan which is set at the beginning of the year and – sometimes – updated during the year. In addition to this we have sales forecasts based on a weighted sales pipeline, and a resource forecast based on sold and nearly-sold work, including holidays. On top of that we have actuals for sales, revenue, resource usage and absence. I want to ask questions like: if we sell these key projects, what will the impact on revenue and recruitment be? What are the areas where I don't have a mid-term pipeline to fill currently expected resource demand? Where are the areas of resource in which I have the least availability?

 

What both of these scenarios have in common is that the revenue and risk profile changes very quickly. A single win or loss, a supplier going out of business, a volcano erupting or a shipment failure can dramatically change supply or demand and require a decision to be made immediately. And what's more, legislative demands in certain industries can mean that risk exposure may have to be reported much more quickly. It is likely that global banks, for example, will need to show risk exposure in real time, and this may have a knock-on effect up their supply chain as they look to understand risky suppliers.

 

So what does all this have to do with technology? SAP BusinessObjects Planning and Consolidation (BPC) allows organisations to plan scenarios like financial planning and to consolidate group accounts. But this is just the tip of the iceberg of what the software platform is capable of, and very soon BPC will run on the SAP HANA platform. This has two very important technology benefits that relate to the scenarios above:

 

1) Real-time planning

 

Performance improvements mean that we can reasonably expect planning to happen in real time. This isn't relevant to all planning scenarios, because some are intended to be periodic, but many planning cycles are born out of the need to push planning scenarios down through a business. And each time this happens, friction is introduced into the process. With BPC on HANA, plans can be immediately disaggregated – dramatically reducing planning process friction.

 

In addition, we can expect to be able to report on plan versions versus actuals in real time – meaning that my resourcing example, even with thousands of employees and line-item detail about absence and sales pipelines, can be reported in real time. So when we win a deal, place a resource, or whatever else, the current position can be reported immediately.

 

2) Detail planning

 

In other scenarios, we currently don't plan to a detailed level because of performance problems. There are of course reasons why you might not want a detailed plan, but there are scenarios where that detail might be really important. Supply chain optimisation projects focus on aspects of this, but suppose once more that you could link the sales pipeline to SKU-level planning. Also suppose that your suppliers sometimes cancel shipments, and this can lead to SKUs which then can't be used – or worse, products which are built too early and then lie in warehouses, unused and unsaleable.

 

It would be possible to place a risk analysis on each item in the sales pipeline for left-over stock – and in addition, to know exactly when you had to commence ordering through your supply chain and production to meet customer demand, all while showing a real-time position of forecast revenue and profit.

 

Moving to the board

 

So let's take this up a level. Currently most board members receive a monthly information pack on which they make strategic decisions. But my experience of most board members is that they are focussed on overall strategy plus one or two operational elements. My belief is that the next-generation board-level director will expect to have both operational and strategic information at their fingertips, and they will expect it to be to-the-minute.

 

I have no doubt they are interested in sales and profitability trend-analysis and performance against KPIs - in order to make decisions on strategic direction. But any CFO is interested in their current cash risk position. Any CEO is interested in the biggest risk deals. And those people differentiating in the market will be those that allow their board to make fast decisions based on current positions.

 

And this is exactly what BPC on HANA is designed to do.

Let me briefly introduce myself – my name is Susanne Knopp, and I am the rollout lead for SAP HANA rapid-deployment solutions (RDS), which gives me early knowledge of the capabilities of new solutions. My task is to equip our internal SAP colleagues and SAP partners with knowledge about our rapid-deployment solutions so that they are capable of doing the greatest job possible.

 

One of the tasks on my agenda is therefore to get people started on their way to becoming experts. My goal in writing this blog is to answer questions that you might have:

 

  • What is a rapid-deployment solution?
  • Why is SAP delivering new rapid-deployment solutions in the context of SAP HANA?
  • What can I expect from the rapid-deployment solutions leveraging the power of SAP HANA?

 

The first question has already been answered several times, and I could not have made a better response myself – have a look at the blog SAP Rapid Deployment Solutions – The Basics. So let us focus on the second question: “Why is SAP delivering new rapid-deployment solutions in the context of SAP HANA?”

 

One of the key attributes of SAP HANA is speed, and with rapid-deployment solutions for SAP HANA we are delivering the power of SAP HANA through a high-speed implementation. We want customers to receive high-impact business value and to profit from the new technology, but through shorter implementation projects that bring visibility into how much it will cost and when it will be deployed. A rapid-deployment solution implementation typically takes no longer than 12 weeks – and for solutions powered by SAP HANA (e.g. accelerators) you will find that in most cases it takes between 3 and 6 weeks. Maybe this does not sound like an enterprise IT solution implementation – but it is!

 

So what can someone expect from a rapid-deployment solution leveraging the power of SAP HANA? You will find we are delivering the full advantage of the HANA platform, ready-to-use business scenarios and detailed solution documentation – all within a tremendously short implementation timeframe, with low risk and predictable costs.

 

A significant portfolio of rapid-deployment solutions with SAP HANA is already available. This includes:

 

  • SAP ERP rapid-deployment solution for profitability with SAP HANA
  • SAP ERP for accelerated finance and controlling with SAP HANA
  • SAP ERP rapid-deployment solution for operational reporting with SAP HANA
  • SAP rapid-deployment solution for pipeline analysis with SAP HANA
  • Rapid Database Migration of SAP NetWeaver Business Warehouse to SAP HANA
  • SAP CRM rapid-deployment solution for analytics with SAP HANA
  • SAP Demand Signal Management rapid-deployment solution

 

SAP is planning to release more RDS HANA solutions every quarter.

 

I hope this gave you a short overview on rapid-deployment solutions with SAP HANA. In the coming weeks, I hope to provide you with further details on these available solutions. In the meantime, visit the SAP Rapid Deployment Solutions pages to begin exploring the solutions yourself.

I will discuss the process for converting a SAP NetWeaver BW System to an SAP NetWeaver BW System Powered by SAP HANA in this blog. The main elements covered are:

  • The primary concepts in the conversion process to SAP HANA.
  • What SAP HANA release is required to run SAP NetWeaver BW on HANA?
  • What are the requirements for SAP NetWeaver BW to Run on HANA?

 

The Primary Concepts in the Conversion Process to SAP HANA

When evaluating a conversion of an SAP NetWeaver BW system to SAP HANA, there are some defined concepts that describe how this is accomplished. The conversion process is called an Operating System/Database Migration (OS/DB migration). This process uses standard SAP tools that have been time-tested for reliability. In basic terms, it allows an SAP system to be migrated from one operating system or database system to another. The process includes preparing a new instance on the target platform, an export of the source system, and then an import into the target system. There are several steps that have to be completed both in preparing and in completing the migration.

** Important NOTE: All productive system migrations must be conducted by an individual who is OS/DB migration certified. **

 

When evaluating SAP NetWeaver BW 7.30 running on SAP HANA, consider HANA a standard database in regard to basic functionality. For regular database operations, this is the best way to understand how SAP HANA relates to SAP BW. However, SAP HANA has far more capabilities than a standard relational database: in the SAP HANA system, operations can be passed down to SAP HANA's MPP resources, whereas other relational databases have no such in-database application logic.

 

What SAP HANA release is required to run SAP NetWeaver BW on HANA?

In order to run SAP NetWeaver BW on SAP HANA, the SAP HANA system must be on SAP HANA release 1.0 SPS3 or greater; the functionality to support SAP NetWeaver BW is not available in SAP HANA until this release. There are currently certain restrictions in production as to what concurrent systems can run on the same SAP HANA instance: only one (1) application (e.g. SAP NetWeaver BW, SAP Smart Meter Analytics, etc.) plus SAP HANA data marts are supported in the productive system. The primary reasons are lifecycle management topics related to resource allocation/commitment and possibly disruptive software patch applications.

 

What are the requirements for SAP NetWeaver BW to Run on HANA?

There are several system requirements in order to conduct an SAP BW system migration to SAP HANA. The following items must be in place for system migration:

 

  1. SAP NetWeaver BW must be upgraded to 7.30 SP05 or greater (SP06 is available to Ramp-Up customers upon request).
  2. The SAP NetWeaver BW system must be a Unicode system. The Unicode conversion can be completed as part of the migration to SAP HANA, as long as the project work has been completed.
  3. SAP NetWeaver BW 7.30 no longer supports the role-based authorization concept. In order to run SAP NetWeaver BW release 7.30, all authorizations must be upgraded to the analysis authorization concept that was delivered as part of SAP NetWeaver 7.0.
  4. Customers need to evaluate their source system level and the add-on applications that were installed as part of BW 7.0. Not all add-ons are supported on SAP NetWeaver BW 7.30, and even the add-ons that are supported on BW 7.30 are not necessarily supported and available yet on SAP NetWeaver BW Powered by SAP HANA.
  5. Ensure that the hardware procured for the SAP HANA system is certified, comes from a certified partner, and is sized appropriately for your environment.
  6. Review and understand all additional elements required before and after the migration.

 

This was my high-level overview of what is required from a technical perspective to perform a conversion from a traditional relational database to the in-memory capabilities of SAP HANA. Read our upcoming blogs on post-migration topics that further enhance and optimize the system.

Introduction

A long time ago when I first started blogging on SDN, I used to write frequently in the style of a developer journal. I was working for a customer and therefore able to just share my experiences as I worked on projects and learned new techniques. My goal with this series of blog postings is to return to that style but with a new focus on a journey to explore the new and exciting world of SAP HANA.

 

At the beginning of the year, I moved to the SAP HANA Product Management team and I am responsible for the developer persona for SAP HANA. In particular I focus on tools and techniques developers will need for the upcoming wave of transactional style applications for SAP HANA.

I come from an ABAP developer background having worked primarily on ERP; therefore my first impressions are to draw correlations back to what I understand from the ABAP development environment and to begin to analyze how development with HANA changes so many of the assumptions and approaches that ABAP developers have.

 

Transition Closer to the Database

My first thought after a few days working with SAP HANA was that I needed to seriously brush up on my SQL skills. Of course I have plenty of experience with SQL, but as ABAP developers we tend to shy away from the deeper aspects of SQL in favor of processing the data on the application server in ABAP. For the ABAP developers reading this: when was the last time you used a sub-query or even a join in ABAP? Or even a SELECT SUM? As ABAP developers, we are taught from early on to abstract the database as much as possible, and we tend to trust the processing on the application server, where we have total control, instead of the “black box” of the DBMS. This situation has only been compounded in recent years as we have a larger number of tools in ABAP which will generate the SQL for us.

 

This approach has served ABAP developers well for many years. Let’s take the typical situation of loading supporting details from a foreign key table. In this case we want to load all flight details from SFLIGHT and also load the carrier details from SCARR. In ABAP we could of course write an inner join:

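(The original post showed this as a code screenshot, ABAP_Sample1.png; in its place, a minimal sketch using the classic SFLIGHT/SCARR flight-data model – the report name and field selection are illustrative.)

REPORT zflight_join_demo.

* Result structure: flight details enriched with the carrier name.
TYPES: BEGIN OF ty_flight,
         carrid   TYPE sflight-carrid,
         connid   TYPE sflight-connid,
         fldate   TYPE sflight-fldate,
         carrname TYPE scarr-carrname,
       END OF ty_flight.

DATA lt_flights TYPE STANDARD TABLE OF ty_flight.

* Inner join: the database returns flight rows already
* enriched with the carrier details from SCARR.
SELECT f~carrid f~connid f~fldate c~carrname
  FROM sflight AS f
  INNER JOIN scarr AS c
    ON f~carrid = c~carrid
  INTO CORRESPONDING FIELDS OF TABLE lt_flights.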

However, many ABAP developers would take an alternative approach, performing the join in memory on the application server via internal tables:

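(Again the original screenshot, ABAP_Sample2.png, is replaced by a minimal sketch, reusing the ty_flight type from the first example.)

DATA: lt_sflight TYPE STANDARD TABLE OF sflight,
      lt_scarr   TYPE SORTED TABLE OF scarr WITH UNIQUE KEY carrid,
      ls_scarr   TYPE scarr,
      ls_flight  TYPE ty_flight,
      lt_flights TYPE STANDARD TABLE OF ty_flight.

FIELD-SYMBOLS <fs_flight> TYPE sflight.

* Read both tables separately...
SELECT * FROM sflight INTO TABLE lt_sflight.
SELECT * FROM scarr   INTO TABLE lt_scarr.

* ...then perform the "join" on the application server,
* record by record, via a lookup into the sorted table.
LOOP AT lt_sflight ASSIGNING <fs_flight>.
  MOVE-CORRESPONDING <fs_flight> TO ls_flight.
  READ TABLE lt_scarr INTO ls_scarr
       WITH TABLE KEY carrid = <fs_flight>-carrid.
  IF sy-subrc = 0.
    ls_flight-carrname = ls_scarr-carrname.
  ENDIF.
  APPEND ls_flight TO lt_flights.
ENDLOOP.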

This approach can be especially beneficial when combined with the concept of ABAP table buffering. Keep in mind that I’m comparing developer design patterns here, not the actual technical merits of my specific examples. On my system the datasets weren’t actually large enough to show any statistically relevant performance difference between these two approaches.

 

Now if we put SAP HANA into the mix, how would the developer’s approach change? In HANA the developer should strive to push more of the processing into the database – but the question might be: why?

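(The original HANA screenshot, HANA_Sample1.png, is likewise replaced by a sketch: the same requirement with the work pushed down to the database – join and aggregation executed in one statement so only the small result set travels back to the application server. The revenue column is an illustrative choice.)

TYPES: BEGIN OF ty_revenue,
         carrid   TYPE sflight-carrid,
         carrname TYPE scarr-carrname,
         total    TYPE sflight-paymentsum,
       END OF ty_revenue.

DATA lt_revenue TYPE STANDARD TABLE OF ty_revenue.

* Join and aggregation are both executed inside the database.
SELECT f~carrid c~carrname SUM( f~paymentsum )
  FROM sflight AS f
  INNER JOIN scarr AS c
    ON f~carrid = c~carrid
  INTO TABLE lt_revenue
  GROUP BY f~carrid c~carrname.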

Much of the focus on HANA is that it is an in-memory database. I think it’s pretty easy for most any developer to see the advantage of all your data being in fast memory as opposed to relatively slow disk based storage. However if this were the only advantage, we wouldn’t see a huge difference between processing in ABAP. After all ABAP has full table buffering. Ignoring the cost of updates, if we were to buffer both SFLIGHT and SCARR our ABAP table loop join would be pretty fast, but it still wouldn’t be as fast as HANA.

 

The other key point of HANA’s architecture is that, in addition to being in-memory, it is also designed for columnar storage and for parallel processing. In the ABAP table loop, each record in the table has to be processed sequentially, one record at a time. The current versions of ABAP statements such as these just aren’t designed for parallel processing; instead, ABAP leverages multiple cores/CPUs by running different user sessions in separate work processes. HANA, on the other hand, has the potential to parallelize blocks of data within a single request. The fact that the data is all in memory further supports this parallelization by making access from multiple CPUs more useful, since data can be “fed” to the CPUs that much faster. After all, parallelization isn’t useful if the CPUs spend most of their cycles waiting on data to process.

 

The other technical aspect at play is the columnar architecture of SAP HANA. When a table is stored columnar, all data for a single column is stored together in memory. Row storage (which is how even ABAP internal tables are processed) places data in memory a row at a time.

This means that, for the join condition, the CARRID column in each table can be scanned faster because of the arrangement of the data. Scans over unneeded data in memory don’t have nearly the cost of the same operation on disk (where you have to wait for platter rotation), but there is a cost all the same. Storing the data columnar reduces that cost when performing operations that scan one or more columns, and it also optimizes compression routines.

 

For these reasons, developers (and especially ABAP developers) will need to begin to re-think their applications designs. Although SAP has made statements about having SAP HANA running as the database system for the ERP, to extract the maximum benefit of HANA we will also need to push more of the processing from ABAP down into the database. This will mean ABAP developers writing more SQL and interacting more often with the underlying database. The database will no longer be a “bit bucket” to be minimized and abstracted, but instead another tool in the developers’ toolset to be fully leveraged. Even the developer tools for HANA and ABAP will move closer together (but that’s a topic for another day).

 

With that change in direction in mind, I started reading some books on SQL this week. I want to grow my SQL skills beyond what is required in the typical ABAP environment, as well as refresh my memory on things that can be done in SQL but that I perhaps haven’t touched in a number of years. Right now I’m working through O’Reilly’s Learning SQL, 2nd Edition, by Alan Beaulieu. I’ve found that I can study the SQL specification of HANA all day, but recreating exercises forces me to really use and think through the SQL. The book I’m currently studying lists all of its SQL examples formatted for MySQL. One of the more interesting aspects of this exercise has been adjusting these examples to run within SAP HANA and, more importantly, changing some of them to be better optimized for columnar, in-memory storage. I think I’m actually learning more by tweaking examples and seeing what happens than from any other aspect.

 

What’s Next

There are actually lots of aspects of HANA exploration that I can’t talk about yet. While learning the basics and mapping ABAP development aspects onto a future that includes HANA, I also get to work with functionality which is still in the early stages of development. That said, I will try to share as much as I can via this blog over time. In the next installment I would like to focus on my next task for exploration – SQLScript.
