
The recent and significant increase in negative messaging Oracle is attempting to spread about SAP HANA is pretty amazing. Traditionally at SAP, we’ve taken the high road when it comes to responding to these claims, as nearly 100% of what Oracle says is inaccurate and designed with one thing in mind: to protect their established revenue base. All you have to do is look at what Oracle has stated about the cloud over the last 10 years to understand their plan of attack for in-memory computing:

 

Step 1: Halfheartedly acknowledge its presence

Step 2: Continue to push old technology

Step 3: Create fear and doubt related to new innovations

 

We see the same, tired response plan from Oracle emerging for in memory and SAP HANA.

 

Why would Oracle want to innovate anyway? There are billions of dollars’ worth of reasons why Oracle must protect their legacy database technology. Oracle has boxed itself into a corner, where they can’t afford to cannibalize their existing revenue stream, and at the same time are obligated to push massive amounts of hardware from their Sun acquisition – effectively amounting to a digital albatross hanging around their neck.

 

We should also let it be known that Oracle’s whole comparison of this Exalytics bundle/package is just a red herring for customers. It’s an attempt to imply that HANA isn’t ready to compete with Exadata or Oracle’s core database. It’s an attempt to imply HANA is limited to analytic business scenarios.

 

Bottom line for Oracle: there is no incentive to truly innovate. Is innovation loosely coupling and stacking an RDBMS + TimesTen + Exadata + Endeca on expensive hardware? Clearly they would rather repackage 20-year-old, tired products on refrigerator-sized servers and overcharge customers for it.

 

In this blog, I will provide a set of facts about SAP HANA and in memory computing. I will try my best to be objective throughout and dispel some of the silliness Oracle is spreading. 

 

Let’s take a look at the inaccurate comparisons and misconceptions floating around:

 

  1. Database feature/functionality comparisons of TimesTen vs. SAP HANA
  2. Scenario comparisons of Exalytics vs. SAP HANA
  3. Pricing comparisons between HANA and Exalytics

 

#1: Comparing database feature/functionality:

Oracle is attempting to compare TimesTen to SAP’s HANA database. There are as many as 14 attributes that they claim HANA lacks, including in-memory aggregates, multi-dimensional OLAP (MOLAP), in-memory indexes, NUMA support, etc. Ironically, most of these are cumbersome mechanisms that Oracle has patched onto their RDBMS to improve performance but that, thanks to HANA’s innovation, become unnecessary overhead and maintenance tasks that customers will be glad to leave behind.


To me, this comparison at a very basic level is exactly what I would have expected to hear from the horse and carriage salesmen when automobiles were first introduced. Oracle is essentially arguing that our “car” is not as good because it doesn’t come with feed for the horses, or a large bucket and shovel that customers can use to clean up their mess.

 

A few facts:

 

  • SAP HANA is a fully ACID compliant database
  • HANA’s design manages and accesses data completely in RAM, allowing for speedy retrieval over massive data volumes and addressing the big data problems of today and the future.
  • HANA does away with the need for MOLAP and tuning structures such as multiple indexes, aggregates, and materialized views, saving the costly time it takes to build and maintain such structures.
  • HANA leverages parallel queries to efficiently scale out across server nodes as proven in our April 10 scalability testing announcement in which 100 TB of data partitioned across 16 nodes was queried with sub-second response time. Access the white paper on this testing here: https://www.experiencesaphana.com/docs/DOC-1647.
  • HANA handles both unstructured and structured data and has since its inception. 
  • HANA handles both SAP and heterogeneous data equally well.
  • Oracle has not demonstrated how Exalytics with TimesTen can scale out beyond the 1TB limit and Oracle has publicly stated that the usable memory in this configuration is about 300GB.
  • HANA does in fact support ANSI standard SQL syntax, as well as MDX. Just like Oracle extended the ANSI standard with their PL/SQL procedural language, SAP has extended the ANSI standard support in HANA with SQLScript, a procedural language that allows you to write programs with logic not possible to implement in the single-statement SQL language.
  • HANA allows you to manage your data as you choose in either column stores, row stores, or a combination of the two (plus other stores and models). Oracle’s claim that you must load data into the row store and then migrate it to the column store is false, as is their claim that columnar data must be migrated back to the row store to be updated and then back to the column store to be queried. This is simply not how HANA works at all.

 

The bottom line: HANA is a next generation solution that replaces many of the tired, legacy products that Oracle continues to re-label as “innovative”.

 

#2: Comparison of use case scenarios between HANA and Exalytics

The easiest way to find out how HANA is transforming our customers’ businesses is by visiting https://www.experiencesaphana.com/community/implement. Anyone can publicly see the proof: compelling business case after business case where customers are consolidating IT systems and delivering breakthrough business value with a lower TCO than Oracle.

 

I have a moral objection to this whole comparison Oracle attempts, as the mere existence of Exalytics is just a diversionary tactic. Oracle doesn’t want HANA digging into their RDBMS business or Exadata for that matter, ergo the attempt I mentioned at the beginning of this blog.

 

Let me state (again) that SAP HANA natively supports analytical functions (e.g. datamarts such as T-Mobile’s customer micro-segmentation analysis across millions of customers, spanning SAP and non-SAP application systems), business functions, planning functions and predictive functions (SAP BusinessObjects Predictive Analysis and the Predictive Analysis Library in HANA), as well as transactions (e.g. the upcoming SAP ERP on HANA). To do this with Exalytics you need TimesTen, Essbase, Endeca, the Oracle RDBMS, etc. This amounts to more money for Oracle and no business breakthrough for the customer...not exactly a “win-win”.

 

What Oracle doesn’t tell you about SAP HANA is that it addresses scenarios that were not feasible before, even with an armada of legacy tools – for example with the business function library in HANA it is now possible, using standard SQL, to execute in-database processes and functions that could not be written in SQL before.

 

With respect to SAP BW customers, Oracle has argued that customers would have to re-code BW applications to work on HANA. This is simply not true, and we’ve published numerous public statements from our customers that have already migrated to BW running on HANA in production.

 

#3 – Pricing comparisons

Oracle has tried to publicly compare Exalytics pricing to HANA, and the information presented is grossly misleading. Not only is SAP HANA less expensive in up-front cost than Oracle’s Exadata + Exalytics bundle (plus all the derivative components you’ll need to make it work), the differences in total cost of ownership are also substantial.

 

We didn’t just innovate on the technology platform with SAP HANA, we innovated on pricing as well, with a straightforward, simple-to-understand model based solely on the amount of data held in memory (unlike Oracle, which charges per CPU plus test and development environment fees). A single unit of HANA (1 HANA unit = 64 GB of RAM) includes the FULL production, test and development licenses a customer needs. It also includes the data modeling and management tools needed to get data into HANA and actually use the product. Even better, HANA gets cheaper over time...the more you buy, the lower the list price per unit becomes.

 

Our customers running SAP Business One can purchase a HANA license for as little as €2k. Any customer can purchase a license of SAP HANA Edge Edition for €40k. We also have the SAP HANA NetWeaver BW edition for as little as €13k per HANA unit (64 GB of RAM = 1 unit).

 

I am certain Oracle would cite that hardware is extra, so let’s cover it right now. HANA servers from certified partners like Fujitsu are available for as little as $12K. That’s because we don’t force our customers into one hardware stack and then overcharge for it. We have certified partners like IBM, HP, Dell, and the list goes on and on for HANA. We believe that the Intel platform, combined with the continued commoditization cycle we’ve seen for the last 40 years in computing, will win...period.

 

Even with the rapid growth of data, 95% of enterprises use between 0.5 TB and 40 TB of data today. For this market, at the low end (0.5 TB) the combined cost of hardware and software is approximately $500K, and at the high end the pricing is comparable to Exalytics alone today. In a recent test, we ran a 16-node cluster against 100 TB of uncompressed data; read more in the SAP HANA Performance Whitepaper.

 

Here are a few facts about HANA pricing you should feel free to share with your Oracle sales rep:

 

  • HANA pricing is inclusive of everything you need, unlike Oracle, which charges for database licenses and Exadata storage as well as Grid licensing, Partitioning, OLAP, Diagnostics & Tuning Pack, Grid Control, etc.
  • Oracle charges for non-productive environments while SAP doesn’t for HANA. That matters, given that SAP landscapes have anywhere from three to nine instances across development, QA, production, etc.
  • HANA operates efficiently on the most granular level of detail in your data model, with no extra indexes and no aggregates. Both must be built in Oracle Exadata for performance and count toward the space you pay for, so you are paying additional fees for “fine tuning”.
  • Planning functions, business functions, predictive capabilities, full-text search capabilities, etc. are all included in the price of HANA. These are things that Exadata doesn’t offer out of the box. Again, you’ll have to purchase a number of additional Oracle products to accomplish these critical functions.
  • Runtime versions of HANA (e.g. the Database Edition for BW) are priced substantially lower than Exadata and deliver a significantly better price/performance ratio.
  • The more HANA you buy, the less expensive the list price per unit becomes.
  • You can scale in 64 GB increments with HANA using industry-standard servers vs. obligating yourself to Oracle’s ¼, ½ or full rack license, the hardware for which is limited to running Oracle software.
  • SAP offers a BWA to SAP HANA license conversion incentive, which is not available from Oracle today.

 

As I said when I started this blog, SAP doesn’t usually comment on competitor FUD, but I wanted to pause and be clear. SAP will win this market because we rely on facts and figures, real performance and customer success.

 

We are going to continue to deliver real innovation and let customers decide who can better help them build the future, not continue to invest in re-packaging the past!

What are the big challenges for large manufacturers today? To stay competitive in the global market, they have to increase market share by offering product variety and volumes and exploring new markets. They need to control cost to stay profitable, yet fulfil demand in a timely manner. They have to achieve this in the face of unpredictable events such as the March 2011 tsunami in Japan. For instance, the auto industry was affected across the world after the tsunami. That caused disruptions but did not stop growth of new markets. One of the new markets the auto manufacturers have focused on is India. Twenty years ago, customers had a choice of only three models and makes of cars in India; today, dozens of choices are available because all the major auto manufacturers are actively expanding business in India.

 

Manufacturers need flexibility and speed to stay competitive. Business processes such as demand management and planning and scheduling have to be capable of near real-time response. To meet demand quickly and accurately, the response lead time at every stage of planning and production has to be kept short. Since there is huge product and feature variety, data volumes can be enormous. In the case of automotive manufacturing, a single vehicle can be offered in thousands of configurations with annual sales of millions of units. This is where optimization and big data come together.

 

Together, Optessa and HANA offer an optimal solution for manufacturers who deal with large amounts of data and yet need to optimize planning and scheduling of production. Optessa offers proven technology to model manufacturing environments at a high resolution and provides optimal solutions to complex planning and scheduling problems. HANA addresses the key challenge of marshalling big data from different sources and maintaining a live in-memory data model of the demand – manufacturing – supply chain. Optessa on HANA is a unique offering: it is on-demand optimization of live data models for the fast paced manufacturing environment.

 

For more information on Optessa planning and scheduling products please visit www.optessa.com or e-mail info@optessa.com. Optessa is part of the SAP HANA Startup Focus program and will be at Sapphire NOW as one of the participants in the program. Look for us at the HANA Test Drive area in the D&T campus.

Pretend for a moment that a candid review of your product was just posted online, and it’s actually kind of funny. However, you don’t see it at all, but millions of others do.

 

The explosion of the social web has given each individual the ability to broadcast.  But more importantly, it gives brands the opportunity to connect with these people.   Incredible volumes of social commentary are being shared about your brand at social speed.  Social broadcasters can reach audiences of millions, while content-hungry outlets can turn small events into major news.

 

Staying on top of this incredible volume of data would require rooms full of servers and an army of social media managers…until now. 

 

Introducing Passenger. Using the in-memory capabilities of SAP’s HANA, Passenger unlocks revolutionary advancements in converting social activity into actionable results, giving you the ability to target people who can influence and impact your bottom line.

 

Jas Dhillon, Passenger’s Chief Product Officer, says, “With SAP’s HANA, we can finally analyze this incredible volume of social data at speeds that are 10,000x faster than previous methods. Not to mention the HANA appliance accomplishes what would take nearly a dozen machines and much more time and resources to accomplish before.”

 

Passenger’s AudienceID allows you to identify individuals talking about your brand, your products, your category at social speed and invite them to join you in a private conversation to explore these topics in more depth.   Using Passenger’s SaaS (Software as a Service) Community products, you have the power to engage these high-profile individuals to serve as the inner circle to your social circle.   This unique approach will turn customers into advisors and ambassadors.

 

However, your community shouldn’t stop there.  These same advisors and ambassadors can serve as a co-creation engine and inform nearly every business decision you make (but that’s a topic for another post.)

 

Companies today need next generation tools in order to keep up with the new social economy. Passenger + SAP HANA enable the powerful connections between broadcasters and brands by combining real-time targeting from Big Data and qualitative insights from Private Communities.

 

For more info on Passenger’s social listening and community products, visit thinkpassenger.com.  For more information on SAP’s HANA, visit www.experiencesaphana.com.  Hear more about Passenger’s AudienceID powered by SAP’s HANA at SAPPHIRE NOW. We are a participant in the SAP Startup Focus program, look for us at the HANA Test Drive area in the D&T campus.

To handle the massive amounts of profitability data, many of our customers aggregate their profitability data and/or use BW to speed up the analysis of data. If you or your customers

 

  • Have performance issues with allocations,
  • Want to improve the structure and content of reports, data model, etc., or
  • Are interested in discovering further options to improve the profitability analysis potential,
  • Don’t want long-running software projects before you see a return on your investment,

 

Then you should definitely discover the advantages of the SAP ERP rapid-deployment solution for profitability analysis with SAP HANA!

 

This rapid-deployment solution provides a quick and low-risk implementation of the CO-PA Accelerator. The CO-PA Accelerator enables real-time access to massive amounts of profitability data and accelerates the run times for cost allocations. Data aggregations will no longer keep you from reaching the level of detail in profitability analysis your company needs; the boundaries imposed by aggregation and out-of-date data in BW are broken. This is made possible by the enhanced CO-PA read module using SAP HANA as a secondary database. Thanks to the non-disruptive deployment of the solution, end users can continue to work in their familiar environment. For details on the CO-PA Accelerator, have a look at the great blogs from Carsten Hilker on the technical background and business benefits.

 

In addition to the speed and the new and profound possibilities for profitability analysis the CO-PA Accelerator delivers, the corresponding rapid-deployment solution offers you the following advantages:

 

  • FAST TO DEPLOY, COMPETITIVELY PRICED and BUILT FOR SUCCESS
  • Rapid-deployment – a fixed-scope implementation service at a predictable price in as little as 3-5 weeks.
  • Predictable costs – Low, predetermined fees and clearly outlined deliverables mean no hidden fees for the delivered solution.
  • Foundation for growth – You are able to expand the SAP ERP rapid-deployment solution for profitability analysis with SAP HANA individually, e.g. by adding SAP BusinessObjects tools as a reporting front end to enable a reporting environment that is flexible and, with its graphical self-service analytics, highly end-user friendly.

 

This rapid-deployment solution delivers a ready-to-run solution with a low-risk implementation. As an end customer, you don’t need to worry about special tool knowledge.

 

If you wish not only to accelerate the existing reports and transactions but also to create new, flexible and easy-to-use reports on top, you might want to use the SAP BusinessObjects tools. For this, you will of course need knowledge of these tools. This solution will give you a new experience of your profitability reporting in only 3-5 weeks of implementation time! Also have a look at the solution on our SAP website, where you will find some more information.

Most of you in the BW community have heard of the impending release of SAP NetWeaver BW on the SAP HANA database platform. Most of this discussion focuses on performance gains and features which will make life easier through improved query execution and results return, and optimized business processes and decisions related to those queries. What is not so often discussed are a number of new improvements in SAP BW 7.3 which, coupled with SAP HANA, also result in some impressive TCA (Total Cost of Administration) reductions.

 

Chief among the improvements from an IT/administrator’s perspective are performance gains related to Enterprise Data Warehousing constructs within SAP BW. This side of the discussion centers largely on the DataStore Object (DSO) data container within SAP BW. The DSO in BW on HANA has a revised data model and activation process which takes significant advantage of SAP HANA’s columnar data processing capabilities. During the activation phase of in-memory enabled DSOs, SAP BW running on SAP HANA has shown dramatic improvement in data load times. This falls directly in line with the anticipated gains from columnar-based data management and, especially, SAP HANA. The reason is that one of the most processing-intensive steps in the data load phase of the DSO is the check against existing values in the DSO to either calculate a delta (net change) or insert the record as new.
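
To make the cost of that check concrete, here is a deliberately simplified, self-contained ABAP sketch of the idea (an illustration only, not the actual BW activation code; the structure and names are invented): every record arriving in the activation queue has to be looked up against the already-active data to decide between “new record” and “net change”.

  TYPES: BEGIN OF ty_rec,
           doc_number TYPE n LENGTH 10,
           amount     TYPE p LENGTH 15 DECIMALS 2,
         END OF ty_rec.

  DATA: lt_queue  TYPE STANDARD TABLE OF ty_rec,   " activation queue (new load)
        lt_active TYPE HASHED TABLE OF ty_rec
                  WITH UNIQUE KEY doc_number,      " data already active in the DSO
        lt_delta  TYPE STANDARD TABLE OF ty_rec,   " records to be activated
        ls_new    TYPE ty_rec,
        ls_old    TYPE ty_rec,
        ls_delta  TYPE ty_rec.

  LOOP AT lt_queue INTO ls_new.
    ls_delta = ls_new.
    READ TABLE lt_active INTO ls_old
         WITH TABLE KEY doc_number = ls_new-doc_number.
    IF sy-subrc = 0.
      " key already active: pass on only the net change (delta)
      ls_delta-amount = ls_new-amount - ls_old-amount.
    ENDIF.
    " sy-subrc <> 0: brand-new key, pass the record on as an insert
    APPEND ls_delta TO lt_delta.
  ENDLOOP.

Executed row by row over millions of records on a classic disk-based row store, this key lookup dominates activation time; the revised in-memory, column-based activation in BW on HANA targets exactly this step.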

 

DSO performance improvements also carry over to the Semantically Partitioned Object (SPO) available with SAP BW 7.3, including SAP BW on HANA. This is a key ingredient in the so-called scale-out scenarios anticipated for large-scale SAP BW implementations. SPO implementation has long been understood as a best practice for SAP BW implemented as an Enterprise Data Warehouse. Many customers implemented an SPO-like capability in releases prior to SAP BW 7.3 as part of their implementation along the guidelines of the Layered Scalable Architecture (LSA). Starting with SAP BW 7.3, SPO is a native capability of SAP BW. With SAP HANA as the underlying database, implementing SPOs for LSA-prescribed vertical/logical partitioning allows SAP BW to take further advantage of database partitioning capabilities. This feature will definitely mature over time as we gain further experience in Ramp-Up with SAP HANA scale-out capabilities applied to SAP BW.

 

On the front end, SAP BW 7.3 also provides a number of Agile BI capabilities. Our early adopter customers have provided very positive feedback on the agility and applicability of the BW Workspaces functionality. Workspaces allow local (Excel, CSV) data from a user’s computer to be integrated, under administrator control, with EDW data existing in BW. They provide a controlled environment where users can work within boundaries defined by IT, but introduce some modeling flexibility which is typically hard to achieve due to governance policies in an EDW environment. Workspaces are also the integration point for BW-hosted data and SAP HANA-hosted “non-SAP” data (e.g. merging EDW data in BW with non-SAP data mart data hosted in HANA data models). Overall, this is a feature which BW-experienced IT resources should investigate as part of the value proposition for upgrading to SAP BW 7.3 running on SAP HANA.

 

Overall, implementing SAP BW on HANA simply provides further capabilities associated with SAP BW as a Data Warehouse Management System (DWMS).  SAP BW has long provided significant value-add to IT departments looking to build out an Enterprise Data Warehouse versus building-by-hand in a traditional relational database environment.  Features like SAP BW Business Content, SAP BASIS Software Logistics and Database tools, SAP NetWeaver Application Server optimization and scaling capabilities and SAP BW’s capability as an Application Platform for SAP applications provide a full set of reasons for IT Administrators to select SAP BW as a preferred Enterprise Data Warehousing environment.  Implementing SAP HANA simply builds on these traditional advantages by bringing in-memory technologies to bear on the EDW and providing major opportunities for further optimization which are simply not possible in traditional relational database environments.

 

Make sure you check out all the additional info on SCN for further news of SAP NetWeaver BW powered by SAP HANA!

One of the challenges facing social media analysis today is manipulating the sheer volume of data. One can imagine the task of sorting, correlating and matching the massive amount of information that spans multiple social channels such as Twitter, Facebook, LinkedIn, blogs, and other social sites and then presenting the results nearly instantaneously. That's the challenge being addressed by NextPrinciples and SAP's HANA team for the upcoming SAPPHIRE NOW conference in Orlando 2012.


NextPrinciples has created solutions that enable sales and marketing teams to expand their view of the customer and offer benefits that companies and consumers have dreamed of but that previously have been out of technical reach.


The combination of NextPrinciples’ Insight-to-Action Social CRM and SAP’s HANA in-memory database allows social profiles to be imported into SAP HANA and combined with business partner master data from SAP CRM. This approach combines traditional enterprise views of the customer with enhanced social views, creating what NextPrinciples refers to as the Uber Profile.


Insight-to-Action aggregates, analyzes, and acts on social media interactions across multiple social channels to determine influencers in a particular space.  The combined solution enables users to analyze aggregated information from multiple social media channels, identify influencers and persons of interest and bring this social view into SAP CRM to augment the enterprise view of an existing customer or add a record of a new prospect.

This enhanced view allows for users of SAP CRM to use social attributes (e.g. number of followers on Twitter) to carry out multi-dimensional segmentation for campaigns.


The 300x performance boost provided by SAP HANA is the reason that the solution can quickly carry out matching between CRM and Social Media data and perform segmentation at high speeds.  This capability provides a low time to value for users of SAP CRM.

 

One can imagine taking this capability a step further and carrying out real-time analytics in a multitude of retail scenarios, for example when a customer calls in to a help desk or walks into a retail store. Once again the power of SAP HANA enables the solution to handle significant volumes of social media data and perform sophisticated network analysis in real time in order to provide up-to-date profile information on customers, events, and communities. The power of SAP HANA is enabling new types of services to hit the market and we are excited to pioneer such solutions.


Interested in learning more? Visit NextPrinciples at SAPPHIRE NOW May 14 – 16 to experience the solution first hand. NextPrinciples is a participant in the SAP Startup Focus program, look for us at the HANA Test Drive area in the D&T campus at SAPPHIRE NOW.


We want to hear from you. Tell us how you would use this capability.

 

“At this very moment, there’s an odds-on chance that someone in your organization is making a poor decision on the basis of information that was enormously expensive to collect.”

 

While not exactly revolutionary, this statement should make you sit up and take notice. I did. This is a direct quote from an article titled, “Good Data Won’t Guarantee Good Decisions,” that appeared in the April 2012 issue of the Harvard Business Review (by Svetank Shah, Andrew Horne and Jaime Capella).

 

In this interesting article the authors set out to make the case that most companies have too few analytics-savvy workers, and there is a strong need to develop them so that there are sufficient “informed skeptics” to make good decisions based on good data. The authors should be commended for pointing out the process and people aspects of good decisions, but the need for good data itself does not get diminished at all, not to mention the need to obtain this data in an economical manner.

 

In my experience, the reason we often face a paucity of informed skeptics in any organization is because their need has not been adequately exposed. Too often, poor decisions have been blamed on poor data, or, to be more precise, poor analytics based on crunching ever-increasing volumes of data. Thus, the need for such decision makers has often been masked by the problem of lacking the ability to deal with big data. And, this problem has persisted because historically it has often been “enormously expensive,” or even prohibitively so, to get to good data.

 

This is why I am excited about this people/process benefit that can accrue from a successful implementation of SAP HANA. The ability to crunch large volumes of data in memory will let you massage your data any which way you desire, producing a wide range of fresh analytics possibilities. The processing power and speed will address the problem of good data being “enormously expensive to collect.” Armed with this kind of unprecedented look at your enterprise data, you will be able to tell if you have the right people asking the right questions or processing the answers in the right manner. Decision-making will no longer be limited by data quality, processing power, or the economics of crunching big data. You will be able to establish an “A” team so you can compete better – largely because of your new-found ability to identify whether or not you have the right skills addressing the output of your analytics exercise.

 

If you are waiting to make your In-Memory/SAP HANA decision, wait no longer; as I mentioned in an earlier blog post (Café Innovation – Imitation is the sincerest form of flattery, on SCN), “The market, including SAP's competition, has responded with an endorsement of the values that are foundational to the notion of SAP HANA.” It is not a matter of debate anymore, whether or not In-Memory computing is here to stay. You merely need to make the decision about how soon you wish to embark on it.

 

SAPPHIRE NOW in Orlando, starting on May 14, 2012, is one great opportunity to learn how this movement is growing and what you can/should do to accelerate your own initiative. Come, see, and conquer (your competition).

 

P.S. You can follow my other blog posts at: Café Innovation Blog on the SAP Community Network (SCN)

Hi everyone

 

Just wanted to give you a quick update on what to expect from the SAP HANA Test Drive at SAPPHIRE NOW this year. Last year, the test drive was a massive success. It was bigger and more innovative than anything we’ve ever done at Sapphire, and attendees were thrilled. The test drive had the most visitors of any SAP area of the show and people spent HOURS hanging out, playing with the system, talking to experts, etc. Truly a huge hit.

 

Well, in the SAP world, success like that is punished with higher expectations and more work the next year.  So, this year, the SAP HANA Test Drive will have DOUBLE the space we had last year, DOUBLE the staff to show demos and answer technical questions, DOUBLE the SAP HANA servers on the show floor.  Plus, we've added two dedicated technical tables for any of the "geeks" to come by and dig into the code.  We're also building a completely new display unit to show off everything.  We'll have live SAP HANA servers from all 7 certified hardware partners:  Cisco, Dell, Fujitsu, Hitachi, HP, IBM and NEC.  Plus, a super cool SAP HANA motherboard from the Intel factory.

 


 

 

Here’s the short list of the test drive stations. Stop by and take SAP HANA for a spin:

 

  • SAP BW on HANA
  • SAP HANA & Big Data
  • SAP HANA Ecosystem
  • Real Time Business Apps on SAP HANA
  • Analytic & Planning Apps on SAP HANA
  • SAP HANA Technical (Admin/Ops/Data Provisioning/Data Modeling/ABAP/JAVA/BOBJ/SQLScript)
  • SAP HANA Architecture & Deployment
  • BI/Analytics on SAP HANA

As many of you have probably heard already, SAP NetWeaver BW Powered by SAP HANA reached General Availability last week. This is incredible news, as all customers are now able to migrate their existing BW system to BW on HANA or start with a new BW on HANA installation. One of the main benefits of BW Powered by HANA is the tremendous improvement in reporting performance. Since all data in the entire BW on HANA system resides in memory, we have seen reports running more than 400 times faster during ramp-up!

 

BWA versus BW Powered by HANA

In 2006 SAP released the BW Accelerator (BWA) as a side-by-side in-memory appliance to speed up query performance for BW reporting. Since then many customers have invested in this solution and constantly increased end user productivity by accelerating reporting on top of BW with BWA. From a reporting perspective it might look like BWA provides the same functionality as BW Powered by HANA. However, there are quite a few differences between these two solutions. With BWA only a subset of data can be accelerated, and there is an additional effort required to replicate that data from the relational database of the BW system to BWA. With BW on HANA all data is automatically held in memory, and thus every report runs much faster than before without any additional work required. BWA is no longer required, which simplifies your IT landscape. Furthermore, with BW on HANA data loads within a BW system are accelerated as well. That means the entire data provisioning process is much faster, not just the reporting on top of individual InfoProviders. BWA doesn’t help to speed up planning processes, nor does it support easier data modeling and remodeling. However, BW on HANA provides great improvements in those areas. BWA was introduced to speed up reporting performance only, whereas with BW on HANA the entire system is powered by an in-memory platform that gives access to real-time data from SAP and non-SAP sources.

 

What’s next?
Going forward, BW customers have a choice when it comes to solutions that improve the performance of their BW systems. They can continue with BWA or move to BW on HANA. SAP will continue to offer BWA as a standalone solution, and there are plans for a next-generation BWA. Our general recommendation, however, is to move to BW Powered by HANA, as it offers the best solution for building a supercharged data warehouse with SAP NetWeaver BW. For those customers who have already made a BWA investment and want to move to BW on HANA, SAP will offer a license conversion program to ensure their BWA investment is protected.
All our efforts are focused on continuing the great work of further integrating the SAP HANA platform with BW to ensure that we offer the best BW data warehouse for our SAP customers.

Introduction

In the first edition of this HANA Developer’s Journey I barely scratched the surface of some of the ways in which a developer might begin their transition into the HANA world. Today I want to describe a scenario I’ve been studying quite a lot in the past few days: accessing HANA from ABAP in the current state. By this, I mean what can be built today. We all know that SAP has some exciting plans for ABAP-specific functionality on top of HANA, but what everyone might not know is how much can be done today when HANA runs as a secondary database for your current ABAP-based systems. This is exactly how SAP is building the current HANA Accelerators, so it’s worth taking a little time to study how these are built and what development options within the ABAP environment support this scenario.

 

HANA as a Secondary Database

The scenario I'm describing is one that is quite common right now for HANA implementations.  You install HANA as a secondary database instead of a replacement for your current database.  You then use replication to move a copy of the data to the HANA system. Your ABAP applications can then be accelerated by reading data from the HANA copy instead of the local database. Throughout the rest of this blog I want to discuss the technical options for how you can perform that accelerated read.

(Diagram: data replicated from the local database of the ABAP system to the secondary SAP HANA database)

ABAP Secondary Database Connection

ABAP has long had the ability to make a secondary database connection. This allows ABAP programs to access a database system other than the local database. This secondary database connection can even be of a completely different DBMS vendor type. This functionality has been extended to support SAP HANA for all NetWeaver release levels from 7.00 onward. Service Notes 1517236 (for SAP internal use) and 1597627 (for everyone) list the preconditions and technical steps for connecting to HANA systems and should always be the master guide for these preconditions; however, I will summarize the current state at the time of publication of this blog.

 

Preconditions

  • SAP HANA Client is installed on each ABAP Application Server. ABAP Application Server Operating System must support the HANA Client (check Platform Availability Matrix for supported operating systems).
  • SAP HANA DBSL is installed (this is the Database specific library which is part of the ABAP Kernel)
  • The SAP HANA DBSL is only available for the ABAP Kernel 7.20
    • Kernel 7.20 is already the kernel for NetWeaver 7.02, 7.03, 7.20, 7.30 and 7.31
    • Kernel 7.20 is backward compatible and can also be applied to NetWeaver 7.00, 7.01, 7.10, and 7.11
  • Your ABAP system must be Unicode

 

Next, your ABAP system must be configured to connect to this alternative database. You have one central location where you maintain the database connection string, username and password. Your applications then only need to specify the configuration key for the database, making the connection information application independent.

 

This configuration can be done via table maintenance (Transaction SM30) for table DBCON. From the configuration screen you supply the DBMS type (HDB for HANA), the user name and password you want to use for all connections and the connection string. Be sure to include the port number for HANA systems. It should be 3<Instance Number>15. So if your HANA Database was instance 01, the port would be 30115.

(Screenshot: DBCON entry maintenance via transaction SM30)

DBCON can also be maintained via transaction DBACOCKPIT. Ultimately you end up with the same entry information as in DBCON, but you get a little more detail (such as the default schema) and you can test the connection from here.

(Screenshot: connection details and connection test in DBACOCKPIT)

Secondary Database Connection Via Open SQL

The easiest solution for performing SQL operations from ABAP to your secondary database connection is to use the same Open SQL statements which ABAP developers are already familiar with. If you supply the additional syntax of CONNECTION (dbcon), you can force the Open SQL statement to be performed against the alternative database connection.

 

For instance, let’s take a simple Select and perform it against our HANA database:

  DATA lt_sflight TYPE STANDARD TABLE OF sflight.

  SELECT * FROM sflight CONNECTION ('AB1')
    INTO TABLE lt_sflight
    WHERE carrid = 'LH'.

 

The advantage of this approach is in its simplicity.  With one minor addition to existing SQL Statements you can instead redirect your operation to HANA. The downside is that the table or view you are accessing must exist in the ABAP Data Dictionary. That isn't a huge problem for this Accelerator scenario considering the data all resides in the local ABAP DBMS and gets replicated to HANA. In this situation we will always have local copies of the tables in the ABAP Data Dictionary.  This does mean that you can't access HANA specific artifacts like Analytic Views or Database Procedures. You also couldn't access any tables which use HANA as their own/primary persistence.

 

Secondary Database Connection Via Native SQL

ABAP also has the ability to use Native SQL. In this situation you write your own database-specific SQL statements. This allows you to access tables and other artifacts which only exist in the underlying database. There is also syntax in Native SQL that allows you to call database procedures. If we take the example from above, we can rewrite it using Native SQL:

 

  DATA: ls_sflight TYPE sflight,
        lt_sflight TYPE STANDARD TABLE OF sflight.

  EXEC SQL.
    connect to 'AB1' as 'AB1'
  ENDEXEC.
  EXEC SQL.
    open dbcur for select * from sflight where mandt = :sy-mandt and carrid = 'LH'
  ENDEXEC.
  DO.
    EXEC SQL.
      fetch next dbcur into :ls_sflight
    ENDEXEC.
    IF sy-subrc NE 0.
      EXIT.
    ELSE.
      APPEND ls_sflight TO lt_sflight.
    ENDIF.
  ENDDO.
  EXEC SQL.
    close dbcur
  ENDEXEC.
  EXEC SQL.
    disconnect 'AB1'
  ENDEXEC.

 

It’s certainly more code than the Open SQL option and a little less elegant because we are working with database cursors to bring back an array of data. However, the upside is access to features we wouldn’t have otherwise. For example, I can insert data into a HANA table and use a HANA database sequence for the number range, or built-in database functions like now().

 

    EXEC SQL.
      insert into "REALREAL"."realreal.db/ORDER_HEADER"
       values("REALREAL"."realreal.db/ORDER_SEQ".NEXTVAL,
                   :lv_date,:lv_buyer,:lv_processor,:lv_amount,now() )
    ENDEXEC.
    EXEC SQL.
      insert into "REALREAL"."realreal.db/ORDER_ITEM" values((select max(ORDER_KEY)
        from "REALREAL"."realreal.db/ORDER_HEADER"),0,:lv_product,:lv_quantity,:lv_amount)
    ENDEXEC.

 

The other disadvantage to Native SQL via EXEC SQL is that there are little to no syntax checks on the SQL statements which you create. Errors aren't caught until runtime and can lead to short dumps if the exceptions aren't properly handled.  This makes testing absolutely essential.
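
As a minimal illustration of that point (my own sketch, not part of the original example; the table name is deliberately bogus), the catchable exception class CX_SY_NATIVE_SQL_ERROR lets you handle such runtime errors gracefully instead of ending up in a short dump:

  DATA: lx_sql_error  TYPE REF TO cx_sy_native_sql_error,
        lv_error_text TYPE string.

  TRY.
      EXEC SQL.
        open dbcur for select * from this_table_does_not_exist
      ENDEXEC.
    CATCH cx_sy_native_sql_error INTO lx_sql_error.
      " react to the failed statement instead of short dumping
      lv_error_text = lx_sql_error->get_text( ).
      WRITE: / lv_error_text.
  ENDTRY.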

 

Secondary Database Connection via Native SQL - ADBC

There is a third option that provides the benefits of the Native SQL connection via EXEC SQL, but also improves on some of the limitations.  This is the concept of ADBC - ABAP Database Connectivity.  Basically it is a series of classes (CL_SQL*) which simplify and abstract the EXEC SQL blocks. For example we could once again rewrite our SELECT * FROM SFLIGHT example:

 

****Create the SQL Connection and pass in the DBCON ID to state which Database Connection will be used
  DATA lr_sql TYPE REF TO cl_sql_statement.
  CREATE OBJECT lr_sql
    EXPORTING
      con_ref = cl_sql_connection=>get_connection( 'AB1' ).
****Execute a query, passing in the query string and receiving a result set object
  DATA lr_result TYPE REF TO cl_sql_result_set.
  lr_result = lr_sql->execute_query(
    |SELECT * FROM SFLIGHT WHERE MANDT = { sy-mandt } AND CARRID = 'LH'| ).
****All data (parameters in, result sets back) is passed via data references
  DATA lt_sflight TYPE STANDARD TABLE OF sflight.
  DATA lr_sflight TYPE REF TO data.
  GET REFERENCE OF lt_sflight INTO lr_sflight.
****Get the result data set back into our ABAP internal table
  lr_result->set_param_table( lr_sflight ).
  lr_result->next_package( ).
  lr_result->close( ).

 

Here we at least remove the step-wise processing of the database cursor and instead read an entire package of data back into our internal table at once. By default the initial package size will return all resulting records, but you can also specify any package size you wish, thereby tuning processing for large result sets. Most importantly for HANA scenarios, ADBC also lets you access non-Data Dictionary artifacts, including HANA stored procedures. Given the advantages of ADBC over EXEC SQL, it is SAP’s recommendation that you always try to use the ADBC class-based interfaces.
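
To make that last point concrete, here is a hedged sketch of reading a HANA-only artifact through ADBC, in this case the column view that HANA generates for an Analytic View under the _SYS_BIC schema (the package and view names are invented, and the 'AB1' connection is reused from the examples above):

  TYPES: BEGIN OF ty_result,
           carrid  TYPE s_carr_id,
           revenue TYPE p LENGTH 15 DECIMALS 2,
         END OF ty_result.

  DATA: lt_result TYPE STANDARD TABLE OF ty_result,
        lr_data   TYPE REF TO data,
        lr_sql    TYPE REF TO cl_sql_statement,
        lr_result TYPE REF TO cl_sql_result_set.

  CREATE OBJECT lr_sql
    EXPORTING
      con_ref = cl_sql_connection=>get_connection( 'AB1' ).

* "mypackage/AN_FLIGHT_REVENUE" is a made-up Analytic View; its generated
* column view exists only in HANA, not in the ABAP Data Dictionary.
  lr_result = lr_sql->execute_query(
    |SELECT CARRID, SUM(PRICE) AS REVENUE | &&
    |FROM "_SYS_BIC"."mypackage/AN_FLIGHT_REVENUE" | &&
    |GROUP BY CARRID| ).

  GET REFERENCE OF lt_result INTO lr_data.
  lr_result->set_param_table( lr_data ).
  lr_result->next_package( ).
  lr_result->close( ).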

 

Closing

This is really just the beginning of what you could do with this Accelerator approach to ABAP integration into SAP HANA. I’ve used very simplistic SQL statements in my examples on purpose so that I could instead focus on the details of how the technical integration works. However, the real power comes when you execute more powerful statements (SELECT SUM ... GROUP BY), access HANA-specific artifacts (like OLAP views upon OLTP tables), or call database procedures. These are all topics which I will explore more in future editions of this blog.


 

In an earlier blog, I tried to distill some of the lessons I’ve learned the hard way about publishing. In this blog, I’ll try and explain a little bit about the very different approach to publishing we’re taking for the SAP HANA Essentials book that will be coming out in May.

 

Seth Godin, a very successful author and ebook pioneer, just wrote a blog about how he approached the launch of his latest book. I literally could just copy/paste what he wrote and put in the title of my book for this blog—it was that spot on with the approach we’re taking with the SAP HANA Essentials book.

 

The basic issue is that in 2012, getting an idea out to the world requires a completely different approach than in the past.  The speed of content development, digital delivery and digital reading have truly changed the way you disseminate knowledge to a wide audience.  You still have to do all the hard work of writing a really great book, but everything after that has been completely disrupted from how it was even 5 years ago.  Imagine converting all of Major League Baseball into cricket teams—it’s that big of a change.

 

For the SAP HANA Essentials book, we started writing in earnest in mid-January, but I had been speaking with traditional publishers for a few months before that to try and find a way to do the HANA book very differently than any of my other books. Publishers are in business to make money and that’s fine; however, the “profit motive” they have doesn’t always line up with the strategic goals that SAP has for knowledge dissemination. That’s OK for most books, but for SAP HANA, since we’re breaking all kinds of rules in the tech world, we figured that we might as well break a whole lot of rules in the publishing world as well.

 

 

Here are a few of the different goals and issues that drive the need for a different approach this time.

 

  • SAP HANA is a hugely strategic topic in the SAP ecosystem and knowledge is in short supply globally
  • SAP HANA is evolving at a mind-blowing pace and content must be constantly kept up-to-date
  • There’s lots of techie content, but it’s scattered around the web and not easily found
  • Nearly every SAP user, consultant and developer already has a Kindle, iPhone or iPad (if they don’t, they all have a PC)
  • SAP makes money selling software and services, not selling books
  • SAP already operates on a Spring/Fall corporate “education” clock with the Sapphire and TechEd events as anchors to the calendar

 

So, in order to deliver a book that “works” with all those factors, we had to do something radically different.

 

It’s actually quite fitting that we’re writing a book in “real time” about the engine that powers the “real-time enterprise”. Instead of waiting until all of the chapters are completed, we’re going to release the first few chapters in May and then release blocks of new chapters as they’re finished. So many people have told me how desperate they are to get the book as quickly as possible (for them and their teams) that, since we can’t get the whole thing completed until the end of the year, the best thing we can do is to get as much high-quality content out to everyone as quickly as it’s available, and not wait around until it’s completely finished to ship the whole thing.

 

Secondly, since we’re making the book available for free, we had to figure out a way to deliver it to the widest audience at the lowest cost.  Sadly, Amazon won’t let you permanently offer a book for free on their site and there are lots of restrictions on which countries can and can’t order books.  So, we’ve partnered with an ebook retailer in Germany to make the book available globally in both epub and kindle formats, completely free of DRM. We want a million people to get the info in the book, so locking it to one ereader or another with DRM is counterproductive.  Anybody on earth can go to the SAP HANA Essentials download site, put in a voucher code and get his or her preferred format for free.  They can read it on their laptop, iPad or Kindle instantly.  They can share it with colleagues easily.  No unnecessary friction to inhibit people acquiring and reading the book.

 

Thirdly, since it’s an ebook, we can update it frequently, with new chapters, changes in content, better links, etc.  In the print world, once it ships from the printer, it’s gone for good.  You have to do an expensive revision and reprint to change anything in a print book and it takes years to get one of those done.

 

So, in essence, we’re doing the SAP HANA Essentials book “faster, better and cheaper” than any SAP book in history, and if it works out like we’re hoping, it should be the “best selling” SAP book in history. But, since we’re not really selling it, I’ll settle for the “most widely read” title.

 

 

 


Running reports today is, for most organizations, a time-intensive practice that can impact the performance of the finance team in many other areas during period-end closing. The finance team and controllers feel like they are in a car race with a commuter car, while really needing a powerful machine to process and analyze large volumes of data and to complete the reports on time. Therefore, the speed and performance of the system impacts the performance of the teams. The situation calls for an accelerator pedal, floored to the max, to beat the reporting clock.

 

There are a few sticking points in the race for time with end-of-month financial reports. Many organizations run reports through their Enterprise Resource Planning (ERP) system, which typically runs on three- to four-year-old technology. It was fast then, but there are faster options now. For example, does your ERP system allow you to run and produce reports as fast as you want, with drilldown reporting based on the totals table(s)? Maybe you have challenges with BI reporting based on the totals table(s), or problems creating financial statements for period accounting. It could be that you have performance issues in line item reporting (transaction code FAGLL03), too, and want more speed. Performance problems can slow the entire organization down, run up the cost of reporting, and affect the decision-making that the reports support. The race, after all, can often go to those who make better decisions faster. If you caught yourself nodding, there is an SAP ERP package that runs on SAP HANA that will be of interest to you and your reporting dilemma.


SAP’s HANA rapid-deployment solution for accelerated finance and controlling supports finance organizations with faster access to vast amounts of ledger, cost and material ledger data, enhanced with business warehouse data sources and new virtual information providers. This enables an easy exploration of trusted and detailed data.


The rapid-deployment solution offers four fixed-scope implementation scenarios – financial accounting, controlling, material ledger and production cost analysis. Each can be implemented individually or in any combination. The quick-time-to-value implementation methodology enables customers to go live in as little as four weeks.

 

This approach lets you deploy the package in a non-disruptive way to your existing ERP system. We did some tests on the improvements we could reach with the accelerator. Let me give you some example numbers:

 

  • Financial Statements report RFBILA00, accelerated as RFBILA00N:
    • For 15,000 line items the runtime improved from 15.6 s to 5 s
    • For millions of line items the runtime improved from 45 min to 1.5 min
  • Accelerated reporting in Asset Accounting (FI-AA):
    • In a productive customer environment (300,000 assets, 8 million posted depreciations), measured runtimes decreased from 650 s to 1.5 s (a factor of roughly 400)

 

A detailed list of the transactions and reports accelerated by this solution is provided in the scope document attached to this blog. Details on the provided virtual InfoProviders and BW data sources are also attached. You will be surprised how many reports you can accelerate with SAP HANA using this rapid-deployment solution!

 

 

But what if your needs go beyond the scope of this rapid-deployment solution? Let me quickly describe what skills you would need to add further functionality to the solution.

 

To transform your customized ABAP reporting with SAP HANA, you simply point your programs at the corresponding tables in SAP HANA. That’s the first step in reporting with the SAP ERP rapid-deployment solution for accelerated finance and controlling with SAP HANA; the sketch below shows the basic pattern. To build up additional reports, you need SAP HANA studio and a little experience in how to work with it. If you not only want to accelerate the existing reports and transactions, but also create new, flexible and easy-to-use reports on top, just use the SAP BusinessObjects tools. To build new SAP HANA models, all you need to know is how your data originating in ERP is structured, along with some knowledge of the SAP HANA data structures. If you are new to SAP HANA, you will need to skill up on the cool SAP HANA studio tool, and then you can begin to build your models.
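
As a rough illustration of that first step (a sketch under assumptions: the accelerator runs SAP HANA as a secondary database, and 'HDB' is a made-up DBCON connection name), an existing Open SQL read of the new G/L totals table can be redirected to the HANA copy with nothing more than the CONNECTION addition:

  DATA lt_totals TYPE STANDARD TABLE OF faglflext.

* The CONNECTION addition is the only change needed to send this read
* to the HANA-side copy of the totals table.
  SELECT * FROM faglflext CONNECTION ('HDB')
    INTO TABLE lt_totals
    WHERE ryear = '2012'
      AND rldnr = '0L'.

From there, SAP HANA studio models and the SAP BusinessObjects tools described above take over for the new, flexible reports.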

 

This financial solution will surprise you with how easy it is to start working with the powerful SAP HANA. Your reports will be accelerated to unprecedented speeds, so you can leave those old reporting cycle times in the dust. 

HANA is SAP’s newest database technology, and perhaps the most far-reaching development for SAP in this decade. Would you like to know some of the deployment scenarios possible with HANA? Do you want to catch a glimpse of the overall HANA technical solution? Do you want to understand how an in-memory database deals with system failure? You may also want to know how you can load data into HANA, or how you can leverage HANA’s computation capabilities within your system. Are you interested in knowing how HANA supports data modeling and data analysis?


Now you can find basic answers to those questions in the latest technical overview white paper. This white paper sheds light on the different technological perspectives and provides a basic technical overview. It will give you a blueprint of HANA technology and will attempt to address as many topics as possible within its scope. A few new directions for HANA are also outlined in this paper, such as the new Information Composer tool, which allows you to build models much more easily and quickly, HANA scale-out capabilities, and HANA applications in the cloud. We hope that this document will help you to understand the basics of this new and cutting-edge technology.


For more details, we encourage you to pick up other HANA documents, such as Hasso’s book (“In-Memory Data Management – An Inflection Point for Enterprise Applications”) to gain more insight. You can also check another white paper, “SAP HANA™ for Next-Generation Business Applications and Real-Time Analytics”.

 

As Hasso said, HANA is a revolution, and this document is an entry point to this revolutionary new SAP chapter.

After one of the shortest and most successful Ramp Up periods in SAP’s history, BW Powered by SAP HANA has now reached General Availability (GA). This means that every BW customer can take advantage of running SAP HANA as the powerful in-memory database underneath their BW system to dramatically improve performance, speed up data loads, shorten the decision support window and streamline their IT landscape.

 

We started our Ramp Up period back in early Nov 2011 and signed up over 60 customers to put BW on HANA through its paces. And the feedback from these early adopters has been very positive. Here are quotes from some of these customers:

 

“We are so satisfied with the operation and speed of the entire BW on HANA implementation. According to end users' feedback, the speed of their standard BW reports  is very fast and has been improved greatly. In addition, the running of the whole production system is stable with  fast speed of the data extract, ETL, and incremental updates”

 

“With SAP HANA, we see a tremendous opportunity to dramatically improve our enterprise data warehouse solutions with SAP NetWeaver BW powered by SAP HANA, drastically reducing data latency and improving speed when we can return query results in 45 seconds versus waiting up to 20 minutes for empty results from SAP NetWeaver BW on Oracle.”

 

So with BW Powered by HANA now ready for GA, I encourage you to find out more about this new release by exploring ExperienceSAPHANA.com. We have uploaded some great content to help you learn, hear and see BW on HANA for yourself. Besides the BW on HANA blog series, we have also created a number of videos presented by SAP experts that will help you better understand some of the more technical aspects of running SAP HANA under your BW system. And we will continue to add more content to ensure you have a wealth of information to reference as you look to supercharge your BW data warehouse with SAP HANA.

The SAP HANA platform was created to be wicked fast and yet easy to administer—but most of all it was developed to run the mission-critical applications found in today’s enterprise as well as the solutions being developed by startups solving the problems of tomorrow. Thus far in SAP HANA’s short but impressive history we have had startup forums and developer boot camps, but today it was announced at the press conference in San Francisco that SAP is taking our commitment to SAP HANA and startups to a whole new level. SAP Ventures launched the Real-Time 100 Million Euro Fund specifically for startups who are addressing big data challenges leveraging SAP HANA as their application database.

 

SAP Ventures partners with outstanding entrepreneurs worldwide to build industry-leading businesses. They are known for finding innovative and disruptive companies, as well as proven ones, to invest in and partner with to fuel their growth. Companies now have the technology, through SAP HANA, to solve big data problems in seconds rather than days or weeks. With the new Real-Time Fund from SAP Ventures, SAP will help bring these new solutions to market and to our customers.

 

Keep your eye on this fund as it focuses on the best and brightest opportunities solving growing problems around big data, security, and social media trends. Now small-cap companies and startups can more fully embrace SAP HANA through SAP Ventures.

 

It’s a new day and a new way to change the world with SAP HANA and SAP Ventures.

Customer Performance Requirements

I was recently asked by a customer to recommend a solution that could solve the problem of performing analytics on data generated at ATMs and POS machines. The problem, in essence, was that while the data was being generated there was no way to query it on the fly. Traditional databases like Oracle, DB2, SQL Server, and others were not able to solve this problem with their current product sets, since these databases were not built for the purpose of analyzing large amounts of data online. The traditional databases also required constant tuning, indexing, and materializing of the data before you could run any sort of business intelligence query on it. Essentially, someone had to prepare the data and make it query-ready, and to say the least this costs a lot of time and money, which this particular customer was trying to avoid.

 

Linear Scaling for Big Data in Real Time

In my opinion the only way to do this was with an in-memory database like SAP HANA, which was built for the purpose of running analytics on live data. I did have some doubts about HANA’s scalability and asked SAP for guidance. They briefed me about a recent scale-out test of HANA in which they simulated 100 TB of actual BW customer raw data on an SAP-certified configuration: a 16-node cluster, each node with 4 IBM X5 CPUs (10 cores each) and 512 GB of memory. The test data consisted of a 100 TB database with one large fact table (85 TB, 100 billion records) and several dimension tables. 20x data compression was observed, resulting in a roughly 4 TB HANA instance distributed equally across the 16 nodes (238 GB per node). Without indexing, materializing the data, or caching the query results, the queries ran in 300 to 500 milliseconds, which in my opinion is close enough to real time. The tests also included ad-hoc analytic query scenarios where materialized views cannot easily be used, such as listing the top 100 customers in a sliding time window, or year-to-year comparisons for a given month or quarter.
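To make the sliding-window scenario concrete, here is a minimal sketch of the kind of ad-hoc query being described. It is purely illustrative: the table and column names (SALES_FACT, CUSTOMER_ID, REVENUE, ORDER_DATE) are hypothetical, and the point is simply that such a query runs directly against the raw fact table, with no aggregate or materialized view prepared in advance.

```python
# Illustrative only: building a "top 100 customers in a sliding time window"
# query against a raw fact table. All table and column names are hypothetical.
from datetime import date, timedelta

def top_customers_sql(window_days: int = 90, limit: int = 100) -> str:
    """Return an ad-hoc aggregation query over the last `window_days` days."""
    window_start = date.today() - timedelta(days=window_days)
    return f"""
        SELECT CUSTOMER_ID, SUM(REVENUE) AS TOTAL_REVENUE
        FROM SALES_FACT
        WHERE ORDER_DATE >= '{window_start.isoformat()}'
        GROUP BY CUSTOMER_ID
        ORDER BY TOTAL_REVENUE DESC
        LIMIT {limit}
    """

if __name__ == "__main__":
    print(top_customers_sql())
```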

 

In my opinion these tests demonstrate that SAP HANA offers linear scalability with sustained performance at large data volumes. Very advanced compression methods were applied directly to the columnar database without degrading query performance. The standard BW workload provides validation not only for SAP BW customers but for any data mart use case. This is the first time I have encountered a solution offering BW the potential to access raw transactional ERP data in virtually real time.

 

Data Management Architecture for the Next Generation of Analytics

Readers of this blog may also be interested to know that new business-intelligence-optimized databases such as HANA have inherent architectural advantages over traditional databases. Older database architectures were optimized for transactional data storage on disk-based systems. These products focused on transactional integrity during the age of single-CPU machines connected through low-bandwidth distributed networks, while economizing on then-expensive memory. The computing environment has changed significantly over the last decade. With multi-core architectures available on commodity hardware, processing large volumes of data in real time over high-speed distributed networks is becoming a reality thanks to products such as SAP HANA.

 

All in-memory Database Appliances are Not Created Equal

Apparently some solutions in the market, like Oracle’s Exadata, also cache data in Exalytics/TimesTen for in-memory acceleration. However, TimesTen is a row-based in-memory database, not a columnar database like HANA; columnar databases are faster for business intelligence applications. Oracle also uses these databases as an in-memory cache, unlike HANA, which is the primary data persistence layer for BW or a data mart. Therefore, in my opinion, Oracle’s solution is better suited to faster transactional performance but creates data latency issues for the real-time data required for analytics. From a cost and effort perspective it will also require a significant amount of tuning and a large database maintenance effort for ad-hoc queries (sliding time windows, month-to-month comparisons, and so on), because you are trying to reconfigure an architecture that was meant for transactional systems and deploy it for analytics.
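The row-versus-column distinction is easy to see in a toy example. The sketch below is a conceptual illustration only, not a depiction of HANA or TimesTen internals: it shows why an analytic scan over one attribute favors a columnar layout, where the values it needs sit together in memory.

```python
# Conceptual illustration (not HANA or TimesTen internals): summing one
# attribute in a row layout versus a column layout.
rows = [  # row store: each record is kept together
    {"customer": "A", "region": "EMEA", "revenue": 120.0},
    {"customer": "B", "region": "APJ",  "revenue":  80.0},
    {"customer": "C", "region": "EMEA", "revenue": 200.0},
]

columns = {  # column store: each attribute is kept together
    "customer": ["A", "B", "C"],
    "region":   ["EMEA", "APJ", "EMEA"],
    "revenue":  [120.0, 80.0, 200.0],
}

# Row layout: the scan walks complete records to reach the one field it needs.
total_from_rows = sum(record["revenue"] for record in rows)

# Column layout: the scan reads only the revenue column, which is also easier
# to compress because similar values are adjacent.
total_from_columns = sum(columns["revenue"])

assert total_from_rows == total_from_columns == 400.0
```

The same contiguity that speeds up the scan is also what makes dictionary and run-length compression so effective on columnar data.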

 

I hope this blog is useful and provides general guidelines to people interested in considering new database technologies like SAP HANA.

The biggest, best, and most inquisitive

The mission? To research, teach, and heal. Charité Berlin, the biggest university hospital in Europe, provides 150,000 inpatient and 600,000 outpatient treatments per year. The hospital’s 3,800 doctors and scientists are committed to the highest levels of healthcare and research – and the organization is equally committed to providing the accurate, timely reports and analysis required for success.

 

Charité already has a mature analytics program. This enables them to think creatively about how they use patient data, medical records, and study results within their business. Researchers wanted to look at millions of data points and ask questions in a flexible reporting environment – and they wanted to make their in-house analytics systems as fast and easy to use as a Google search. To make this possible, the hospital invested in SAP in-memory technology designed to harness the big data associated with medical records. Already, more than 600 users are taking advantage of the technology.

 

Getting creative within budget constraints

When the SAP HANA platform was first introduced at Charité, the hospital held workshops to brainstorm use cases, focusing on several key questions: What is possible, and where might the university make improvements? Where could the technology have the biggest impact on patient care and healthcare research? Can we use it to look for trends in patient cases?

 

For its first project with SAP HANA, the team decided to focus on the hospital’s cancer database and its use in selecting patients for clinical trials. A typical clinical trial involves a research partnership with a commercial sponsor such as a medical device or pharmaceutical company; the hospital has only a small window of opportunity in which to get the study, participants, and funding in place. The faster Charité can identify suitable study participants, the greater its chances of landing the study and conducting the research.

Rapid identification of clinical trial candidates

Charité now uses the SAP HANA Oncolyzer to analyze data merged from its cancer and medical administration databases to find the best candidates for each new trial. The Oncolyzer searches and examines information such as tumor types, gender, age, risk factors, treatments, and diagnoses, matching patients against the study criteria. In the future, when DNA is added to the data set, the Oncolyzer will analyze up to 500,000 data points per patient. Both structured and unstructured data are analyzed, greatly accelerating the identification process – and giving Charité a competitive advantage over other prospective research partners.
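As a purely hypothetical sketch of the kind of criteria matching described above (the field names, inclusion criteria, and patients below are invented for illustration and are not Charité data):

```python
# Hypothetical illustration of screening patients against trial criteria.
from dataclasses import dataclass, field

@dataclass
class Patient:
    patient_id: str
    tumor_type: str
    age: int
    prior_treatments: set = field(default_factory=set)

def matches_trial(p: Patient) -> bool:
    """Example inclusion criteria for a fictitious oncology trial."""
    return (
        p.tumor_type == "NSCLC"
        and 18 <= p.age <= 75
        and "prior_chemotherapy" not in p.prior_treatments
    )

patients = [
    Patient("P001", "NSCLC", 64),
    Patient("P002", "NSCLC", 81),
    Patient("P003", "breast", 55, {"prior_chemotherapy"}),
]

candidates = [p.patient_id for p in patients if matches_trial(p)]
print(candidates)  # ['P001']
```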

Creatively applying HANA to solve intriguing business problems

In an industry other than healthcare? SAP HANA is equally applicable to thousands of other business situations. For example, an oil company seeking to evaluate the potential of each well in its inventory can quickly analyze data that includes geography, flow history, current output, producer, supporting firms, and more, then match individual wells against particular objectives. Need to identify a well with certain characteristics or generate insights into deployed assets and maintenance cycles? No problem. Or suppose your company manages a large and thriving port. Need to optimize harbor management? An in-memory solution can analyze incoming ship data, meter information, average terminal time, and more. The applications are endless. Whatever the industry, whatever the type of information, SAP HANA can pull together structured and unstructured data and rapidly generate additional value out of every asset.


How do I get started?

What can SAP HANA do for you? Start by gathering your organization’s best minds to think about common industry problems. Think of assets in new ways, and ask new questions. What would you want to know if you could? SAP HANA combined with creative minds can take your company where it hasn’t gone before.

Need more inspiration? For ideas on how different industries have used SAP HANA and related technologies, check out the Experience SAP HANA site. Get inspired by insights from experienced practitioners – or get involved and share your experience.


Learn more

Business Analytics Services from SAP can help you explore the value of SAP HANA for your organization or industry. Now bring SAP HANA’s revolutionary in-memory technology together with your most thoughtful and innovative minds – and watch the insights, performance, and productivity thrive.

For more information on SAP HANA, please visit us online or watch a video on Charité.

 

About the author

What makes Ralph Richter run: Discovering new places.

Coming from the healthcare industry, I joined SAP in 2002 and currently head the SAP EMEA Business Intelligence Competency Center. I have been working with Charité since 2009 on in-memory technology, but am intimately familiar with hospital settings from my previous work as a controller in a university hospital. As a registered nurse (RN) with a diploma in hospital management, I am able to support this great work at Charité from a uniquely blended healthcare and technical perspective. Today I am doing strategy consulting in the BI area for oil and gas companies, hospitals, and many others – helping customers “discover new places” with in-memory computing. As a nurse, I learned to treat and take care of patients. At SAP, I’ve learned how to solve our customers’ business problems, bring value to their operations, and make them happy. This is what makes me run.

After migrating to BW Powered by SAP HANA, customers will experience significantly less IT workload than they had before with a BW system running on a traditional relational database. In the following, I want to discuss some of the innovations of SAP NetWeaver BW Powered by SAP HANA that help to reduce IT workload.

 

Simplified Administration

There are some technical artifacts that become obsolete after the migration to BW on SAP HANA. The most important ones with regard to IT workload are database indexes and InfoCube aggregates. Since most of the data is stored in columnar tables and all of the data is held in memory, indexes no longer need to be maintained and aggregates become completely obsolete. The same applies to the maintenance of BWA indexes: BWA becomes obsolete, which simplifies the landscape and helps to reduce IT workload.

 

Simplified Access to All Data
In the past, many operational data marts were implemented on different solutions in a system landscape. SAP HANA is the perfect solution for implementing operational and agile data marts. This can be done on the same HANA instance that BW is running on. Features like BW workspaces and TransientProviders enable customers to consume operational data marts, created with the HANA Modeler, without replicating any data or creating additional persistence in BW. This will have a tremendous impact on lowering IT workload, as it becomes much easier to combine different types of data marts. Since these operational data marts are implemented on the same platform, there is no need to extract the same data again into BW, create additional persistence, and rebuild logic that has already been implemented in the operational data marts. I recommend that readers take a look at Thomas Zurek’s blog on SDN, ‘How BW-on-HANA Combines With Native HANA,’ where these integration scenarios are described in more detail.

 

 

Data Model Changes Made Easy
A migration to SAP NetWeaver BW Powered by SAP HANA will also have an impact on the way data models are created. Since all data is in memory and reporting on DSOs is very efficient, data models will become simpler, with fewer InfoProviders and a leaner Layered Scalable Architecture. This also means less workload for IT when business requirements change, as the data models are much easier to adjust. Another point to mention here is the remodeling functionality of InfoCubes. Since InfoCube tables are now columnar tables, adding fields to or removing fields from an InfoCube can be done in a very short time without reloading data.

From an IT workload and support perspective, it is also important to mention that the activation time of HANA-optimized DSOs is reduced dramatically, on average by a factor of 10 to 15. DSO activation time has probably been one of the main pain points, especially with regard to managing update and load windows. Being able to deliver information to the business much faster, thanks to reduced DSO activation times and simpler data models, is a big improvement and makes it much easier for IT to meet SLAs even after a load has failed.
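As a rough back-of-the-envelope illustration only (the three-hour baseline below is an assumed figure, not a customer measurement), a factor-of-10-to-15 reduction turns an activation step that used to dominate the load window into a matter of minutes:

$$
t_{\text{HANA}} = \frac{t_{\text{classic}}}{k}, \qquad t_{\text{classic}} = 180\ \text{min},\; k \in [10, 15] \;\Longrightarrow\; t_{\text{HANA}} \approx 12\ \text{to}\ 18\ \text{min}.
$$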

As part of my role driving SAP HANA for lines of business and industries, I have the immense privilege of meeting with many customers from different regions to help them understand the new business value that can be derived from SAP HANA through existing as well as innovative use cases. While the opportunities offered by SAP HANA to meet very specific business requirements are simply tremendous, one question I always get concerns the specific use cases that SAP supports through standard solutions delivered on top of the SAP HANA platform. So, I thought I would write this blog for the benefit of the entire HANA community to give you an overview of these new solutions.

In fact, this is an important point, as SAP has an ambitious roadmap for so-called ‘Solutions powered by SAP HANA’. SAP has actually already released several of these solutions for businesses and consumers. The common denominator of these solutions is that they leverage the unique capabilities of SAP HANA to help link data insights to the proper business process contexts and enable meaningful business action.

 

We group these solutions leveraging the power of SAP HANA into two main categories:

 

Applications powered by SAP HANA

Applications provide a rich set of functionality that addresses a company’s existing business processes and/or enables innovations for new processes and business models. A good example is the SAP Smart Meter Analytics application powered by SAP HANA. SAP Smart Meter Analytics enables utility companies to turn massive volumes of smart meter data into powerful insights and actions. With this application, companies can instantly aggregate time-of-use blocks and total consumption profiles to analyze their customers’ energy usage by neighborhood, by the size of their homes or businesses, by building type, and by any other dimension, at any level of granularity.

Centrica, a multinational utility company, is using it to identify high-use customers and drive efficiency programs.
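To give a feel for what aggregating time-of-use blocks by an arbitrary dimension means in practice, here is a minimal, hypothetical sketch; the meter readings, block boundaries, and neighborhoods below are invented for illustration and are not part of the product.

```python
# Hypothetical sketch: aggregating smart meter reads into time-of-use blocks,
# grouped by an arbitrary dimension (here: neighborhood).
from collections import defaultdict

# (neighborhood, hour_of_day, kwh) readings -- invented sample data
readings = [
    ("Mitte",  7, 1.2), ("Mitte",  19, 2.4), ("Mitte",  13, 0.6),
    ("Pankow", 7, 0.9), ("Pankow", 13, 1.1), ("Pankow", 19, 1.8),
]

def time_of_use_block(hour: int) -> str:
    """Assign each hour to an illustrative peak/off-peak block."""
    return "peak" if 17 <= hour <= 21 else "off_peak"

totals = defaultdict(float)
for neighborhood, hour, kwh in readings:
    totals[(neighborhood, time_of_use_block(hour))] += kwh

for (neighborhood, block), kwh in sorted(totals.items()):
    print(f"{neighborhood:8s} {block:9s} {kwh:.1f} kWh")
```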

Another beautiful example of the innovative applications delivered by SAP is Recalls Plus. The Recalls Plus application allows consumers to search recall history by brand or product category and easily share relevant recalls with others. Users can also create a personal watch list of items that may be relevant for their child; for example, the list could include baby monitors, school supplies, apparel, and other infant and children’s products. The application is already available to consumers in the U.S. via the Apple App Store.

 

Accelerators and Analytic Content powered by SAP HANA

An accelerator is software that uses the power of SAP HANA to dramatically improve the performance of existing SAP Business Suite functionality in well-defined areas, bringing immediate value to customers. A good example is the SAP CO-PA Accelerator powered by SAP HANA. The SAP CO-PA Accelerator supports finance departments with real-time profitability analysis and reporting on large volumes of financial data in ERP.

One of SAP's customers went live with the CO-PA Accelerator in only 8 weeks. SAP also recently released the SAP Finance and Controlling Accelerator and the SAP Customer Segmentation Accelerator.

In addition, SAP also delivers analytic content powered by SAP HANA. Analytic content is software that complements existing SAP applications and provides pre-built reporting, dashboards, and data models that run natively on the SAP HANA platform. A good example is SAP Sales Pipeline Analysis powered by SAP HANA. With SAP Sales Pipeline Analysis, sales departments can get real-time insight into massive volumes of pipeline data in CRM while performing on-the-fly calculations and in-depth analysis on any business dimension. Another significant example is the ERP Operational Reporting with SAP HANA solution, which helps accelerate reporting and analysis on large volumes of ERP data.

The great majority of these solutions powered by SAP HANA can be deployed as rapid-deployment solutions to ensure a quicker time to value. Rapid-deployment solutions bring together software, best practices, and services for more predictability, with fixed-cost and fixed-scope editions. I recommend reading the insightful blog from Suzanne Knopp on the rapid-deployment solutions for SAP HANA; it covers this topic very well.

To conclude, I would say that these innovative yet non-disruptive real-time solutions powered by SAP HANA are designed to help transform the way organizations run their business: making smarter and faster decisions, reacting more quickly to events, unlocking new opportunities, and even inventing new data-driven business models and processes that simply were not possible before. With these solutions, SAP continues to bring more choice to customers: you can leverage the standard solutions delivered on top of SAP HANA to meet the specific business needs of lines of business and industries, AND you can also use the platform to cover your own use cases.


Many of these new solutions will be shown at the upcoming SAPPHIRE + ASUG Annual Conference in Orlando (14-16 May 2012). You will also have the opportunity to listen to, meet, and talk with customers who have already implemented these solutions.

I hope to see you there!

 

In the meantime, and as usual, I welcome your comments and thoughts on Solutions powered by SAP HANA.

With the acceleration in the collection of new data flowing through enterprises and extending into mobile, cloud, and social networks over the past few years, big data has become a leading topic in the IT industry. To put this into context, in 2011 some 1,200 exabytes of data were created – 8x the amount created in 2005! And the trend only continues, with global digital content doubling every 18 months.

 

This environment creates both a strain and a very large opportunity for enterprises. On one hand, companies risk drowning in data if they do not invest in tools to manage and disperse relevant information in a timely manner. On the other hand, knowledge is power, and the ability to take massive amounts of data and analyze it like never before is certainly an area where SAP HANA technology has only begun to scratch the surface for our customers. Designed to accelerate the reporting and analysis of large volumes of ERP data, SAP’s Rapid Deployment Solution for operational reporting has prepackaged content that contains pre-built HANA models and SAP BusinessObjects reports and dashboards for ERP, covering sales, purchasing, financial, shipping, and master data reporting. The solution contains roughly 30 reports and dashboards in different SAP BusinessObjects tools, namely Crystal Reports, Explorer, Dashboard Designer, and Web Intelligence. (In the documents attached to this blog you will find the details of the included reports and HANA models.)


As always with a rapid-deployment solution, you benefit from a fixed-scope, fixed-price service offering. The customer picks 5 reports from the list of available reports, and SAP Consulting implements the solution in about 6 to 8 weeks. If a customer wants to add further reports, this is of course possible with some additional effort.


This package is a clear starting point into the new HANA technology for SAP customers – they can optimize their core ERP reports and get a direct feel for how HANA technology empowers their existing landscape. The solution can be enhanced either by customers themselves or by partners, building additional ready-to-run reports or additional HANA models. So after beginning with the rapid-deployment solution, customers can then explore new use cases based on their very specific needs and situations.


To build additional reports you need SAP HANA Studio and a little experience working with it. If you want to build SAP BusinessObjects reports, you of course need knowledge of those tools.


To build new HANA models, you need knowledge of your data structures originating in ERP as well as knowledge of HANA data structures and of how to use the SAP HANA Studio tool.
