
The number of SAP Business Suite on HANA customers continues to grow rapidly, and with it the number of new customer success stories. In her new article, "Over 1,200 Customers and Growing," Lauren Bonneau, Managing Editor at Wellesley Information Services, sheds new light on five new stories from Sapphire Now 2014 in Orlando.


While many factors influence the rapid adoption of any new product in the market, three were clearly evident in the article: clear and compelling business value, lower total cost of ownership (TCO), and ease of deployment compared to existing market alternatives.


Lower TCO

I'll start with the lower TCO factor, as this is the one the article zooms in on during the intro. The lower TCO of Business Suite on HANA compared to classical databases is indeed a consistent theme referenced by customers and a strong driver behind the product's fast adoption. Several factors are consistently cited by our customers as contributors to the low TCO.


In the article, Unilever, Hidrovias and Pacific Drilling all talk about the simplified data footprints they have experienced after moving to Business Suite on HANA, while Mercedes AMG discusses how virtualization and Tailored Data Centre Integration allowed them to simplify the integration and operation of HANA within their own datacenter. Hidrovias do Brasil shared a cost analysis showing that running Business Suite on HANA Enterprise Cloud will result in 70% savings over five years compared to on-premise alternatives. For a more in-depth analysis of HANA's lower TCO, check out Forrester's Total Economic Impact study.


Business Value

While lower TCO is a strong value proposition, it cannot by itself drive the rapid market adoption of a new product. There also needs to be clear and compelling business value. As is apparent from the stories and videos in the article, customers today look at Business Suite on HANA as a way to move toward becoming a real-time business. In the case of Suzano, that meant gaining real-time access to data in the general ledger, real-time reconciliation, and real-time stock reassignment. In the case of Hidrovias do Brasil, it was capturing real-time logistics insights from their commodity transportation vessels in the waterways of Latin America. For Pacific Drilling, it was the ability to capture real-time insights from their deepwater drilling vessels off the coasts of Mexico, Brazil and Nigeria.


Beyond the immediate business value, all five customers saw Business Suite on HANA as an innovation foundation for the future of their business. Yoram Boradaty from Pacific Drilling summarized this nicely when he said, "We see our move to SAP Business Suite powered by SAP HANA (as a ramp-up customer) as not just a best approach to solve the problem of providing new insight to the business through real-time operational reporting and analytics, but it also positions the organization strategically for the future."


Ease of deployment

The third factor evident in the article was the ease of migration to HANA. Thanks to a strong, global partner ecosystem of certified hardware vendors and system integrators, Business Suite on HANA customers are able to go live with the solution within a short period of time and with no disruption to the business. In the case of Hidrovias, that time was only four months and is expected to go down to three months for future roll-outs. Suzano logged only six hours of HANA-related downtime during the migration, while Mercedes AMG reported no disruption to the business after going live in April 2014, despite being the first company to go live with Business Suite on HANA in a virtualized production environment.



Every day, new customer stories like the ones in this article are emerging, countering any remaining misperception about HANA lacking business value, being cost prohibitive, or being too complex to implement. To the contrary, it's for these very reasons that customers are choosing HANA.

SAP is committed to continuously enhancing the openness of SAP HANA, providing its customers and developer community with the utmost flexibility and choice. Openness helps customers achieve innovation, increase agility, and lower TCO. This year's SAP HANA product news certainly reflects this commitment.


Since the beginning of the year, SAP has introduced a number of new hardware and software innovations across all layers of the SAP HANA technology stack, further extending SAP's commitment to openness. Working closely with its partners, SAP has incorporated the latest technology advancements around microprocessors, memory, storage, networking, operating systems, and virtualization into the SAP HANA infrastructure. These innovations not only open up many new deployment choices for SAP HANA, they also deliver real business value to SAP customers through significant cost savings and dramatic performance improvements.


Here is a recap of the SAP HANA infrastructure innovations that we brought to the market this year:


  • On February 18th, Intel released a new generation of the Xeon family of microprocessors (E7 v2, also known as Ivy Bridge). The new Ivy Bridge processors provide up to 50% more cores, up to 4x the I/O bandwidth, and support for up to 3x more RAM than the previous generation. SAP's hardware partners were ready to deliver pre-certified SAP HANA Ivy Bridge appliances on day one of Intel's announcement. Thanks to the increased clock speed of the cores and twice the amount of available memory on the new microprocessors, most customers have experienced dramatic (often doubled) performance improvements when running their applications on the new SAP HANA Ivy Bridge appliances. Larger memory sizes enable higher scale-up capacities on single-node systems. For many customers, these higher scale-up capacities eliminate the need to deploy scale-out configurations, which can be more costly and more challenging to manage.

Today, there are more than 100 supported SAP HANA Ivy Bridge appliance configurations, and this number continues to grow with new configurations being introduced almost daily. Don't forget to check the SAP Certified Appliance Hardware for SAP HANA site for the latest, up-to-date list of supported HANA Ivy Bridge appliances.


  • Several important SAP HANA news announcements were made at this year’s Sapphire Conference:



SAP HANA on VMware provides a new deployment architecture and allows for better utilization of the underlying hardware infrastructure by enabling multiple SAP HANA virtual machines to run on the same host appliance. Customers can also leverage VMware template cloning for fast and easy provisioning of new SAP HANA instances. Zero-downtime maintenance is enabled via VMware vMotion support, allowing customers to dynamically move virtualized HANA instances to new HANA-certified hardware for scheduled maintenance. Finally, native HANA and VMware high availability capabilities can be combined for increased business continuity, and to further streamline and optimize data center operations for customers who have adopted data center virtualization strategies.



SAP HANA on RHEL provides customers with an additional deployment choice. Customers that already run Red Hat in their IT landscape can now also deploy SAP HANA on Red Hat, thus leveraging existing skills and standardizing their business operations within the data center. The platform also provides military-grade security technology such as Security-Enhanced Linux (SELinux), typically required by companies in financial, government, and other highly regulated sectors.


As of today, several SAP partners (e.g. Cisco, Dell, Fujitsu, IBM, and NEC) already have RHEL-based SAP HANA appliances available, and this list has been growing rapidly. You can find the list of certified vendors and configuration information at SAP Certified Appliance Hardware for SAP HANA.


Many businesses today run their mission critical SAP applications on IBM Power Systems. SAP HANA on IBM Power provides our joint customers with a choice to preserve/extend their investments (in infrastructure, people, processes) on Power, while adopting next generation technology like SAP HANA to drive innovation in the future. This will further expand the deployment options for SAP HANA, providing customers who want to standardize on IBM Power with the option to deploy SAP HANA on their infrastructure platform of choice (when it becomes generally available).

An early adopter program that will allow customers to go live with single node, scale-up SAP Business Warehouse or ERP on SAP HANA on Power 7+ is planned for the end of November (in the context of HANA SPS09).


Also at Sapphire, SAP briefed its partners that it will further relax SAP HANA hardware requirements for non-production systems and extend SAP HANA appliance reference configurations for production systems.  SAP has made great progress on both of these topics since Sapphire. 


In the case of SAP HANA non-production systems, hardware requirements have always been less stringent than those for production systems. For example, customers could deploy multiple workloads and consolidate their development and test systems by sizing the non-production systems at a 2x higher core-to-memory ratio than that used for production systems. Last month, SAP took additional steps to further relax hardware requirements and lower cost for SAP HANA non-production systems:


  • The low-end family of E7 processors (e.g. Westmere EX Xeon E7-x8xx or Ivy Bridge EX Xeon E7-x8xx v2) is now allowed in non-production systems, and there are no further restrictions on the core-to-memory ratio. For example, customers can over-provision and use as much main memory as they want if they are willing to accept lower performance on their development/test systems.
  • The minimum storage-to-memory ratio requirement has been relaxed from 5x to 2x for non-production usage.
  • Storage KPIs are no longer enforced in non-production systems. For example, customers can use any local storage or shared storage with standard disks.


As for the SAP HANA production systems, several new reference configurations are now being approved for SAP HANA Ivy Bridge appliances:


  • The list of supported Ivy Bridge configurations has been enhanced with additional HANA "T-shirt" sizes: 2-socket 384 GB and 512 GB, 4-socket 768 GB, and 8-socket 1.5 TB for OLAP scale-up, plus 2-socket 512 GB for OLAP scale-out. New configurations with more granular memory sizes have also been added for OLTP loads, including 4-socket 3 TB and 6 TB, and 16-socket configurations ranging from 128 GB to 12 TB.

These more granular memory options and larger memory sizes will not only provide more choices for SAP HANA customers, they will also provide significant cost savings by making it easier for customers to buy only as much memory as they need. In addition to this, the ability to scale-up to 12TB on a single node will enable many customers to remain on their single systems longer before they have to scale-out, reducing TCO.


  • SAP has also relaxed the storage requirements for single-node, scale-up systems. The storage requirement, "The storage media itself needs to be protected from failures," can now be met with a single SSD card in the system. This will further reduce the cost of SAP HANA entry-level systems.
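To make the effect of more granular T-shirt sizes concrete, here is a minimal sketch (a hypothetical helper, not an SAP tool) that picks the smallest certified memory size covering a sized requirement, using the configurations mentioned above:

```python
# Illustrative sketch: given a sized memory requirement, pick the smallest
# certified appliance "T-shirt" size that fits. The size list is drawn from
# the configurations mentioned in this post.

TSHIRT_SIZES_GB = [384, 512, 768, 1536, 3072, 6144, 12288]

def smallest_fitting_size(required_gb: float) -> int:
    """Return the smallest certified memory size (GB) >= the requirement."""
    for size in sorted(TSHIRT_SIZES_GB):
        if size >= required_gb:
            return size
    raise ValueError("Requirement exceeds the largest single-node size; "
                     "a scale-out configuration would be needed.")

# With only coarse 512 GB / 1 TB steps, a 600 GB requirement forces a 1 TB
# box; with the more granular list above, 768 GB suffices.
print(smallest_fitting_size(600))   # 768
```

The finer the size ladder, the smaller the gap between what a customer needs and what they must buy, which is exactly the cost-saving argument above.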


In summary, the SAP HANA ship is steadily moving deeper into the sea of openness. We are constantly adding new hardware and software choices that provide customers with increased flexibility and cost benefits when deploying SAP HANA into their data centers. Stay tuned for SAP news on the many new and exciting innovations that are coming very soon…



Related Readings:


Suite on HANA, Simple Finance, and a really cool explosion video as a bonus


Ever since returning from SAP SAPPHIRE in June, I have wanted to write this blog. The reason is that I was so excited to see the official announcement of both SAP Simple Finance and the HP ConvergedSystem 900 for SAP HANA in pretty much the same keynote. And why the excitement? Because it brings together


  • a tremendous additional business value proposition for Suite on HANA and
  • the largest scale-up HANA appliance with up to 12TB of RAM for truly mission-critical deployments like ERP and CRM.


And this, in my humble opinion, changes the cost-benefit ratio for migrating a customer’s SAP ERP system to HANA so drastically that we will see increasing HANA adoption in this key HANA segment. But what makes me so certain about this? Read on to find out!



No more aggravating aggregates


While SAP HANA has improved the traditional SAP Business Suite applications in many areas like Materials Requirements Planning (MRP) and Operational Reporting, improvements so far have been based on targeted optimizations of existing SAP programs for SAP HANA. With Simple Finance, SAP has for the first time fully rewritten and refactored a major Suite component in its entirety and has been able to take full advantage of the power of SAP HANA.


The original trick of Simple Finance was to remove aggregates and the materialized tables they are stored in. Pre-HANA, these were critical to keeping system response times acceptable, but they had the side effect of increasing system management overhead while decreasing business agility. With SAP HANA, such tricks are no longer needed, since reporting against the line-item data is extremely responsive and the additional load is highly affordable.
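As a toy illustration of the point (hypothetical data, not SAP's actual table design), compare maintaining a materialized totals table on every posting with simply aggregating the line items on demand:

```python
from collections import defaultdict

# Hypothetical finance line items: (cost_center, amount)
line_items = [("CC100", 250.0), ("CC200", 100.0), ("CC100", 75.0)]

# Pre-HANA style: a materialized totals table that every posting must
# also update -- an extra write that must never drift out of sync.
totals = defaultdict(float)

def post_with_aggregate(cost_center: str, amount: float) -> None:
    line_items.append((cost_center, amount))
    totals[cost_center] += amount

# HANA style: no totals table; reporting aggregates line items on demand.
def totals_on_the_fly() -> dict:
    result = defaultdict(float)
    for cc, amount in line_items:
        result[cc] += amount
    return dict(result)

post_with_aggregate("CC200", 50.0)
print(totals_on_the_fly())   # {'CC100': 325.0, 'CC200': 150.0}
```

Dropping the redundant `totals` structure is what removes both the management overhead and the rigidity: a reorg only has to reinterpret line items, not rebuild materialized tables.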


Hasso Plattner explains this nicely in his recent blog, "The Impact of Aggregates," which makes for some interesting reading. He also touches on a situation I have seen in the past: those oh-so-rare changes in organizational structures, aka YAR (Yet Another Reorg). So what happens in a traditional SAP Finance system during a management-driven reorganization? Here's a sampling:


Reorgs entail changes in reporting relationships as well as product ownership. And while reporting relationships are simple lines in PPTs, they have very complex permutations when you look at compensation, bonus pools and cost allocations for overhead. It gets even more complex looking at the product/services side and accounting for cost of goods sold, sales and revenue, all represented in cost centers, profit centers or regular accounts as part of your company ledger and/or sub ledger. For the finance department to manage and report on all these data points in a responsive manner, aggregates for revenue-per-product-line or compensation-per-department, for example, were stored in materialized tables in traditional finance applications before HANA.


Now, when a reorg happens, several or all of these assignments of people, material cost, and revenue to cost/profit centers are changing. To complicate the matter, the active date of the changes may be in the past or future and/or the business would like to simulate the proposed changes before committing to them. And while SAP has built a process called Profit Center Reorganization to facilitate such reorgs, a quick look at this help page describing the necessary steps will give you an appreciation of the limitations and the prohibitive complexity of such a reorg.


Per Hasso Plattner, “the simplification is huge and makes the whole accounting system so much more flexible.” Having been part of re-org projects in the past, I could not agree more.


In short, SAP Simple Finance is a true game-changer for SAP customers and will make the business benefits of HANA highly tangible for business stakeholders. But how about the other two discussion points in every Suite on HANA conversation:


  • Sheer size of the ERP database, and
  • The mission critical nature of ERP systems requiring 24x7 availability?


It’s true, bigger is better!


Most SAP customers started with moderate database sizes when first implementing SAP ERP. Over the years, with a growing worldwide and functional footprint, acquisitions, plus years of historical data, ERP database sizes have grown into the double-digit TB range. At Hewlett-Packard, for example, one of the core SAP ERP production systems has a massive volume of 40TB.


While SAP HANA is able to scale out linearly for analytic use cases, and it is well understood how to build large landscapes from smaller interconnected nodes, SAP does not yet support this for Business Suite on HANA. That means the only possible and supported way to implement Suite on HANA is a single scale-up system configuration. Let's consider for a minute the HP example with 40TB of uncompressed ERP data. It is obvious that this single system requires a massive amount of RAM, so let's do some math:



  • HP's ERP system size
  • Typical HANA compression ratio for ERP
  • Compressed HP ERP data volume
  • HANA appliance RAM allocated for ERP data versus working RAM
  • Theoretical size of HP's ERP on HANA system




As the calculation shows, HP IT would theoretically require a single HANA appliance with 16TB of RAM! Since such a large HANA appliance does not exist, the HP IT team is well underway with archiving activities to tame the database volume and make the transition easier. So how much data will they have to shave off to fit into a single SAP HANA appliance?
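A rough sketch of the math follows; the compression ratio and the working-memory split are illustrative assumptions rather than official SAP sizing rules, chosen here to reproduce the 16TB conclusion above:

```python
# Back-of-the-envelope HANA sizing for HP's 40 TB ERP system.
# The 5x compression ratio and the rule of thumb of keeping roughly half
# of appliance RAM free as working memory are illustrative assumptions.

uncompressed_tb = 40
compression_ratio = 5                                   # assumed ERP compression on HANA
data_in_ram_tb = uncompressed_tb / compression_ratio    # 8 TB of column data in memory
working_memory_factor = 2                               # total RAM = 2x resident data
required_ram_tb = data_in_ram_tb * working_memory_factor

print(required_ram_tb)   # 16.0
```

Run the same arithmetic backwards and the archiving target becomes clear: a 12TB appliance holds about 6TB of data under these assumptions, so the source system would need to shrink accordingly.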


SAP and HP co-innovation: the out-sized scale-up businesses need


In early 2013, SAP challenged HP to build the largest available scale-up HANA appliance for mission critical customer environments. HP’s mission critical server team took the challenge and the two companies teamed up for deep co-innovation. The result: In June 2014, after diligent joint engineering and testing, HP announced HP ConvergedSystem 900 for SAP HANA (CS900) during Bernd Leukert’s SAPPHIRE keynote.


The breakthrough CS900 HANA appliance, based on the latest Intel architecture with 16 Xeon E7 (Ivy Bridge-EX) CPUs, is the only system on the market that provides 12TB of RAM, compared to only 6TB for the next largest competing system. And best of all, the CS900 solves HP IT's challenge of migrating their 40TB system to SAP HANA (after a little housekeeping, as mentioned earlier).


While I am discussing the 12TB scale-up solution in this blog, the picture below shows other available configurations as well as the key common components. Please note that this is only the beginning of the product roadmap. There are more versions of CS900 to come.



Picture: System family for HP CS900 for SAP HANA




For when failure is not an option


Compare a company's ERP system to the cardiovascular and nervous systems of the human body. Failure of those systems quickly leads to life-threatening situations, and so can an extended ERP outage for a corporation. Therefore, any company considering moving its ERP system to a new platform has to ensure the new platform provides high availability and disaster recovery; in other words, business continuity. So, how does the HP CS900 stack up to this requirement?


The 12TB scale-up solution is built on Superdome 2 and ProLiant technologies, which trace back to the legendary Tandem NonStop fault-tolerant systems originally built for banks and stock exchanges. The high fault tolerance of the Tandem systems of the past, as well as of the HP CS900 for SAP HANA today, is based on high levels of RAS (Reliability, Availability and Serviceability), protecting data integrity and allowing for long periods of uninterrupted system uptime. These high RAS levels are achieved through built-in redundancies and complex self-check and self-healing mechanisms, and they are key to minimizing the risk of failure for a mission-critical SAP ERP system. But unfortunately, even the best systems can fail due to individual components or outside forces. What then?



But what happens when your datacenter explodes?


SAP HANA has come a long way; since SPS6 it has provided all the required high availability and disaster recovery (HA/DR) capabilities for failing over single nodes or entire systems, either locally or to secondary data centers. However, these failover scenarios require a certain level of human intervention when relying on native SAP HANA capabilities alone. I think this is why expert opinions are split on the topic of HANA's data center readiness. Here is my take:


SAP offers mature and flexible data replication technologies; failure detection and automatic resolution, however, are available via complementary third-party solutions from all major system vendors for their respective HANA offerings. The capabilities and maturity of these offerings differ, and the broad majority of vendors offer only storage-based replication. The only two system-replication-based offerings, complementing and working on top of the SAP replication technologies mentioned above, are HP ServiceGuard and SUSE Cluster (in beta); they work directly on the in-memory layer within SAP HANA.


A full comparison of the two approaches is beyond the scope of this blog, but I hope I have brought across the point that there is a large variety of HA/DR solutions available in the market, and that a mission-critical Business Suite on HANA can be operated in a highly risk-mitigated fashion. The following comparison summarizes the two major approaches to HA/DR at a high level.




Storage Replication
  • Vendors: HP, IBM, Hitachi, Cisco, Dell, Fujitsu, NEC, VCE, Huawei, Lenovo (China only)
  • Supported HANA use cases: scale-up and scale-out
  • Replication strategies: synchronous and asynchronous
  • Bandwidth requirements: replication of partial transactions results in costly roll-backs when a transaction is cancelled, e.g. due to failure
  • Disaster recovery (*): performance optimized: slow; cost optimized: slow
  • Hardware vendor dependent

System Replication
  • Vendors: HP ServiceGuard; SUSE Linux Cluster (in beta)
  • Supported HANA use cases: scale-up and scale-out (scale-out with HP only)
  • Replication strategies: synchronous and asynchronous
  • Bandwidth requirements: no transmission of cancelled transactions, since replication happens only after a full commit; in sync mode, only log files are transferred continuously with the transaction commit, the rest asynchronously, driving lower bandwidth
  • Disaster recovery (*): performance optimized: fast; cost optimized: medium
  • Infrastructure agnostic
  • Additional capabilities: zero downtime management (aka NetWeaver connectivity suspend); cascading multi-tier system replication (HP only)
  • Key roadmap capability: active/active operation (read-only reporting on the secondary failover site)

(*) Performance optimized: the secondary system is used entirely to prepare for a possible take-over; its resources are used to pre-load data, so take-over time and the performance ramp-up are shortened as much as possible. Cost optimized: non-production systems operate on the secondary; resources are freed (no data pre-load) for one or more non-production installations; during a take-over, the non-production operation has to be ended, and take-over performance is similar to a cold start-up.




A picture (or video) is worth a thousand words


The question remains: how do these large-scale, mission-critical systems behave in a live disaster recovery situation? Let's have a look at this video showing exactly such a failover process from a primary to a secondary data center. Admittedly, the circumstances are man-made, but close enough to a real disaster in my opinion. See for yourself.




Video: HP Disaster Proof Solutions







Cool stuff, no?


I hope you can see the business potential of SAP Business Suite powered by SAP HANA with solutions like SAP Simple Finance, as well as a very feasible, low-risk path toward implementing this solution within your company. I personally believe this solution is game-changing. Not only does it greatly improve existing company processes like finance or MRP, it also offers vast opportunities for moving your company to an entirely new level. Here is one final example:


Also in June at SAPPHIRE, I met a mid-size German producer of industrial equipment (from the so called “Mittelstand”) which is migrating all its business systems to SAP HANA. This is all part of the company’s goal to transform itself from a product-centric company to a services-centric company – one that leases its products to customers and complements them with rich services focusing on predictive maintenance. Think the “Internet-of-Things” (IoT).


With SAP HANA and Business Suite on HANA on a platform for mission-critical operations, every SAP customer has the opportunity to up their game, increase business and IT efficiency, and expand into entirely new business and solution areas. And this is great!


Thanks for reading; I look forward to your comments. Please also have a look at this technical brief for HP ConvergedSystem 900 for SAP HANA to find out more.





PS: I thought this SPECjbb2013-MultiJVM benchmark might also be worth a look…


PPS: A word about myself: I have been part of the SAP HANA journey from the early days as an employee of SAP Labs in Palo Alto. I recently (re-)joined Hewlett-Packard as the SAP Chief Technologist within HP's SAP Alliance organization and am glad that I am still working on this exciting topic.




In recent quarters, SAP has announced Tailored Datacenter Integration (TDI) and Virtualized SAP HANA to provide production deployment flexibility and better alignment with existing and established data center processes and infrastructure standards. These recent innovations help SAP HANA customers realize a better return on investment (ROI) and reduce total cost of ownership (TCO).


To further lower TCO and accelerate the already breathtaking adoption rate, SAP has recently relaxed SAP HANA infrastructure requirements for non-production usage, such as development and testing, where performance is not the focus. What does this actually mean for SAP HANA users, including enterprise customers and partners? It will further alleviate budget constraints by making the majority of commodity infrastructure components available for SAP HANA non-production usage, in either a virtualized or bare-metal environment, without certification. It will help SAP HANA customers radically accelerate their rate of innovation.


My colleagues Adolf Brosig, Zora Caklovic and I would like to highlight the following:


  1. CPU: Starting with the lower end of the Intel E7 family of processors
    • Intel Ivy Bridge v2 CPUs and related chipsets are certified for SAP HANA production usage at a clock speed of 2.8 GHz with a turbo speed of 3.4 GHz. However, earlier generations of processors in the Intel E7 family with substantially lower cost, such as Westmere EX (E7-x8xx) or Ivy Bridge EX (E7-x8xx v2), are now also available for non-production usage. Customers now have much more flexibility and are able to re-use some existing infrastructure components for software development and testing with SAP HANA.
  2. Memory: Maximum main memory available in a host machine
    • With Intel Ivy Bridge v2, for production analytical workloads, SAP has been recommending a ratio of 256GB of main memory per CPU. This ratio is not enforced in a non-production environment: SAP HANA customers can use as much main memory as is available in a host machine for development and testing tasks, where performance is not the focus.
  3. Storage: Any storage, including RAID 0, on proven file systems
    • As performance KPIs are not enforced for non-production usage, SAP HANA users can use any local or shared storage with standard disks; even RAID 0 is supported. The minimum storage-to-memory ratio requirement is also relaxed from 5x for production to 2x or even less for non-production usage. Also, any proven file system, such as XFS, NFS, or GPFS (IBM only), can be used for log and data volumes in a non-production environment.
  4. Network: Any standard networking components
    • In a typical production environment, 10Gbit/s or faster connections are required for performance optimization. For a non-production environment, any commodity networking components can be used, and customers will likely be able to re-use existing network infrastructure for these non-production tasks.




Image: Relaxed SAP HANA hardware requirements for non-production usage



We are in an exciting phase of the SAP HANA revolution. Besides certified SAP HANA appliances, customers now have much more flexibility with TDI and virtualized environments for production usage, and simplified infrastructure requirements for non-production usage. All of these innovations will further lower TCO and accelerate SAP HANA adoption.


Disclaimer: SAP HANA support for production environments is provided only on the certified appliances listed in the reference section below.



  1. SAP HANA Virtualized:  SAP Note 1788665
  2. Certified SAP HANA appliance configuration on Intel E7 V1 (Westmere) processors.
  3. Certified SAP HANA appliance configuration on Intel E7 v2 (Ivy Bridge) processors.
  4. Certified SAP HANA Enterprise Storage Hardware supported under the HANA Tailored Data Center Integration (TDI) program.
  5. Recommended Customer Price for Intel Xeon Processor E7 v2 Family
  6. Recommended Customer Price for Intel Xeon Processor E7 Family

This truly wowed me and really changed my perspective on how Big Data can change the very essence of how we approach things.


During my regular routine of scanning news articles over a coffee before settling down to real work, I read a Time article on the first televised debate on Scottish independence. It highlighted what was said the night before: who won based on a poll, what was done right, and what each side had to adjust going forward. Standard post-debate analysis I have fully come to expect. (Read it here.) The next day on Reddit, I came across a time-lapse visualization of sentiment-based Twitter data captured during the debate. It totally blew me away and fundamentally changed the way I look at Big Data. I thought, OMG, holy bat droppings, Batman, this changes everything! [Yes, I am that old and know Adam West as more than the mayor of Quahog.] Suddenly I saw real potential scrolling before my eyes: the need and use case for both real-time and post-event analysis.

Image: Twitter sentiment visualization of the Scottish independence debate

You can see the ebbs and flows of the yes and no sentiment as the debate progresses. (View it here.) And I envisioned the potential and need for real-time analysis. During the debate, you can watch the sentiment shift in real time, and with this feedback you can make in-the-moment adjustments and alter emphasis on the fly. The support team viewing this real-time data can coach the speaker via an earpiece, an iPad on the podium, hand signals, or a sign board in the front row: hit them again on that, go back to this, avoid that, focus here. It is rather like the way F1 drivers, speed skaters, and cyclists in a velodrome get in-race advice from their support staff so they can make in-race adjustments.

It reminded me of a story on the use of Twitter presented at a peer group I was in some five years ago. In brief, during a trade show, someone in the office was monitoring Twitter, and when somebody tweeted that they were not grasping the concepts from a presentation, this was conveyed to staff on the floor. They located the person, spent some one-on-one time clarifying the concepts, and got a subsequent tweet praising the company. So the concept is not new, just the scale, volume, velocity, and automation of the process. And to think Twitter is just eight years old. Of course, you need technology capable of dealing with the volume and velocity of the data stream and able to do real-time analysis on it.


Further, I recognized the need for, and value of, this information in post-event analysis. You can see at what points during the debate yes or no sentiment surged and receded, with the size of the response graphically represented on a map. Not only can we correlate this with what was being said, and by whom, at that point in the debate; knowing the geographic location, we can correlate it with a lot of other data. What are the demographics of the area? What is the median age, level of education, income level, proportional type of occupation, religious adherence, political affiliation, level of charitable donation, average political donation and to whom, population density, marital status, divorce rate, number of out-of-country vacations, crime rate, average spend per shopping trip, or even handedness, favorite beverage, or most popular car color? You never know what kind of correlations will pop up.

All this means complex, ad hoc, iterative work on massive data sets, along with the Twitter and geospatial data. It sounds great, provides a lot of insight, and drives future strategy and action. Of course, you will need a solution capable not only of real-time analysis but also of handling complex queries on massive, diverse data sets, with a geospatial engine, natural language capabilities, suitability for ad hoc queries, and rapid response times to allow for iteration and what-if analysis. And all this in a simplified manner, without a lot of complexity and fuss. That sounds a lot like what the SAP HANA platform offers, along with partners such as Esri and intuitive visualization tools such as Lumira.
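As a minimal sketch of the real-time side (hypothetical tweets and a trivial bucketing function, nothing like the actual pipeline behind the visualization), sentiment counts per minute could be computed like this:

```python
from collections import Counter
from datetime import datetime

# Hypothetical pre-classified tweets: (timestamp, sentiment), where
# sentiment is "yes" or "no". A real pipeline would classify the tweet
# text itself and carry geo-coordinates for the map view as well.
tweets = [
    ("2014-08-05 20:01:10", "yes"),
    ("2014-08-05 20:01:40", "no"),
    ("2014-08-05 20:02:05", "yes"),
    ("2014-08-05 20:02:30", "yes"),
]

def sentiment_per_minute(stream):
    """Bucket sentiment counts into one-minute windows for a timeline view."""
    buckets = {}
    for ts, sentiment in stream:
        minute = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").strftime("%H:%M")
        buckets.setdefault(minute, Counter())[sentiment] += 1
    return buckets

for minute, counts in sorted(sentiment_per_minute(tweets).items()):
    print(minute, dict(counts))
```

Streaming this per-minute series to a dashboard is the "ebbs and flows" view; joining the same buckets against demographic data by location is the post-event half.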

  • Learn how the SAP HANA Platform for Big Data can help you take a simplified approach to making Big Data work for you in deriving true value from Big Data for your business at www.sap.com/bigdata

This week, I read a survey from the Americas' SAP Users' Group (ASUG) about the adoption of SAP HANA. ASUG is an important institution for SAP and our customers. We consider it to be a powerful voice for our technology users - one we take very seriously and continue to support. There is no reason for SAP to exist, save for our customers and their success, and that's why it's important to shed light on the survey and what it means for our customers.

According to the survey data, we have a high percentage of customers who haven't yet purchased SAP HANA and cite a lack of understanding of use cases for the technology. Fair enough. SAP can always do more to ensure customers understand the opportunity SAP HANA gives them to reduce costs and simplify and accelerate their business processes. I get the intent of the question, but asking people who don't own HANA what they think of it is like asking people who have never driven a car what they think about the speed and handling capability of a Porsche. On the other hand, Unilever, which has deployed HANA, has this to say: “The best thing that SAP HANA at Unilever enabled was face-to-face discussions with our functions to really investigate new business scenarios, new business processes, and to drive into new territories. Besides the acceleration, the user experience improved significantly. We are asked by multiple users for more and more HANA-empowered solutions. It was probably the first time in my role that I really delighted the end-users and made their life easier." - Thomas Benthien (sp), Global IT Director at Unilever.

One set of data I found most interesting was that nearly 75% of SAP customers who said they have no plans to implement SAP HANA at this time believe that SAP will support their existing environments into the future, or for at least five years or more. To me, this feels like a great thing. Customers don't feel forced or pressured to do something. That's the antithesis of what people perceive software companies to be. Furthermore, it's the right thing to do – support our loyal customers and partners who invested in SAP a decade or two ago rather than only focusing on the here and now. That's why people choose SAP.

For customers who are maintaining their legacy environments - while SAP will continue to support them, the marketplace is less forgiving. Big data, IoT, cloud, and mobility have converged to create a business environment that is changing at an incredibly rapid pace. Bottom line - HANA is the bridge between rapidly changing technology, fast-moving market trends, and businesses that must continue to generate revenue in a constantly moving marketplace.

For customers who ARE using HANA, nearly half of all the respondents said they found significant ROI in areas like simplification and cost reduction. This means we are winning the battle against complexity by helping customers run simpler. We just need to work harder to clear the path for the rest of our customers. We are just beginning to explore the impact HANA is having on business transformation goals, increased revenue generation, and time and resource savings. Survey results on this type of business impact could prove to be the tipping point that makes HANA a complete no-brainer.

I learned a lot by reading the survey. I learn more every day by speaking directly with our customers. When I hear them tell me how challenged their businesses are by the weight of legacy, complex enterprise software and hardware systems, it strengthens my belief that HANA is the way forward. Clearly, companies want to run simple. In fact, our CEO, Bill McDermott, has said many times that “Run Simple” is the new era in enterprise computing. It turns out he is right and so are our customers.

I have a key takeaway and an invitation. The takeaway is that our customers need us to step up and help them solve the big problems across their technology landscape and business processes…but do it in a way that's easy to understand with clear ROI and use cases. We are fully committed to this.

The invitation is for all customers who want to transform their business and capitalize on the opportunities that have been opened up through technology innovation, big data, IoT, cloud, and mobility…deploy HANA. Email us, tweet us, call us…we will help you find the right use case to grow your business faster than your competitors and lower your costs at the same time. If you need an environment to evaluate HANA, we can help you with that too! Join us and we WILL help you succeed!


Access the links below to find more than 1,000 use cases, customer references/stories, and deployment scenarios showing lower TCO.



At a recent ‘Big Data for Industries’ roundtable at our UK headquarters, my colleagues and I discussed how different sectors are challenging convention to capitalise on the opportunities offered by data analytics. Many interesting points were debated, so I wanted to take the opportunity to share some of the key take-aways from our discussion through this blog post.


In today’s data-driven economy, businesses across industries are harnessing big data analytics to cope with the economic pressures they face, to innovate through new services, and to gain a competitive edge. Financial services, scientific research and retail are prime examples of sectors where data analytics is propelling fundamental changes in how things are done and generating long-term positive outcomes for organisations and society as a whole.


In financial services, big data is freeing businesses from the constraints of years of data hoarding by simplifying analysis and enabling more meaningful insights. This enables organisations to sharpen customer service, liquidity risk management, business predictions, claims management, fraud detection and compliance.


For example, a leading health insurance company uses SAP HANA to analyse data for more than six million hospital cases each year to identify potential health risks to its clients, improve patient care, and offer preventative measures that extend life.


One of our banking customers uses SAP HANA for profitability analysis across 31 markets in Europe. Before, the company ran its analysis on a monthly basis, covering one market and one product only. This took hours. Thanks to SAP HANA’s in-memory capabilities, the company is now able to analyse all products and 31 markets simultaneously, drill deep into individual customers, and to run the analysis on demand, in a matter of minutes. This has had a fundamental impact on the business, making it more agile and enabling faster decision making.


While many banks have embraced technology, a recent study by SAP and a group of leading academic institutions found that more needs to be done to close the gap between regulators’ expectations and banks’ ability to meet compliance and reporting requirements. The study identified mobile, in-memory computing and cloud as the biggest technology trends to shape banking in the future.


Big data is helping to advance people’s understanding of the planet’s biodiversity too. SAP is currently working with the International Barcode of Life (iBOL) project to help accelerate the building of a database containing DNA barcodes for every species on the planet. The database hosts more than 400,000 species at present, but to identify all the species on the planet (estimated at between 10 million and 100 million), iBOL is looking to expand the number of people contributing to the research. SAP and iBOL have developed the LifeScanner app, available on iTunes, to crowdsource the collection and analysis of DNA barcodes. Anyone can use the app to collect a tissue sample or whole organism, send it off for analysis and get a species identification using DNA barcodes. The published data will be made available to researchers and students for analysis through SAP HANA and SAP Lumira.


Data analytics will soon be used to tackle fraud in the food supply chain too, as mislabelling continues to plague the industry. It can be hard to tell apart dry oregano and basil, for example, because certain herbs and spices can look very similar after processing and packaging. To help address this challenge, SAP and Tru-ID are exploring solutions to increase visibility in the supply chain using SAP HANA and DNA-based verification testing.


In retail, big data is both a necessity and a strategic advantage, and data analytics plays a key role at three levels: store placement, assortment strategy and customer engagement.


Think about a high-end clothing store at an airport. At the top level, thanks to big data, the location of the store is not random – it’s close to a gate with a high proportion of affluent passengers to maximise sales. At the second level, the store uses data to choose the products sold based on the type of shoppers that walk through the doors at different times of year – for example ensuring that the store is stocked with children’s t-shirts during school holidays. Drilling deeper, the third level is how the store uses data on its customers and its stock to sell more effectively. This could include making recommendations based on what the customer bought on their last visit, or even what the weather is like in their destination.


Big data analytics is empowering retailers to meet the expectations of their increasingly discerning customers, who expect a connected shopping experience – both online and offline. The popularity of online shopping means that consumers are used to being given recommendations based on their previous purchases, and receiving discount codes to thank them for their loyalty. Data analytics enables retailers to join the dots across their online, offline, mobile and social channels to create a real time 360° view of their customer and provide a better omni-channel customer experience.


Banks, scientists and retailers are leveraging big data to simplify how they operate, enable faster decision making and to serve their customers more effectively, demonstrating the growing value of data across industries. It’s all about challenging convention and asking yourself, “Is there a better way of doing this?” For organisations wanting to turn the masses of data they gather into meaningful, actionable insights, the opportunities are endless.


Twitter: @i_kHANA

This is yet another question that I get from all angles: partners, customers, but even colleagues. BW has been the spearhead SAP application to run on HANA. Actually, it is also one of the top drivers for HANA revenue. We've created the picture in figure 1 to describe - on a high level - what has happened. I believe that this not only tells a story about BW's evolution but also underlines the overall HANA strategy of becoming not just a super-fast DBMS but a compelling and powerful overall platform.

Fig. 1: High level comparison between a classic BW and the two versions of BW-on-HANA. Here as PPT.

Classic BW

Classic BW (7.3ff) follows the classic architecture with a central DBMS server with one or more application servers attached. The latter communicate with the DBMS in SQL via the DBSL layer. Features and functions of BW - the red boxes in the left-most picture of fig. 1 - are (mostly) implemented in ABAP on the application server.

BW 7.3 on HANA

At SAPPHIRE Madrid in November 2011, BW 7.3 was the first version to be released on HANA as a DBMS. There, the focus was (a) to enable HANA as a DBMS underneath BW and (b) to provide a few dedicated and extremely valuable performance improvements by pushing the run-time of certain BW features down to the HANA server. The latter is shown in the centre of fig. 1 by moving some of the red boxes from the application server into the HANA server. As the BW features and functions are still parameterised, defined and orchestrated from within the BW code on the application server, they are still represented as striped boxes there. Actually, customers and their users do not notice a difference in usage other than better performance. Examples are: faster query processing, planning performance (PAK), and DSO activation. Frequently, these features have been implemented in HANA using specialised HANA engines (most prominently the calculation and planning engines) or libraries that go well beyond the scope of SQL. The latter are core components of the HANA platform and are accessed via proprietary, optimised protocols.
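The pushdown principle itself can be sketched generically. The toy example below uses Python's sqlite3 as a stand-in DBMS (not HANA) to contrast fetching every row to the application server and aggregating there with expressing the aggregation in SQL so it runs inside the database:

```python
import sqlite3

# In-memory stand-in database with a tiny fact table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EMEA", 100.0), ("EMEA", 250.0), ("APJ", 75.0)])

# App-server style: fetch all rows, aggregate in application code.
rows = conn.execute("SELECT region, amount FROM sales").fetchall()
totals = {}
for region, amount in rows:
    totals[region] = totals.get(region, 0.0) + amount

# Pushed down: the DBMS does the aggregation, only results travel.
pushed = dict(conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region"))

assert totals == pushed
print(sorted(pushed.items()))  # → [('APJ', 75.0), ('EMEA', 350.0)]
```

With millions of rows, the second form moves two result rows over the wire instead of the whole table, which is the essence of what BW gains by delegating work to the HANA server.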

BW 7.4 on HANA

The next step in the evolution of BW has been the 7.4 release on HANA. Beyond additional functions being pushed down into HANA, there have been a number of features (pictured as dark blue boxes in fig. 1) that extend the classic BW scope and allow users to do things that were not possible before. Examples are the HANA analysis process (e.g. using PAL or R) and the reworked modeling environment with new Eclipse-based UIs that smoothly integrate with (native) HANA modeling UIs and concepts, also leading to a reduced set of infoprovider types necessary to create the data warehouse. Especially the latter have triggered comments like

  • "This is not BW."
  • "Unbelievable but BW has been completely renewed."
  • "7.4 doesn't do justice to the product! You should have given it a different name!"

It is especially those dark blue boxes that surprise many, both inside and outside SAP. They are the essence that makes dual approaches, like within the HANA EDW, possible, which, in turn, leads to a simplified environment for the customer.


This blog has been cross-published here. You can follow me on Twitter via @tfxz.

The all new SAP HANA Marketplace was unveiled recently with its consumer-grade online shopping experience and capabilities. The new site features intuitive navigation, search & filter, product comparison capabilities as well as a responsive website for mobile devices.


Another key feature unveiled in this version allows product owners at SAP and its partner companies, including the innovative SAP HANA startups, to publish their products and manage their own storefronts. As the HANA Marketplace gains momentum, we are receiving an increasing number of inquiries from product teams about listing their products on the SAP HANA Marketplace and seeking guidance on providing the best online experience for customers.


Below is a list of the top 5 best practices that the HANA Marketplace team has put together to help product owners effectively reach new sets of customers globally:

  1. Offer Free trials
    • Free trials are the most popular offers on the HANA Marketplace. It is important for prospective customers to test drive the solution before making a financial commitment. Example – SAP Customer Engagement Intelligence (CEI) at  http://marketplace.saphana.com/p/3525
  2. Offer Subscription pricing
    • Ideally, offer month-to-month pricing and the ability to cancel at any time, and think about a price point within the range of a credit card limit. Example – Decisions by Liquid Analytics at http://marketplace.saphana.com/p/3546
  3. Offer a Starter Package 
    • Prospective customers usually want to start small and ‘kick the tires’ before making a large financial commitment. Think about a price point which will require minimal approvals and reduces barriers to decision making. Example – SAP Sales and Operations Planning (S&OP) Starter Edition at  http://marketplace.saphana.com/p/3555
  4. Make Comprehensive Collateral Available Online
    • Include datasheets, brochures, FAQs, demo videos, customer testimonials… anything to make customers more knowledgeable about your product via e-learning, with the aim of eliminating classroom training and support calls. Example - SAP HANA App Services offering of the SAP HANA Cloud Platform at http://marketplace.saphana.com/p/1808
  5. Offer Immediate Purchase and Deployment
    • Customers want instant gratification today. Have a “Buy Now” call-to-action rather than “Contact Us”. Also, make the software available for immediate download and provision the hardware immediately in the cloud. The products should ideally have wizard-based configuration and in-app support capabilities. Example – SAP HANA Cloud Platform Starter Edition http://marketplace.saphana.com/p/1805


Related Links:

New technologies and enterprise solutions for marketing seem to emerge every day.  How can you find the right one? This infographic will help.  Put on your Sherlock hat and start searching!


STEP 1 Observe Your Surroundings

One of the best ways to find the right IT solution is to ask your users what they are looking for. In the case of marketing, simply ask your marketing team what objectives they are trying to accomplish with the new technology.


Chances are they will tell you that identifying the target audience and engaging deeply with customers are two of their most pressing objectives. Why? Because the rules of engagement have changed – customers today are more in charge than ever. They are digitally connected, socially networked and better informed. In fact, according to the Customer Executive Board, as many as 57% of B2B customers now make their buying decision before you are even aware of the opportunity[1].


STEP 2  Gather Your Facts

How can you help the marketing team engage with prospects before they make their buying decision? You know that potential customers reveal their needs and wants through website interactions, social media conversations, prior purchases, etc. – in other words, by leaving a trail of BIG DATA.

Your marketing team will need a solution that can analyze the data in real time and tell them what customers want so that marketing can:

  • Segment and target the right people at the right time with the right offers
  • Invest the right resources in the right customers, products, channels
  • Enhance customer engagement and loyalty.


STEP 3 Draw Your Conclusion.

Test SAP Customer Engagement Intelligence powered by SAP HANA today. It brings together the data to deliver the insights your sales and marketing teams need for compelling engagements with customers and prospects. Tools for customer analytics, contact intelligence, and customer segmentation and targeting can help these professionals execute your business strategies more effectively.  Start your 3-day free trial today.





[1] Source – Customer Executive Board @ http://www.executiveboard.com/exbd-resources/content/digital-evolution/index.html

Lots of people think of those questions in black/white, good/bad, legacy/new terms. As in many, many situations in software and in real life, it is usually a trade-off decision. This blog attempts to clarify the differences between the two approaches. Ideally, this leads to a factual rather than ideological and biased discussion of the topic. Furthermore, it should become apparent that the HANA EDW approach allows you to work with both approaches within the same (HANA) system. So there is no need to have - for example - one data warehouse system based on BW (managed approach) and a second, purely SQL-based (freestyle approach) data warehouse system on an RDBMS.

Fig. 1: Two different approaches: managed vs freestyle data warehousing.

Fig. 1 pictures the two different approaches:

  • On the left-hand side, jigsaw pieces represent various tools that are harmonised, meaning that their concepts / objects "know" each other and share the same lifecycle. For instance, when a data source (as an example of such an object) gets changed, then the related data transformations (further examples of such objects), cubes, views, queries, ... (i.e. other objects) can be automatically identified and, in the best case, even automatically adjusted, or at least brought to the attention of an administrator so that the necessary adjustments can be manually triggered.
    Another example is BW's request concept which manages consistency not only within the data warehousing (data management) layers but is also used in the analysis layer for consistent reporting. It's a concept that spans many tools and processing units within BW.
    The individual tools used to build and maintain such a data warehouse need to understand and be aware of each other. They work off the same repository, which allows one consistent view of the metadata that constitutes the organisation of the data warehouse. When one object is changed, all related objects can be easily identified and adjusted, and those changes can be consistently bundled, e.g. to apply them in a production system.
    Due to these dependencies, the tools are integrated within one toolset. Therefore, they cannot be replaced individually by best-of-breed tools. That removes some flexibility but brings the benefit of integration. SAP BW is an example of such an integrated toolset.
  • On the right-hand side, you see similar jigsaw pieces. On purpose, they are more individually shaped and do not fit into each other's slots from the very beginning. This represents the situation when a data warehouse is built on a naked RDBMS using best-of-breed tools, potentially from various vendors. Each tool only assumes the presence of the RDBMS and its capability to process (more or less standard) SQL. Each tool typically comes with its own world of concepts and objects that are then stored in a repository managed by that tool. Obviously, various tools are needed to build up a data warehouse: an ETL tool for tapping into source systems, a programming environment (e.g. for stored procedures that are used to manage data flows and transformations), a tool that monitors data movements, a data modeling tool to build up analytic scenarios, etc. Technically, many of those tools simply generate SQL or related code. Frequently, that generated code can be manually adjusted and optimized, which provides a lot of freedom. Per se, the tools are not aware of each other. Thus their underlying objects are independent, meaning they have independent lifecycles. Changes in one tool need to make it "somehow" to the related objects in the other tools. This "somehow" can be managed by writing code that connects the individual tools or by using a tool that spans the repositories of the individual tools; SAP's Information Steward is an instance of that. This is pictured as "glue" in fig. 1.
    The freedom to more easily pick a tool of your own choice and the options to manually intercept and manipulate SQL provide a lot of flexibility and room to optimise. On the other hand, it pushes a lot more responsibility to the designers or administrators of the data warehouse. It also adds the task of integrating the tools "somehow". Beware that this is an additional task that adds revenue for an implementation partner.
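The "harmonised repository" idea on the left-hand side can be sketched as a toy dependency graph: a shared metadata repository from which every object affected by a change can be identified automatically (the object names and repository structure below are invented for illustration):

```python
# Hypothetical metadata repository: each object lists the objects it feeds.
repository = {
    "datasource.orders":        ["transform.cleanse_orders"],
    "transform.cleanse_orders": ["cube.sales"],
    "cube.sales":               ["query.revenue_by_region"],
    "query.revenue_by_region":  [],
}

def impacted_by(obj):
    """All downstream objects that must be checked when `obj` changes."""
    seen, stack = [], [obj]
    while stack:
        for dep in repository[stack.pop()]:
            if dep not in seen:
                seen.append(dep)
                stack.append(dep)
    return seen

print(impacted_by("datasource.orders"))
# → ['transform.cleanse_orders', 'cube.sales', 'query.revenue_by_region']
```

In the freestyle approach, each tool keeps its own repository, so this walk has to be stitched together "somehow" across tool boundaries – the glue in fig. 1.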

SAP's Product Portfolio and the two Approaches

As can be seen from this discussion, each approach has its merits; there is no superior approach, but each one emphasises certain aspects. This is why SAP offers tooling for both approaches. This is perceived as a redundancy in SAP's portfolio only if the existence and the merits of the two approaches are not understood.

I frequently get the question whether BW will be rebuilt on SAP HANA. Actually, it is philosophical to a certain extent as BW evolves: e.g. today's BW 7.4 on HANA has less ABAP code than BW 7.3 on HANA. This can be perceived as BW being rebuilt on HANA if you wish. However, what does not make sense [for SAP] is to kill the approach on the right-hand side of fig. 1 by integrating the "freestyle tools" into a second, highly integrated toolset which mimics the BW approach, because that would simply remove the flexibility and freedom that the right-hand approach offers. Fig. 2 pictures this.

Fig. 2: It does not make sense to implement two toolsets for the exact same approach.

What is true, however, is that HANA will see a number of Data Warehousing Services arise over time. To some extent they already exist, as they emerged when BW was brought onto HANA. They can be generalised to be usable in a generic, non-BW case. Nearline storage (NLS), extended storage, HANA-based DataStore objects, a data distribution tool, etc. are all excellent examples of such services that can be used by BW but also by a freestyle approach.

Finally, I would like to stress - again - the advantages of the HANA EDW approach, which allows you to combine the two approaches pictured in figure 1 arbitrarily. You can watch examples (demos) of this here or here.


This blog has been cross-published here. You can follow me on Twitter via @tfxz.

We recently conducted a customer webinar focused on how our customers and partners are fundamentally changing their customer interactions using SAP HANA in the high-tech industry. Customers, thought leaders and partners such as eBay, Cisco, and Adobe presented their use cases and success stories on implementing SAP HANA.


Below are some of the highlights from the webinar from my live twitter feed @sajal_agarwal:

1. First #Cisco presented on how @saphana helps its business by providing Dynamic Insights for Sales Executives (DISE)

2. #cisco @saphana @SAPhightech high-level architecture provides dynamic insights for sales

3. Wonderful #testimonials from #cisco customers implementing DISE with @saphana @SAPhightech

4. What happens in an internet minute? @ebay implements @saphana for early signal detection

5. Foundation, consolidation and innovation: 3 steps for #ebay early signal detection using @SAPInMemory

6. Learn how @saphana helps #adobe transition to a more personalized and relevant user experience

7. #adobe & its customers achieve tremendous benefits implementing @SAPhightech @SAPInMemory

8. Join us for the customer roadshow on August 19th with @SAPhightech @SAPInMemory #ebay #adobe #Cisco #idc #intel. Agenda below:
The link to the webinar replay is here.

I look forward to a wonderful in-person event in Palo Alto and other remote locations. I will share the details and invitation with you soon!


Sajal Agarwal

Twitter: @sajal_agarwal

Weekends, as I noted in an earlier blog, are great for catching up on reading, but so apparently are waiting rooms – something I recently discovered when I came across an interesting article in The Economist in my dentist’s waiting room. By the way, weekends are also great for watching sports, like a FIFA World Cup final, which Germany finally won after a very good game and a valiant effort by Argentina. Working for SAP, I found it particularly satisfying given that SAP had a role in helping the German team prepare. A while back I wrote a blog, “Football (Soccer) Meets Moneyball – Big Data Enables Winning Enterprises”, in which I commented on a BBC news article discussing how Big Data can be used to shape performance and success, and how SAP is helping a premier football club leverage Big Data. (read it here) SAP is also successfully doing this at the national level, as evidenced by the success of the German national team. In her July 15 blog “Big Data & Spatial Analytics Help Germany Score the World Cup”, Marie Goodell does an excellent job of detailing this. (read it here)


But, I digress. I broke a tooth that required a crown and thus a few visits to my dentist. Ensconced in the waiting room, I came across a May 24, 2014 article in The Economist, “Digital Disruption on the Farm”, which talks about how Big Data is helping the world progress even in the most traditional areas, such as farming, by innovating and deriving value. (read it here) In brief, Monsanto has developed a prescriptive planting system called FieldScripts that provides optimization information on what seeds to plant, where, and how to cultivate each acre. Basically, predictive analytics for farming. And to use predictive analytics – well, you need data – lots of data: from weather history and patterns, to soil types and conditions, to seed and other performance data; some publicly available, some proprietary. So the farmer benefits, getting evidence-based advice on what to optimally plant and where, given the prevailing conditions. Monsanto benefits, as they not only sell the service but most likely the seeds as well, and gain insight into demand and inventory, likely seeing increased profits and strengthened brand loyalty. However, these are not the only ones who can benefit from Big Data.


There are opportunities for everyone in the chain to benefit from Big Data: from suppliers like Monsanto and the farmers, to those that make farm equipment, who can use Big Data to predict what kind of equipment will be needed for the season and how to equip and deploy it; to those that make fertilizer, irrigation and other products used on the farm to produce the crops, who can use Big Data and predictive analysis to see how much of what type of product and material will be in demand for the coming season; down to transportation and logistics – how much and what kind of transportation equipment will be needed, from where and to where. Rail and trucking companies looking to optimize loads and routes can benefit from Big Data. In short, everyone in the chain needs to use Big Data – or they will be left behind as someone else grabs the competitive advantage.


One other example of Big Data in agriculture – or agronomics – is John Deere, which wanted to go beyond being a manufacturer and become a Big Data company. They too are looking at innovations in technology and Big Data to help increase farm yield as well as diversify and drive revenue. There is a really good June 5, 2014 article in V3 UK here. In short, John Deere wanted to help its customers enhance productivity, improve crop yield and reduce costs, as well as help dealers provide value-added services. And John Deere itself wanted to increase its own profitability and drive brand loyalty. They produced a service offering for a growing market by using telematics data from their farm equipment along with agronomic data. They now have something that can give them insight into trends and needs, driving smart inventory and production, and they can minimize downtime of machinery in the field by offering preventative maintenance. All the objectives for all three groups above are met by using Big Data. Once again, the entire ecosystem around this, from parts manufacturers to transportation, can and must leverage Big Data to stay competitive and succeed. Big Data is a new reality and a necessity for everyone – at least those that do not want to be left behind – even in the oldest and most traditional of businesses.


Take care. And remember to brush your teeth.


    • Learn how the SAP HANA Platform for Big Data can help you take a simplified approach to making Big Data work for you in deriving true value from Big Data for your business at www.sap.com/bigdata
    • See how you can leverage data science services from SAP to uncover new signals hidden in your data and drive performance and results for your business at: http://www.sapbigdata.com/services

In discussions with customers about the SAP HANA platform, breakthrough innovation and business value often take center stage. From an IT perspective, cost and manageability are also frequently discussed. The good news is: first publications now show that implementing and running SAP HANA can provide cost savings even to the IT organization.

Forrester Consulting conducted a cost analysis based on its Total Economic Impact (TEI) methodology to show how implementing SAP HANA could help organizations cut both their IT expenditure and their business overheads. The study provides examples of how much it would cost to implement SAP HANA and how much an organization could save by doing so. Forrester Consulting found that organizations opting to implement SAP HANA could expect to see their software costs fall by more than 70 percent, their hardware costs by 15 percent, and their administration and development costs by 20 percent. Forrester interviewed several customers with multiple years of experience using the SAP HANA platform and conducted a survey of 25 additional SAP HANA platform users. Drawing on the experiences of these customers, as well as its own expertise in this area, Forrester constructed a financial model to represent the potential savings associated with using SAP HANA as a replacement for a traditional database platform in several different ways. In this financial analysis, Forrester looked into the impact of using SAP HANA in conjunction with SAP Business Warehouse (BW), SAP Enterprise Resource Planning (ERP), and a custom-developed application.

The cost savings were realized through reduced hardware and lower software costs, spanning six areas of simplification that we have seen result from how SAP brings in-memory technology into business software: a simpler data footprint, a simpler system landscape, simpler setup procedures for SAP HANA, more efficient application development processes, increased administrator productivity from simpler data processing and operations, and a simpler user experience.

To learn more about how to unlock business value and deliver breakthrough innovation while simplifying your IT, and to download the Forrester study and its executive summary, visit www.sap.com/innovation-and-me.

Oracle finally released the in-memory option for the Oracle 12c database this week. With it has already come a good quantity of anti-SAP HANA marketing, with the usual jousting you would expect between enterprise software vendors. Oracle released a comparison sheet between Oracle and SAP HANA, which is the usual mix of propaganda and marketing.


This is compounded because SAP has reseller agreements with Oracle and IBM, so they have to be very careful how they position SAP HANA's capabilities. I don't work for SAP, so I don't have to be careful. Furthermore, I didn't like the Oracle comparison because it's very database-centric, and businesses are application and process-centric. Here's an alternative view on the world.





| Scenario | SAP HANA | Oracle 12c Database In-Memory | IBM DB2 with BLU |
| --- | --- | --- | --- |
| SAP Business Suite | Optimized for SAP HANA - 50% faster on average in response times. | Not supported | No support for BLU acceleration |
| SAP "S" Suite | Simplified Financials with SAP HANA - 50% less footprint including no totals or separate ledgers. Remainder of SAP Business Suite to follow. | Not supported | Not supported |
| Fiori | Full support for free Fiori business applications including fact sheets and search | No fact sheet support; search possible with HANA sidecar database | No fact sheet support; search possible with HANA sidecar database |
| SAP HANA Live | Optimized for SAP HANA, providing real-time operational reports for the Business Suite via virtual data models - no EDW required. | Supported with a SAP HANA sidecar database | Supported with a SAP HANA sidecar database |
| SAP BW (basic) | Full support including full data model acceleration for all business areas including Inventory, complex calculation pushdown (exception aggregation), data load acceleration. | Not supported | Limited support for BW InfoCube storage and acceleration |
| SAP BW (advanced) | BW 7.4 is HANA-optimized including simplified LSA++ model, industry solutions e.g. Retail Point of Sale. Support for HybridProviders, CompositeProviders, integration with HANA Information Views. | Not supported | No advanced features |
| SAP BW Virtual Modeling | Complete virtual modeling capability in BW. | Not supported | Not supported |
| SAP BPC | Support for BPC 10.1 Unified Model (based on HANA PAK). | Support only for BPC 10.1 Classic Model | Support only for BPC 10.1 Classic Model |
| Code Acceleration | Substantial pushdown into the database for complex procedures using AMDP | Not supported | Not supported |
| Data Virtualization | Support for Smart Data Access data virtualization for Archiving/NLS | Not supported | Not supported |
| Lumira Server | Support for the Lumira Server analytics platform | Not supported | Not supported |
| In-Memory Platform | HANA is an end-to-end cloud and on-premise platform with support for database, modeling, graph, predictive, spatial, text, search, integration and application services. | Many separate components required for similar functionality | Many separate components required for similar functionality |


Comparison Notes


Note that this is a comparison which is deliberately SAP application-centric. What's important to note is that "not supported" also means, in many cases, "not possible", in my opinion. For example, the "S" Suite of Simplified Applications is only possible because totals and ledgers can be calculated on the fly. The design of Oracle's and IBM's solutions means that this simply isn't possible.
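The "totals on the fly" point can be illustrated with a toy example: instead of maintaining a separate totals table that must be updated with every posting, an in-memory columnar engine simply aggregates the line items at query time. A minimal sketch, with invented account names and plain Python standing in for the database:

```python
# Toy illustration of "no totals tables": rather than storing a
# pre-computed totals record alongside every posting, aggregate the
# line items on the fly at read time. All data here is hypothetical.

from collections import defaultdict

line_items = [                 # invented ledger postings
    ("cash", 1200), ("cash", -300),
    ("receivables", 500), ("receivables", 250),
]

def totals_on_the_fly(items):
    """Aggregate account balances at query time -- no stored totals."""
    balances = defaultdict(int)
    for account, amount in items:
        balances[account] += amount
    return dict(balances)

print(totals_on_the_fly(line_items))
# → {'cash': 900, 'receivables': 750}
```

The design trade-off is that writes become cheap and always consistent (there is no redundant totals record to keep in sync), at the cost of doing the aggregation at read time, which is exactly what a columnar in-memory engine is fast at.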


In the projects I'm working on, we find that the sheer number-crunching capability of HANA means we can solve problems we couldn't solve in DB2 or Oracle. For instance, we put together a real-time management dashboard for one company, based on 34 separate complex questions displayed in a single dashboard. Using the HANA platform, we could get an end-to-end page load on an iPad in under 3 seconds based on real-time data. There's no way this could have been built on DB2 or Oracle - there, the same load took 30-40 seconds, which is not an acceptable response time for a mobile app.
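One design detail behind a dashboard like that is worth spelling out: when the 34 questions are independent, the app should dispatch them concurrently, so the page load is bounded by the slowest query rather than the sum of all of them. The sketch below is hypothetical - `run_query` is a stand-in with simulated latency where a real app would call a database client:

```python
# Dispatch pattern for a many-query dashboard: run independent queries
# concurrently so total latency ~ slowest query, not the sum of all.
# run_query() is a placeholder; a real app would call a HANA client here.

import time
from concurrent.futures import ThreadPoolExecutor

def run_query(query_id):
    """Placeholder for one dashboard query (simulated ~50 ms latency)."""
    time.sleep(0.05)
    return (query_id, "result")

queries = range(34)           # the dashboard in the text used 34 queries

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=34) as pool:
    results = list(pool.map(run_query, queries))
elapsed = time.perf_counter() - start

# elapsed is close to one query's latency, not 34 * 0.05 s
print(f"{len(results)} queries in {elapsed:.2f}s")
```

Of course, concurrency only hides latency between queries; each individual query still has to be fast enough on its own, which is where the in-memory engine does the heavy lifting.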


The other thing worth noting is SAP's development direction. I recently spoke to a friend at SAP who put it nicely: "We spent the last 10 years optimizing our products for Oracle and IBM. Now we are spending some time optimizing for our database". Some customers have commented to me that they feel SAP is investing overly heavily in R&D for HANA at the expense of other databases, but I don't believe that is a fair assessment: HANA enables capabilities that other databases do not, and SAP is writing software to take advantage of these capabilities. If other databases could do the same things, SAP would enable them (and the software has in most cases been written to take this into account), but they can't do the things that HANA can do.


Oracle attacks HANA specifically on the topic of enterprise-readiness, but that's not what we see in the field. In fact, HANA is so much easier to configure for High Availability that almost all of my customers use HA/DR scenarios. By contrast, I have almost never come across Oracle RAC, because it's notoriously difficult to set up. And at a recent customer, we had zero issues during user testing on a complex deployment - not just zero open issues at the end of testing, but not a single issue during user testing.


Some customers suggest that Oracle and IBM will catch up - but the reverse seems to be true so far. When I last compared HANA and DB2 a year ago, SAP were around 2 years of development ahead of IBM, and that gap seems to have widened in the last year: HANA has become much more mature, while IBM haven't made any changes since the release of DB2 10.5. We don't see any SAP-on-BLU deployments at all, or any live customer stories.


Final Words


This comparison will no doubt be controversial, but it's what we see in the field. I'd welcome feedback, or additional scenarios that aren't in this list. My conclusion is that SAP are putting all their development efforts into optimizing applications on SAP HANA, and as a customer, you should take advantage of this.


Disclosure: I don't work for SAP and received no compensation for this piece, but I have consulted with SAP in the past on various areas of SAP HANA. I'm also an SAP Services Partner with a focus on SAP HANA, and I have 16 years of experience with database technologies including Oracle, Microsoft, IBM and others.
