
On May 23, 2012, a group of exceptionally bright and successful software builders and data analysts paid a visit to SAP Vancouver. They came from exciting Vancouver-based startups for the SAP Startup Microforum – Vancouver. The topic on everyone's mind was big data challenges, and specifically how HANA can help.

Witness the HANA-ready innovation powerhouse Vancouver has become by enjoying the following video:

In Vancouver


Godfrey Hobbs hosting the Panel in Vancouver

Over the years, a SAP developer like me may make hundreds of code check-ins. Thousands and thousands of hours go by as we focus on delivering the best possible products. However, as we near our three or four hundredth check-in, we start to ask questions; big questions: Does this bug fix or new feature really matter? Are SAP Business Analytics products still relevant? These startup builders provided a common and enthusiastic answer: "Hell yes!" SAP's Business Analytics solution with HANA is relevant to companies of all sizes. Handling data correctly is crucial for business success. All of these startups face similar challenges when it comes to wrangling big data sets. HANA and SAP Business Analytics are exactly what the "startup doctor" ordered.


The Players

Each of the following growing Vancouver startups is playing to win:


HootSuite is "the" social media dashboard, with more than four million users and a strong roster of very satisfied large corporate customers (McDonald's, PepsiCo, P&G, Sony Music, and more).


Geordie Henderson, HootSuite's Director, API & Integrations



East Side Games Studio is a profitable data-driven social gaming company with an all-star suite of Facebook and iPhone games.


Michael Nathe from East Side Games Studio



Mobify is the leading mobile web platform for e-commerce and publishing (Starbucks, The New Yorker, Wired).


John Boxall, co-founder of Mobify


STAT offers enterprise-level SEO analytics and deals with huge amounts of data in real time.


Rob Bucci, founder of STAT Search Analytics


Playing to Win

For startups, playing to win means playing smarter. Our players range in size from the five friendly folks at STAT to HootSuite with 150 people (and a $500 million valuation). These companies have made their mark by staying focused on their customers via analytics. Each is relentlessly dedicated to its business's big data.


Is HANA relevant to a five-person company?

Rob Bucci, founder of STAT Search Analytics, sees HANA as absolutely relevant. HANA would allow him to instantly expand his value proposition to existing customers. Moreover, the fifty-person East Side Games Studio (ESG) has a growing team of data analysts. ESG's Michael Nathe summed things up well: he sees a cloud-based HANA solution (at the right cost) as an ideal long-term strategy. Their current strategy (AWS-based databases) is unmanageable; it is held together with stopgaps and can only provide three months' worth of data. Michael is aware that they are missing critical insights that would require at least twelve months of data. Their system struggles to keep up with the daily 10 GB flow of telemetry data. The time of ESG's data analysts and DBAs is consumed by juggling archives or making blind guesses at which queries may produce insights. Ultimately, the final step for Michael would be to 'put himself out of a job' and automate the insight-to-action virtuous cycle: HANA would modify gameplay automatically based on analysis of player behavior.

Build, Measure, Learn.

Like East Side Games, Mobify and HootSuite live the "Lean Startup" mantra: Build, Measure, Learn (then repeat). Due to the scale of the data, the build-measure-learn cycle can often only be completed on a weekly basis. Each of these companies sees a future with the cycle calculated in real time, occurring several times a day. Mistakes will be perceived instantly and opportunities seized on the fly.

Find out more: http://spr.ly/SAPStartups


A special thanks to our guests; to Mark Robins, Rachel Floyd and the rest of the EBC/IT staff; to our HANA experts: Patrice Le Bihan and Alex MacAulay; to Geoff Peters for filming; and to Kaustav Mitra for his involvement and encouragement.

SAPPHIRE NOW from Orlando 2012 was a great event, and this panel discussion was amazing. More than 200 people joined the Big Data customer panel discussion on the first day of SAPPHIRE NOW. The panel consisted of the University of Kentucky, Medtronic, Deloitte, and McKesson. Chris Curtis, the head of SAP HANA services, hosted this customer panel discussion. The customers discussed how they use SAP HANA to solve their big data challenges and shared their experiences with the audience. From our customers' experience, we found that there is no single big data problem common across customers, but rather a different use case for each company. As you can see in the picture below, the session was packed, with standing room only.



The whole conversation revolved around the customers' business challenges and pain points, and all of them had significant challenges with big data. However, each customer faces its own unique problems, such as improving student retention rates, managing customer complaints, or processing huge amounts of data for future changes to healthcare law. During the Q&A session, the audience engaged and asked our customers many questions. Most of the questions concerned implementation details and the experience with SAP HANA, such as how to deal with maintenance issues once SAP HANA is implemented.


You can watch a replay of this session and form your own insights and opinions on the customers' experience with SAP HANA.

Let me know what you think and how you experienced SAP HANA at SAPPHIRE NOW from Orlando.


Every sales manager works towards one primary goal – making their numbers at the end of each quarter (and hoping it was a good one!). What they would like to do is track all the opportunities their sales team is working on, see opportunities as they are entered in the system, and follow them through each sales phase. And now sales managers can! (Check out this blog for an overview of Sales Pipeline Analysis powered by HANA.)


Sales Pipeline Analysis with HANA is a rapid-deployment solution that delivers reporting on three key sales areas: opportunities, quotations, and contracts. It includes all the required HANA content and all of the steps necessary to configure and implement it. The software component is part of the HANA license, i.e., customers do not need to buy it separately. The services have a separate cost and ensure that the solution is enabled in your systems within only four to five weeks!

The only tool needed for analysis is SAP Business Explorer connected to the HANA appliance – the most intuitive tool available at the fingertips of a sales executive on the go! The analysis can be done on a computer or on a mobile device such as the iPad. Sales managers can see the total number of deals for a quarter or the net value of a single opportunity entered into the system an hour ago. They can follow up on that single opportunity to see whether a quotation was created for it, along with many other metrics for further exploration:



Now an executive is empowered with knowledge of the status of their pipeline at any instant, so they know which opportunities are in the final phase and need to be pushed to close before the end of the quarter. No more waiting for a report delivered with week-old data.

As I sank back with a sigh into my seat on the flight back from SAPPHIRE NOW 2012 (Orlando, Florida), a fellow passenger looked up at me – I felt I had to explain that it had been a high-energy week, and I was a tad bit drained. He, interestingly, asked me what would stand out in my mind if I distilled all my interactions and activity from earlier in the week. That got me thinking. Then, it was clear.


More than the inspiring keynote addresses from SAP's leaders and the introduction of new capabilities from SAP, I found an interesting recurring theme amongst attendees – customers and partners alike. I found that there is now an expectant air about them as they have started to look at different ways in which SAP HANA can help them. Somewhere deep within all the buzz surrounding SAP HANA, there is a growing realization amongst those in the ecosystem that the muscle and power of SAP HANA is a strong weapon for achieving unprecedented advantages – especially as enterprises seek to drive forward by looking at the road ahead rather than in the rearview mirror. I was particularly intrigued that two of my conversations were coincidentally based on quite similar business process needs – that of gaining predictive capabilities.


First, I had a discussion with a start-up company called Basis Technologies International (BTI). They were showcasing an app called Predictive Customer Insight (PCI), powered by SAP HANA. In their words, PCI is designed to enable "organizations to develop a deep understanding of their customers' behavior based on data held within millions of individual data transactions.  PCI identifies at-risk customers by predicting how they may react in the future based on the way in which similar customers acted historically according to patterns in data transactions.  For example, it is possible to isolate the transactional events - such as inaccurate billing or repeat customer service calls - that often led to customers leaving.  This data is then used by PCI (a SAP HANA based application) to 'fingerprint' customers so that managers can divert attention to retaining their highest risk customers in order to improve business profitability." The key business focus here is to grant enterprises, such as utilities, the ability to manage customer satisfaction dynamically, thus making a direct financial impact. This is accomplished by using historical data (SAP and non-SAP) to help predict what customers might do in the future; PCI analyzes millions of actual interaction events across customers and identifies patterns of behavior. These patterns are then used to segment the existing customer base appropriately, and then, looking at related current data in real time, customer issues/problems are identified and addressed.
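To make the "fingerprinting" idea concrete, here is a deliberately tiny sketch in Python. The event names, data, and the simple count-based scoring rule are all hypothetical illustrations of the concept – BTI's actual PCI model is certainly far richer – but the shape is the same: learn which event patterns preceded churn historically, then flag current customers whose own event history resembles that profile.

```python
from collections import Counter

# Illustrative event log of (customer_id, event_type) pairs.
# Event names and the scoring rule below are hypothetical.
history = [
    ("c1", "billing_error"), ("c1", "service_call"), ("c1", "service_call"),
    ("c2", "payment"), ("c2", "payment"),
    ("c3", "billing_error"), ("c3", "service_call"),
]
churned = {"c1"}  # customers known to have left

RISK_EVENTS = {"billing_error", "service_call"}

def fingerprint(customer_id):
    """Count the risk-relevant events for one customer."""
    return Counter(e for cid, e in history
                   if cid == customer_id and e in RISK_EVENTS)

# Average number of risk events among customers who actually churned.
churn_avg = sum(sum(fingerprint(c).values()) for c in churned) / len(churned)

def at_risk(customer_id, threshold=0.5):
    """Flag a customer whose fingerprint approaches the churn profile."""
    return sum(fingerprint(customer_id).values()) >= threshold * churn_avg

print([c for c in ("c2", "c3") if at_risk(c)])  # ['c3']
```

The point of running this at HANA scale rather than in a script is that the real version scans millions of interaction events per customer, which is exactly the workload the next paragraph addresses.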


The key technological process needed here is the ability to analyze millions of historic customer interaction events (so preventive action might be taken in a timely fashion) – with SAP HANA this can be done without having to be concerned about how much data can be handled. This last point is significant because without SAP HANA, this data-intensive process could easily cause the operational system to be shut down for hours or even days at a time preventing normal operations (e.g., billing) – obviously, an unacceptable scenario.  As I see it, contrary to what certain products from SAP’s competitors might claim, I do not see anything else on the market that could guarantee such performance; these other products are burdened with having to limit how much data their products can handle, and often have to resort to separate steps of tuning, indexing, or data caching – all of which rob the business of a true real-time experience when dealing with large volumes of data.


Second, when I was in a conversation about an overall solution map with the CTO of a global Fortune 100 company, he proceeded to share with me his views on how SAP HANA can help in several different business scenarios. In fact, he described the use case of a project his team is currently working on; when completed, they will be able to use the power of SAP HANA to perform predictive profit and loss statements dynamically, as often as needed. Today, this is a manual (and, I believe, somewhat painful) activity where the results can never be truly real-time. The unique nature of the business for this enterprise is such that commodity prices have a direct and significant impact on the profitability of each of its units. This company has had some challenges taking static data, mashing it up with historical trends plus dynamically changing commodity prices, and then coming up with accurate predictions of what the profitability of each of its units might be at the end of a fiscal year. With SAP HANA, what used to be a fairly cumbersome, time-consuming, and static activity performed at regular intervals can now be translated into a simple, truly real-time analysis drawing on data that sits across various platforms and in various places across the organization, and beyond. The ability to use SAP HANA to power a next-generation app that does this on the fly, as often as needed, will be a tremendous boost to this company's operations. With a far more accurate prediction of profitability, the company will be able to better utilize its resources, leading to significantly greater operational efficiencies.


This is only possible because SAP HANA can address data from multiple sources (SAP and non-SAP) and is not limited to smaller data sets, unlike some other products such as Oracle Exalytics, which from all accounts seems to have a meager 1 TB limit with even less available for actual in-memory data. This is an important factor, as the process calls for processing large volumes of data from a myriad of sources. This app should be a boon for planning purposes as it will provide actionable insight, and for a company with millions at stake, this can truly be a differentiator.


It is exciting that in both these examples – one from a SAP partner and another from a customer – they are seeking to develop capabilities that provide the ability to look ahead, and not merely in the rearview mirror, as the enterprise drives forward. This is something businesses have long wished for and then given up on, because historically "real-time" in many situations was not truly real-time, and when it came to large data sets they were always challenged with having to pre-treat the data, in the process often making decisions that limited the scope of the analysis.


In fact, their excitement about SAP HANA in this context is very special vis-à-vis anything else on the market, because unlike the products seeking to position themselves as its alternatives, SAP HANA can handle large data sets (successful tests with 100 TB done recently) without any pre-aggregation, tuning, or data caching, and since it was not cobbled together with a number of other products it does not have some of the limitations that these products (e.g. Oracle Exalytics) do. In his keynote address at SAPPHIRE NOW this past week, Vishal Sikka (Head of Technology & Innovation and Member of the Executive Board, SAP AG) made it demonstrably clear that SAP HANA is designed and built to enable new solutions, and has the robustness and muscle needed to provide the agility enterprises seek. In addition, SAP press releases from last week (http://bit.ly/KESMaI and http://bit.ly/Kt6UlL) provide further evidence that the reach and impact of SAP HANA for business value continues to grow.


In the weeks ahead, I will be selectively highlighting other business use cases where SAP HANA is directly delivering business value. For a dynamic listing of SAP HANA use cases, please visit the Resources page at Experience SAP HANA.


P.S. You can follow my other blog posts at: Café Innovation Blog on the SAP Community Network (SCN)

Are you a SAP HANA developer? Have you ever asked yourself, "How could I get such an in-memory box?" You might have wanted to try it, but this "high-end" system infrastructure stopped you from moving forward.

That's why, recently at SAPPHIRE NOW from Orlando 2012, SAP announced exciting news for HANA developers: SAP HANA is available on the Amazon Web Services (AWS) platform at no cost to developers. For the first time, developers can get access to our in-memory technology through a highly scalable, low-cost infrastructure-as-a-service. Using a simple pay-as-you-go model, developers can run a HANA server on AWS by paying Amazon only a low hourly fee when they need to use it. No large capital investment is required, and availability is immediate. This solution is compelling and attractive, and it is a major step for SAP in building out a global HANA developer ecosystem. Through the AWS service, SAP HANA developers can get HANA instances with a HANA development edition license. If you are a developer, sign up through the SAP DevCenter for SAP HANA developer instances and pay only for your usage of the system – no developer license fees, no up-front investment, and no long-term commitments.

Read more about this great news in blogs by Dennis Howlett ("Good news for SAP HANA developers: free is a four letter word") and David Hull ("SAP HANA Available on AWS for Free!"), or go to the official SAP press release.


Attached is the installation guide to create a HANA instance on AWS! What are you waiting for?

Hi everyone


Well, we finally made it to the launch date, and nobody died waiting (although I got quite a few emails begging and pleading for advance copies). I recorded a little video blog to introduce the book. Here's the URL to download both the EPUB (iOS/PC/Mac) and MOBI (Kindle) versions: www.saphanabook.com. At the end, I put in the voucher code for everyone to get a free copy of the book. There will be several other voucher codes flying around the internet as we go.



Hope you enjoy the first few chapters, and remember to sign up for the mailing list to find out when new chapters are available.


Happy Learning

Advanced, flexible analytics requires the ability to interactively model data queries, analytics, and prediction operations to answer questions that have not been pre-canned in an analytics system. Asking these kinds of non-standard questions should neither require in-depth expert knowledge of database systems or database query languages, nor be blocked by long response times or by lead times due to data preprocessing (such as cube building) and the resulting dependency on an IT department. Experimenting with the data, modeling various slightly different scenarios, and quickly switching back and forth between queries or refining various stages in the processing pipeline provides a freedom and speed of work that makes an analyst's mind fly.

ANKHOR is a real-time data analytics tool that uses in-memory technology to provide almost instantaneous user feedback while graphically manipulating flexible data analytics workflows. It enables an analyst to combine various data processing or visualization operations into complex processing workflows or what-if scenarios that include several different, potentially unstructured or heterogeneous data sources, without the need to express those in complex database lingo. The dataflow and intermediate data are visible to the analyst, which removes the opaqueness of traditional query processing.

Before incorporating SAP HANA, ANKHOR excelled only with datasets comparable in size to the main memory of the analyst's machine. A second, and sometimes worse, limitation was the simple requirement of getting the data onto the analyst's system, which proved problematic in cases involving huge or continuously updating data sets. Data gravity pulls the actual evaluation towards the data storage entity, resulting in limited scalability for local in-memory analytics systems.

Using SAP HANA as a data compute engine and seamlessly integrating it with the graphical modeling language of ANKHOR gives an analyst the ability to ask questions in a flexible yet responsive way on datasets far larger than would be feasible with a traditional disk-based database or local in-memory processing. The advanced analytics capabilities of the SAP HANA in-memory database allow various predictive or analytics operations to be processed local to the data, and thus executed with maximum performance, while still giving the data analyst the flexibility to combine various operations.

Although the analytics modeling is performed locally on the ANKHOR workstation, all calculations and processing on big data sets are executed on the SAP HANA server. There is thus no need to transfer huge amounts of data from the database to the analyst's machine, enabling the analyst to work remotely, connected to the data center by only a thin pipe.
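The "keep the computation near the data" principle can be sketched in a few lines. This is not ANKHOR's or HANA's actual interface – here Python's built-in sqlite3 stands in for the remote engine – but it shows the pattern: the query ships to the database, the aggregation runs inside it, and only a tiny summary travels back over the thin pipe.

```python
import sqlite3

# sqlite3 as a stand-in for a remote compute engine (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("EMEA", 120.0), ("EMEA", 80.0), ("APJ", 50.0), ("APJ", 70.0)],
)

# The GROUP BY runs inside the engine, next to the data; only two
# summary rows cross the wire to the analyst's workstation.
summary = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(summary)  # [('APJ', 120.0), ('EMEA', 200.0)]
```

Pulling all the raw rows locally and summing them in the client would produce the same numbers, but the data transferred grows with the table size rather than with the size of the answer – which is exactly the scalability limit described above.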

The aim of the SAP HANA – ANKHOR integration was to extend the flexible ANKHOR way of modeling advanced data analytics to significantly larger data sets without sacrificing interactivity. This goal could not have been reached with any database other than SAP HANA, which offers unprecedented performance and functionality through its in-memory design and integrated predictive analytics library. ANKHOR is a participant in the SAP Startup Focus program; look for us at the HANA Test Drive area in the D&T campus at SAPPHIRE NOW.


For more information visit our website at http://www.ankhor.com.

Screenshot 1: SAP HANA Analytic Workbench in action


Screenshot 2: SAP HANA Interactive Exploration


With SAPPHIRE NOW almost upon us, I wanted to introduce a new tool we've developed to help you "visualize" social media conversations around SAP HANA. We've dubbed it a "Conversation Heat Map," and the concept is quite simple: by mashing up Twitter and Google Maps, you can view tweets in real time from people around the world. In this first iteration, you can view all tweets related to SAP HANA and toggle between geographies as well as top SAP accounts and SAP HANA partners.


Check out this link to see it in action. Would love to get your thoughts and feedback!



SAPPHIRE Orlando is just around the corner, and the event promises to offer a host of activities for attendees to learn more about BW Powered by HANA. As the lead marketing person for BW on HANA at SAP, I've been paying close attention to the growing list of opportunities across the show floor at both SAPPHIRE and ASUG that specifically focus on BW on HANA.

For anyone attending the event in person (or via the SAPPHIRE virtual platform), I’ve listed a number of key activities that I think will provide great opportunities to learn more about BW Powered by SAP HANA:

  • See BW on HANA in action at the HANA Test Drive located at the Database and Technology Campus
  • Listen to Vishal’s keynote on Wed May 16 to hear his views on the value and power of BW on HANA
  • Attend various customer led sessions to hear directly from other customers on how they’ve leveraged BW on HANA in their own IT landscape
    • Monday, May 14, 1:30-1:50pm, Analytics Campus Theatre: See How Lenovo Uses SAP HANA to Supercharge SAP NW BW
    • Monday, May 14, 3:30-5:30pm, Database and Technology Campus Theatre:  How Hilti Uses SAP HANA to Supercharge SAP NW BW
    • Tuesday, May 15, 3-4pm, Broadcast Center:  Interview with Home Trust
    • Tuesday, May 15, 4-4:30pm, Database & Tech Campus Theatre: BW HANA Customer Panel (featuring Eby-Brown, USHA, SCE, and Colgate)

Other opportunities include:

    •     Monday, May 14, 11:00-11:15am, Analytics Campus Demo Theatre:  Live Demo of BW on HANA
    •     Monday, May 14, 3-4pm, Microforum (Analytics Campus):  Explore the role of SAP NW BW
    •     Tuesday, May 15, 11am-12pm, Microforum (D&T Campus):  Turn your BW into a real-time EDW
    •     Over 5 ASUG educational sessions focused on BW on HANA led by SAP Experts or SAP Partners


As you can see, we have a great lineup of activities at SAPPHIRE to help you learn more about the value of BW Powered by HANA.

Hope to see you there in person or virtually!

In my last blog, What Does SAP Deliver on Top of SAP HANA, I referred to the upcoming SAPPHIRE NOW and ASUG Annual Conference taking place next week – May 14-16, 2012 – in Orlando. This event combines the world's premier business technology conference with the largest SAP user conferences, and it will be a fantastic opportunity not only to discover innovative solutions powered by SAP HANA and network with your peers, but also to learn how best-run businesses are leveraging the power of SAP HANA to derive new business value.

SAP HANA was actually announced at the same event, close to one year ago. So, what is the customer situation today?

As I was preparing SAPPHIRE NOW with customers, I was extremely enthusiastic about listening to these new stories from different businesses, covering multiple lines of business and representing high value use-cases. That’s why I thought I would share with you a preview of these new real-life HANA use-cases that you will discover at the event.

Kraft Foods: Going Beyond Real-Time Analytics

The world's second-largest food company clearly identified SAP HANA as a major innovation, not only for more powerful analytics but also as a true data management platform providing the best information support to the business by bringing together the analytical and transactional worlds. "We see SAP HANA not only being used for real-time analytics but also as a comprehensive and scalable BI platform that has the opportunity to transform the way we process and consume information at Kraft Foods. While our business is excited about new opportunities, the IT benefits alone will have a dramatic impact on the delivery and support of our core information needs along with the creation of new analytical services," said Kelli Such, Director of Global Business Intelligence at Kraft Foods.

Automotive Resources International (ARI): Growing The Business

Automotive Resources International (ARI) rolls out cars around the world. The company provides fleet leasing and management services through locations in the US, Canada, Mexico, Puerto Rico, and Europe. ARI considers the SAP HANA platform as a key vehicle of their growth strategy to fully exploit the potential of their big data and create new value across all lines of business. Steve Haindl, Chief Information Officer at ARI, says that the processes involved in vehicle fleet management both consumes and produces an ocean of data. In the last five years, he says, the impact of the data has grown dramatically. “We are taking data from a myriad of sources, from parts dealers, firms called upfitters that enhance vehicles, repair shops, gas stations, and so on,” said Haindl. “SAP HANA helps us combine all of that into a form that supports decisions, allows for exploration and diagnosis, helps make predictions, and provides the foundation for new products.”

John Deere: Real-Time Project Management Reporting

John Deere also has a fascinating, highly innovative use-case with SAP HANA. This leading global equipment manufacturer bolsters PLM and Collaboration Folders (cFolders) applications with the power of SAP HANA to make dramatic improvements across all product management processes. "It is a necessity in our global product development process to have large amounts of data available on demand to the user base. Shortened program schedules, expanding markets, and governmental regulations have forced an environment where program teams are constantly analyzing critical control and success factors to meet the needs of customers and overall program milestones. SAP HANA combined with BI will bridge the gap we have faced to provide the end-user real time data in formats that are user friendly and timely," said Derek Dyer, John Deere Director – Global SAP Services.


AmerisourceBergen: Getting Detailed Insights

AmerisourceBergen Corporation is one of the world's largest pharmaceutical services companies, serving the United States, Canada, and selected global markets with a focus on the pharmaceutical supply chain. The primary motivation of AmerisourceBergen for deploying SAP HANA was to allow detailed reporting to more precisely detect cost deviations and identify new opportunities to increase revenue. "We see the SAP HANA platform as a new way to help us generate more detailed corporate finance and customer reporting, and much faster than before," said Milt Simonds, Director of Enterprise Platform Delivery at AmerisourceBergen.


You will be able to learn more about these stories at the SAPPHIRE NOW event – whether you attend physically or remotely – by following these sessions:

Many more customers will be sharing their HANA stories at the event. I strongly recommend visiting the SAPPHIRE + ASUG Annual Conference in Orlando (May 14-16, 2012) webpage to build your own agenda for the three days (see the Agenda Builder section).

Personally, I will have the pleasure of moderating the sessions indicated above and of demoing new solutions powered by SAP HANA for real-time reporting, analysis, and planning. I hope to see you in the HANA Village!

In the meantime, and as usual, I welcome your comments and thoughts on this blog.



SAP has asked me to comment on the "HANA vs. Exalytics" controversy from an analyst's point of view.  I think it's an interesting comparison, so I'm happy to comply.  In this piece, I'll try to take you through my thinking about the two.  As I think you'll see, I don't quite toe the party line for either company.  (Please see the end for full disclosure about my relationship with SAP.)




Let's start with something that every analyst knows: Oracle does a lot of what in the retail trade are called "knockoffs."


They think this is good business, and I think they're right. The pattern is simple: when someone creates a product and proves a market, Oracle (or, for that matter, SAP) creates a quite similar product. So, when VMware (and others) prove vSphere, Oracle creates Oracle VM; when Red Hat builds a business model around Linux, Oracle creates "Oracle Linux with Unbreakable Enterprise Kernel."


We analysts know this because it's good business for us, too. The knockoffs – oh, let's be kind and call them OFPs, for "Oracle Follow-on Products" – are usually feature-compatible with the originals, but have something, some edge, which (Oracle claims) makes them better.


People get confused by these claims, and sometimes, when they get confused, they call us. Like any analyst, I've gotten some of these calls, and I've looked at a couple of the OFPs in detail. Going in, I usually expect that Oracle is offering pretty much what you'd find at Penneys or Target: an acceptable substitute for people who don't want to or can't pay a premium, who want to limit the number of vendors they work with, or who aren't placing great demands on the product.


I think this because it's what one would expect in any market. After all, if you buy an umbrella from a sidewalk vendor when it starts to rain, it's a good thing and it's serviceable, but you don't expect it to last through the ages.


Of course, with software, it's much more confusing than it is with umbrellas.  The software industry is not particularly transparent, so it's really difficult, sometimes, to figure out the truth of the various claims.  Almost always, you have to dig down pretty deep, and when you get "down in the weeds," as my customers have sometimes accused me of being, you may end up being right, but you may also fail to be persuasive.


Which brings me to the current controversy. To me, it has a familiar ring: SAP releases HANA, an in-memory database appliance. Now Oracle has released Exalytics, an in-memory database appliance. And I'm getting phone calls.

HANA: the Features that Matter

I'm going to try to walk you through the differences here, while avoiding getting down in the weeds. This is going to involve some analogies, as you'll see.  If you find these unpersuasive, feel free to contact me.


To do this, I'm going to have to step back from phrases like "in-memory" and "analytics" – because both SAP and Oracle are now using this language – and look instead at the underlying problem that "in-memory" and "analytics" are trying to solve.


This problem is really a pair of problems.  Problem 1: The traditional row-oriented database is great at getting data in, not so good at getting data out.  Problem 2: The various "analytics databases" that were designed to solve Problem 1--including, but not limited to, the column-oriented database that SAP uses--are great at getting data out, not so good at getting data in.


What you'd really like is a column-oriented (analytics) database that is good at getting data in, or else a row-oriented database that is good at getting data out.
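To make the trade-off concrete, here is a toy sketch in Python (my own illustration, not anything like HANA's actual implementation): the same table held in a row-oriented and a column-oriented layout, showing why each layout favors a different kind of work.

```python
# Row layout: one entry per record -- appending a transaction is one step.
row_store = [
    {"customer": "A", "amount": 100},
    {"customer": "B", "amount": 250},
]
row_store.append({"customer": "C", "amount": 75})   # cheap write

# Column layout: one list per field -- an analytic scan touches only the
# column it needs, but a write must update every column list.
col_store = {"customer": ["A", "B"], "amount": [100, 250]}
for field, value in {"customer": "C", "amount": 75}.items():
    col_store[field].append(value)                  # write touches every column

total = sum(col_store["amount"])   # analytic read: one contiguous list, no row overhead
print(total)
```

Getting data in is one operation against the row store but several against the column store; getting data out (the sum) is one tight scan against the column store but would have to pick a field out of every record in the row store.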


HANA addresses this problem in a really interesting way.  SAP made a database where you get to choose how to treat the data, as either row-oriented or column-oriented.  (If you want, imagine there's a software switch that you can throw.)  So, if you want to do something that requires read-optimization, that is, the very fast and flexible analytic reporting that column-oriented databases are designed to do, you throw the switch and in effect tell HANA, "I want to run reports."  And if you want to do something that requires write-optimization, like entering the transactions that row-oriented databases are designed to do, you throw the switch and tell HANA, "I'm entering a transaction."


Underneath, the data is the same; what this imaginary switch throws is your mode of access to it.
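Since I've been accused of hand-waving about this switch, here is a loose sketch of the metaphor (a hypothetical API of my own invention, not SAP's): one copy of the data, with the caller choosing a row-shaped or a column-shaped view depending on the task.

```python
class TransAnalyticTable:
    """One underlying store; two access modes over the same data."""
    def __init__(self, fields):
        self.fields = fields
        self.columns = {f: [] for f in fields}   # the single copy of the data

    def insert(self, record):                    # "I'm entering a transaction"
        for f in self.fields:
            self.columns[f].append(record[f])

    def rows(self):                              # row-shaped view of the same data
        return [dict(zip(self.fields, vals))
                for vals in zip(*self.columns.values())]

    def column(self, field):                     # column-shaped view for reporting
        return self.columns[field]

t = TransAnalyticTable(["region", "sales"])
t.insert({"region": "EMEA", "sales": 10})
t.insert({"region": "APJ", "sales": 30})
print(sum(t.column("sales")))   # report without moving the data anywhere
```

The point of the toy is only this: `rows()` and `column()` are views, not copies, so "throwing the switch" changes how you look at the data, not where the data lives.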


In explaining this to me, my old analyst colleague, Adam Thier, now an executive at SAP, said, "In effect, it's a trans-analytic database."  (This is, I'm sure, not official SAP speak.  But it works for me.)  How do they make the database "trans-analytic?"  Well, this is where you get down into the weeds pretty quickly.  Effectively, they use the in-memory capabilities to do the caching and reindexing much more quickly than would have been possible before memory prices fell.


[Hand-Waving Alert:  After Hasso read an earlier version of this blog, he stopped me and said, "David, you know this row/column 'switch' idea is wrong. There's no magic wand that you wave. It's more complicated than that." As you can see from the comments on the blog post, others objected to the "switch" word as well.  So yes, let me acknowledge it.  I'm doing some hand-waving here.  So, for those of you interested in a more detailed explanation, check out my next blog post, "Row and Column."]


There's one other big problem that the in-memory processing solves.  In traditional SQL databases, the only kind of operation you can perform is a SQL operation, which is basically going to be manipulation of rows and fields in rows.  The problem with this is that sometimes, you'd like to perform statistical functions on the data: do a regression analysis, etc., etc. But in a traditional database, statistical analysis (or sometimes even simple numerical calculations) can be complicated and difficult.


In HANA, though, business functions (what non-marketers call statistical analysis routines) are built into the database.  So if you want to do a forecast, you can just run the appropriate statistical function.  It's less cumbersome than a pure SQL database.  And it's very, very fast;  I have personally seen performance improvements of three orders of magnitude.
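To show what an in-database business function buys you, here is a hedged sketch: instead of pulling rows out over SQL and forecasting in a client tool, the statistical routine runs where the data sits. The little least-squares trend below is my stand-in for HANA's built-in functions, not their actual API.

```python
def linear_forecast(series, steps_ahead=1):
    """Ordinary least-squares trend line, extrapolated forward."""
    n = len(series)
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(series) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + steps_ahead)

monthly_sales = [100, 110, 120, 130]    # a column already sitting in memory
print(linear_forecast(monthly_sales))   # next period's trend value: 140.0
```

In a pure SQL database, even this small calculation would mean either contorted SQL or a round trip to an external statistics tool.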



Exalytics:  the Features that Matter


Now when I point out that HANA is both row-oriented (for transactions) and column-oriented (so that it can be a good analytics database) and then I point out that it has business functions built-in, I am not yet making any claim about the relative merits of HANA and Exalytics.


Why?  Well, it turns out that with Exalytics, too, you can enter data into a transaction-oriented database and you can do reporting on the data in an analytics database.  And in Exalytics, too, you have a business function library.


But the way it's done is different.


In Exalytics, the transactional capabilities come from an in-memory database (the old TimesTen product that Oracle bought in 2005).  The analytics capabilities come from Essbase (which came with Oracle's 2007 acquisition of Hyperion), and the business function library is an implementation of the open-source R statistical programming language.


So, Oracle would argue, it has the same features that matter.  But, Oracle would also argue, it also has an edge, something that makes it clearly better.  If you get Oracle's Exalytics, you're getting databases and function libraries that are tested, tried, and true.  TimesTen has been at the heart of Salesforce.com since its inception.  Essbase is at the heart of Hyperion, which is used by much of the Global 2000.  And R is used at every university in the country.


Confused?  Well, you should be. That's when you call the analyst.



HANA vs. Exalytics

So what is the difference between the two, and does it matter?  If you are a really serious database dweeb, you'll catch it right away: 


In HANA, all the data is stored in one place. In Exalytics, the data is stored in different places.


So, in HANA, if you want to report on data, you throw that (imaginary) switch.  In Exalytics, you extract the data from the TimesTen database, transform it, and load it into the Essbase database.  In HANA, if you want to run a statistical program and store the results, you run the program and store the results.  In Exalytics, you extract the data from, say, TimesTen, push it into an area where R can operate on it, run the program, then push the data back into TimesTen.
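Here is a schematic of the extra hops that flow implies (purely illustrative; none of this is Oracle's actual tooling, and the data is invented):

```python
# The "transactional engine" side: records as the transaction system wrote them.
transactions = [{"id": 1, "amt": "100"}, {"id": 2, "amt": "250"}]

# Hop 1: extract from the transactional store.
extracted = list(transactions)

# Hop 2: transform -- engines differ in types and encodings, so data must
# be reshaped before the analytics engine will accept it.
transformed = [{"id": r["id"], "amt": float(r["amt"])} for r in extracted]

# Hop 3: load into the analytics engine's own structures.
analytics_cube = {"amt": [r["amt"] for r in transformed]}

print(sum(analytics_cube["amt"]))   # correct -- but only after three hops
```

The single-store design skips all three hops: the report reads the same structure the transaction wrote, so there is no second copy to transform, time, or reconcile.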


So why is that a big deal? Again, if you're a database dweeb, you just kind of get it.  (In doing research for this article, I asked one of those dweeb types why it was such a big deal, and I got your basic shrug-and-roll-of-eye.) 


I can see it, I guess.  Moving data takes time.  Since the databases involved are not perfectly compatible, one needs to transform the data as well as move it. (Essbase, notoriously, doesn't handle special characters, or at least didn't use to.) Because it's different data in each database, one has to manage the timing, and one has to manage the versions.  When you're moving really massive amounts of data around (multi-terabytes), you have to worry about space.  (The 1TB Exalytics machine only has 300 GB of actual memory space, I believe.)


One thing you can say for Oracle.  They understand these objections, and in their marketing literature, they do what they can to deprecate them. "Exalytics," Oracle says, "has Infiniband pipes" that presumably make data flow quickly between the databases, and "unified management tools" that presumably allow you to keep track of the data. Yes, there may be some issues related to having to move the data around. But Oracle tries to focus you on the "tried and true" argument. So what, it essentially says, if you have to move the data between containers, when each of the containers is so good, so proven, and has so much infrastructure already there, ready to go?


As long as the multiple databases are in one box, it's OK, they're arguing, especially when our (Oracle's) tools are better and more reliable. 


Still confused?  Not if you're a database dweeb, obviously.  Otherwise, I can see that you might be.  And I can even imagine that you're a little irritated. "Here this article has been going on for several hundred lines," I can hear you saying, "and you still haven't explained the differences in a way that's easy to understand." 


[Update alert. In the comments below, an Exalytics expert says that I've mischaracterized the way customers would actually use Exalytics.  If he's right (and I'm sure he is), the two products are not in fact as comparable as Oracle marketing would seem to have it, and you should read the rest of this simply as a description of what HANA is all about, just taking it for granted that HANA is sui generis.]



HANA:  the Design Idea

So how can you think of HANA vs. Exalytics in a way that makes the difference between all-in-one-place and all-in-one-box-with-Infiniband-pipes-connecting-stuff completely clear? It seems to me that the right way is to look at the design idea that's operating in each. 


Here, I think, there is a very clear difference.  In TimesTen or Essbase or other traditional databases, the design idea is roughly as follows: if you want to process data, move it inside engines designed for that kind of processing. Yes, there's a cost. You might have to do some processing to get the data in, and it takes some time.  But those costs are minor, because once you get it into the container, you get a whole lot of processing capability that you just couldn't get otherwise.


This is a very normal, common design idea.  You saw much the same idea operating in the power tools I used one summer about forty years ago, when I was helping out a carpenter.  His tools were big and expensive and powerful--drill presses and table saws and the like--and they were all the sort of thing where you brought the work to the tool. So if you were building, say, a kitchen, you'd do measuring at the site, then go back to the shop and make what you needed.


In HANA, there's a different design idea:  Don't move the data.  Do the work where the data is.  In a sense, it's very much the same idea that now operates in modern carpentry.  Today, the son of the guy I worked for drives up in a truck, unloads a portable table saw and a battery-powered drill, and does everything on site, and it's all easier, more convenient, more flexible, and more reliable.


So why is bringing the tools to the site so much better in the case of data processing (as well as carpentry)? Well, you get more flexibility in what you do, and you get to do it a lot faster.


To show you what I mean, let me give you an example.  I'll start with a demo I saw a couple of years ago of a relatively light-weight in-memory BI tool.


The salesperson/demo guy was pretty dweeby, and he traveled a lot.  So he had downloaded all the wait times at every security gate in every airport in America from the TSA web site.  In the demo, he'd say, "Let's say you're in a cab.  You can fire up the database and pull up a graph of the wait-times at each security checkpoint.  So now you can tell which gate to stop at."


The idea was great, and so were the visualization tools.   But at the end of the day, there were definite limitations to what he was doing.  Because the system was basically just drawing data out of the database using SQL, all you were getting were lists of wait times, which were a little difficult to deal with.  What you really wanted was the probability that a delay would occur at each of the gates, based on time of day and a couple of other things.  But you sure weren't getting that in the cab.


Perhaps even worse, he wasn't really working with real-time data.  For this purpose, by far the most important data is the most recent data, but he didn't have that; he couldn't really handle an RSS feed. 


Now, consider what HANA's far more extensive capabilities do for that example.  First of all, in HANA, data can be imported pretty much continuously.  So if he had an RSS feed going, he could be sure the database was up-to-date.  Second, in HANA, he could use the business functions to do some statistical analysis of the gate delay times. So instead of columns of times, he could get a single, simple output containing the probability of a delay at each checkpoint.  He can do everything he might want to do in one place.  And this gives him better and more reliable information.
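A sketch of the upgraded demo, with invented data and an invented threshold: per-checkpoint probability of a long wait, computed over the freshest observations instead of raw lists of times.

```python
from collections import defaultdict

# (checkpoint, wait_minutes) observations -- in the HANA version of the
# demo, the newest rows from the feed would already be in this set.
observations = [
    ("A", 5), ("A", 25), ("A", 30),
    ("B", 4), ("B", 6), ("B", 8),
]

THRESHOLD = 20              # minutes that count as "a delay" (my choice)

waits = defaultdict(list)
for gate, minutes in observations:
    waits[gate].append(minutes)

for gate, times in sorted(waits.items()):
    p_delay = sum(t > THRESHOLD for t in times) / len(times)
    print(f"{gate}: probability of delay {p_delay:.2f}")
```

Instead of columns of raw times, the traveler in the cab gets one number per checkpoint, computed where the data already is.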



So What Makes It Better?

Bear with me.  The core difference between HANA and Exalytics is that in HANA, all the data is in one place.  Is that a material difference?  Well, to some people it will be; to some people, it won't be.  As an analyst, I get to hold off and say, "We'll see." 


Thus far, though, it appears that it is material.  Here's why.


When I see a new design idea--and I think it's safe to say that HANA embodies one of those--I like to apply two tests.  Is it simplifying?  And is it fruitful? 


Back when I was teaching, I used to illustrate this test with the following story:


A hundred years ago or so, cars didn't have batteries or electrical systems.  Each of the things now done by the electrical system was thought of as an entirely separate function that was performed in an entirely different way.  To start the car, you used a hand crank.  To illuminate the road in front of the car, you used oil lanterns mounted where the car lights are now.


Then along came a new design idea: batteries and wires.  This idea passed both tests with flying colors. It was simplifying.  You could do lots of different things (starting the car, lighting up the road) with the same apparatus, in an easier and more straightforward way (starting the car or operating the lights from the dashboard).  But it was also fruitful.  Once you had electricity, you could do entirely new things with that same idea, like power a heater motor or operate automatic door locks.


So what about HANA? Simplifying and fruitful?  Well, let's try to compare it with Exalytics. Simplifying?  Admittedly, it's a little mind-bending to be thinking about both rows and columns at the same time.  But when you think about how much simpler it is conceptually to have all the data in one database and think about the complications involved when you have to move data to a new area in order to do other operations on it, it certainly seems simplifying.


And fruitful? 


Believe it or not, it took me a while to figure this one out, but Exalytics really helped me along.  The "Aha!" came when I started comparing the business function library to the "Advanced Visualization" that Oracle was providing.  When it came to statistics, they were pretty much one-to-one; the HANA developers very self-consciously tried to incorporate the in-database equivalents of the standard statistical functions, and Oracle very self-consciously gave you access to the R function library.


But the business function library also does…ta da…business functions, things like depreciation or a year-on-year calculation.  Advanced Visualization doesn't.  


This is important not because HANA's business function library has more features than R, but because HANA is using the same design idea (the Business Function Library) to enrich various kinds of database capabilities.  On the analytics side, they're using the statistical functions to enrich analytics capabilities.  On the transaction side, they're using the depreciation calculations to enrich the transaction capabilities.  For either, they're using the same basic enrichment mechanism. 


And that's what Oracle would find hard to match, I think. Sure, they can write depreciation calculation functionality; they've been doing that for years.  But to have that work seamlessly with the TimesTen database, my guess is that they'd have to create a new data storage area in Exalytics, with new pipes and changes in the management tools.



Will HANA Have Legs?

So what happens when you have two competing design ideas and one is simpler and more fruitful than the other? 


Let me return to my automobile analogy. 


Put yourself back a hundred years or so and imagine that some automobile manufacturer or other, caught short by a car with a new electrical system, decides to come to market ASAP with a beautiful hand-made car that does everything that new battery car does, only with proven technology.  It has crisp, brass oil lanterns, mahogany cranks, and a picture of a smiling chauffeur standing next to the car in the magazine ad.


The subtext of the ad is roughly as follows: "Why would you want a whole new system, with lots and lots of brand-new failure points, when we have everything they have?  Look, they've got light; we've got light, but ours is reliable and proven. They've got a starter; we've got a starter, but ours is beautiful, reliable, and proven, one that any chauffeur can operate."


I can see that people might well believe them, at least for a while.  But at some point, everybody figures out that the guys with the electrical system have the right design idea.  Maybe it happens when the next version comes out with a heater motor and an interior light.  Maybe it happens when you realize that the chauffeur has gone the way of the farrier. But whenever it happens, you realize that the oil lantern and the crank will eventually fall by the wayside.


About the Author

I run a small analyst firm in Cambridge, Massachusetts that does strategy consulting in most areas of enterprise applications.  I am not a database expert, but for the past year, I have been doing a lot of work with SAP related to HANA, so I'm reasonably familiar with it.  I don't work with Oracle, but I know a fair amount about both the TimesTen database and the Essbase database, because I covered both Salesforce (which uses TimesTen) and Hyperion (Essbase) for many years.


SAP is a customer, but they did not commission this piece, and they did not edit or offer to edit it in any way.

Are you coming to SAPPHIRE NOW, and is HANA one of the solutions that you want to understand better? Then you need to visit the “Building the Business Case for SAP HANA” table. Join me, David Porter, Director, Value Engineering, and Steve Thibodeau, Value Architect, as we manage the table, focused on articulating the value of HANA and analytics. The biggest challenge may be to return from SAPPHIRE NOW and figure out how to gain organizational support to invest in such a “game-changing” solution.


If you need help:

  • Securing sponsorship and funding for your HANA investments
  • Understanding the business benefits that you can derive from SAP HANA
  • Identifying the business scenarios and generating organizational buy-in
  • Articulating the value of analytics and HANA

Join us at SAPPHIRE NOW in Orlando at the discussion table titled “Building the Business Case for SAP HANA” within the Database and Technology Campus. You will have the opportunity to meet with Value Engineering (VE) experts and gain practical information to jump-start your HANA journey, such as:

  • Value Engineering overview and engagement process – A program that ensures Business and IT are aligned to generate value from IT investments
  • D&T Benchmarking Micro-Survey – High-level insights into your data warehouse and in-memory strategy
  • HANA overview and value proposition – Insights into the value of HANA and how to engage with Value Engineering
  • VE business case approach using Value Lifecycle Manager – Robust tool to allow customers to build detailed business cases leveraging the power of a global database of metrics
  • HANA Value Calculator – A great tool to help you quickly and easily quantify the value of SAP HANA, with example use cases that you can implement today
  • HANA use case repository – Explore 50+ use cases in 12+ industries to learn how and where you can utilize SAP HANA for your industry or line of business


I am looking forward to an interesting discussion and hope to see you there!


The SAP HANA Effect

Posted by Vishal Sikka May 3, 2012


At a recent public webinar, a competitor showed their lack of understanding of SAP HANA, delivering a limited and inaccurate picture of it to industry and financial analysts.  At SAP, we normally don’t respond to this type of thing.  But seeing this drama unfold got me thinking: we must be doing something right!  I mean, could a leading and distinguished builder of databases really be so fundamentally unaware of next-generation database technology?  Or are they simply unwilling to acknowledge the reality, and therefore resorting to sharing misinformation?

The HANA team is a group of people with a passion for 3 things in common: making HANA customers successful, having fun, and questioning the status quo. In this spirit, on behalf of the SAP HANA team, I'd like to set the record straight on some inaccurate info from our competition, and have some fun while doing so.

The SAP HANA Advantage

I want to start by saying (a) the comparisons to yesterday’s technological approaches miss the point completely, (b) HANA customers are enjoying the phenomenal benefits of modern technology without disruption, and (c) HANA represents the next generation in enterprise computing, especially in database technology.  It is a modern data platform for real-time analytics and applications. It enables organizations to analyze business operations based on large volumes and varieties of detailed data in real time, as it happens, eliminating the latency and layers between OLTP and OLAP systems for “real” real-time. The HANA advantage is a tightly integrated system with different components that are fully transactional and well integrated into the system optimizer. Scale-up and scale-out work seamlessly for all components like OLTP, OLAP (operational as well as warehouse operations), text, planning and pure application development. It allows easy deployment with no server zoo, no internal replication, no materialized aggregates and no stack of engines!

In my analysis below, I will try to be precise in communicating all this, and I will update this blog regularly to continuously provide the truth on SAP HANA.  Here are some of the corrections.



#1 The Future Design Center for Databases

Wrong: “In-Memory DBMS will not replace many or all relational DBMS”.

Right: In-Memory DBMS is the future design center for databases. It is already replacing large parts of the database market – especially in analytics, planning, simulations and real-time applications (e.g. in gaming markets). It is based on sound research in academia and designed to support both OLAP and OLTP.  It’s already prevalent in markets where performance is key and will transform the enterprise markets in the same way, for cloud as well as transactional apps.



#2. Role of the Database Platform

Wrong: In-memory databases can only do a few things, such as MOLAP, operational reporting, query & analysis, planning & budgeting, and unstructured information discovery.

Right: SAP HANA is a general-purpose in-memory database platform – bringing in fresh data, capturing transactions with full ACID compliance, analyzing them as they happen, doing in-database processing, pushing business, predictive and planning logic down into the database, and even serving clients such as analytics, cloud and mobile applications. It has mainstream application far beyond the niche use cases being described for an in-memory database. You don’t need to add multiple technologies and duplicate engines into one box for different uses (e.g. Endeca for text, Essbase for planning, TimesTen for caching and analytics).



#3. Scale Out with growing Data Volumes & NUMA support

Wrong: SAP HANA has limited support for data marts and data warehouses and can’t scale-out.

Right: SAP HANA can scale out to an unlimited number of cores/nodes, and hardware prices continue to fall.  At our SAPPHIRE NOW conference in May last year, Hasso already showed 32 nodes / 1000+ cores running SAP HANA (at 13:45 into his speech).  Incidentally, we have 3 such 1000+ core systems now running around the world.  We have live HANA customers on multi-node scale-out systems, and several partners, including IBM and HP, are already making scale-out appliances.

Also, our recent 100TB benchmark runs on 16 nodes of IBM’s X5 servers, each with ½ TB of main memory, and processes 100TB of BW data in 300ms-500ms for operational reporting scenarios and 800ms-2s for ad-hoc analytical queries.  In addition, data can be swapped out with standard HANA mechanisms such as aging criteria. NUMA architecture is supported in HANA.  By contrast, public documentation highlights a 1 TB limit on Oracle Exalytics, and they have publicly said a significant portion of this is used as working memory for all the products they have put together (Essbase + Endeca + TimesTen).




#4: Write Performance

Wrong: There is “limited write performance” with SAP HANA.

Right: SAP HANA is a single foundation for OLTP+OLAP on one hardware and one operating system; it scales up and out (from a mini to 1000+ cores across multiple nodes), and it dynamically adjusts to workloads. We are the only in-memory database with inserts on a columnar store that delivers high write performance AND high analytical performance. This is a key differentiator for SAP HANA.

#5: Stores – Row, Column, and Text

Wrong:  SAP HANA has “no unstructured data support” and does not provide “row and column compression”.

Right: SAP HANA has row, column and text stores in one database, and it natively supports unstructured data.  Furthermore, these stores are integrated, which simplifies transactional and analytic operations across all of them. In fact, SAP HANA’s foundation was in unstructured search. It handles standard search and text mining, as well as text-like search on structured data. With Inxight technology, linguistic features like tagging, feature extraction, entity extraction and sentiment analysis will also be included in SAP HANA. Inxight is the best text analysis software in the market.

SAP HANA supports heavy compression in the column store. Heavy compression is not required for the row store, because it is used as a buffer for the column store and for compression-irrelevant tables only. The benefit of SAP HANA is the intelligent integration with the application stack, which makes row store compression irrelevant.
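Why do column stores compress so well? All the values of one field sit together, so dictionary and run-length encoding bite hard. A toy illustration (my own sketch of the general techniques, not HANA's actual codecs):

```python
def dictionary_encode(column):
    """Replace repeated values with small integer codes plus a dictionary."""
    dictionary = sorted(set(column))
    code = {v: i for i, v in enumerate(dictionary)}
    return dictionary, [code[v] for v in column]

def run_length(codes):
    """Collapse runs of identical codes into [code, count] pairs."""
    runs = []
    for c in codes:
        if runs and runs[-1][0] == c:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([c, 1])     # start a new run
    return runs

country = ["DE", "DE", "DE", "US", "US", "DE"]
dictionary, codes = dictionary_encode(country)
print(dictionary)          # two distinct values instead of six strings
print(run_length(codes))   # three runs instead of six codes
```

In a row store, the "DE"s would be interleaved with every other field of every record, so neither trick gets much traction; in a column store they are adjacent, and the compression is dramatic.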



#6. Data Caching & Query Optimization

Wrong: Both SAP HANA and TimesTen do data caching.

Right: Previous-generation databases use caching to improve performance.  HANA is a pure in-memory DB based on a new architectural paradigm. Since the entire database is in memory, you don’t cache data in HANA. SAP HANA has a world-class query optimizer that natively enables massively parallel query execution, including inter- and intra-operator parallelism.



#7: ACID Properties and Transactional Integrity

Wrong: SAP HANA does not have “transactional integrity/correctness and lacks multi-version concurrency (MVCC)".

Right: SAP HANA is fully ACID compliant; we use permanent storage systems for backup and persistence. It fully supports MVCC, with regular capabilities like statement-level and snapshot isolation.



#8: Aggregates and Materialized Views

Wrong: You need materialized views of aggregated data for high performance

Right: Another Ha! Electric cars don’t need spark plugs. On-the-fly aggregates on detailed data held in memory are much higher in performance. Aggregates are outdated technology now, as they require a lot of effort to create, store redundantly and manage through changes. SAP HANA does not need indices for performance like traditional databases; the whole in-memory database, across all the dimensions of the data set, itself acts like an index.
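The contrast, reduced to a few lines (an illustrative sketch, not either product's mechanism): a materialized aggregate is redundant state that must be patched on every write, while an in-memory scan just computes the total when asked.

```python
sales = [100, 250, 75]

# Materialized-aggregate style: a redundant stored total, maintained on write.
materialized_total = sum(sales)
sales.append(40)
materialized_total += 40        # forget this step and every report is wrong

# On-the-fly style: no redundant state to manage; scan when the query runs.
on_the_fly_total = sum(sales)

print(on_the_fly_total, materialized_total)
```

Both answers agree here, but only because the materialized copy was remembered and patched; the on-the-fly version has nothing to forget, which is the whole argument.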



#9: Business Intelligence Clients

Wrong: SAP HANA provides limited support for only a few BI clients.

Right: SAP BusinessObjects is optimized to run on SAP HANA. In addition, numerous 3rd-party clients are possible today (e.g. Tableau, Tibco Spotfire), and we will continue being completely open to 3rd-party BI clients on SAP HANA.



#10: Planning Applications and Analytical Functions

Wrong: SAP HANA has limited support for planning and budgeting applications.

Right: SAP HANA provides complete support for planning applications; many SAP Enterprise Performance Management applications run on SAP HANA. SAP HANA has native planning support inside the DB with the planning engine. Operators like disaggregation, copy and others are part of the relational algebra inside SAP HANA. Additionally, we support the SAP planning language FOX natively inside the DB.

Planning is a huge argument for SAP HANA -- not the other way around. SAP HANA does not need the standard cube operations, because we calculate on the fly. SAP HANA includes major analytic & business functions such as math functions, currency conversion, unit conversion, exceptional aggregation, time series analysis, hierarchy handling and predictive functions in its library, and has extended support to other libraries. With SAP BW on HANA, we don’t have layers; we push down planning calculations to HANA DB delivering high performance. SAP HANA will support all transactional apps, business warehouse, BI and all cloud apps of SAP. We also have third-party partners developing planning apps built natively on SAP HANA.



#11. Operational Reporting & Data Sources

Wrong: SAP has limited operational reporting capabilities due to “limited Data Sources” with its replication and ETL technologies.

Right: SAP has an extremely well-suited solution for real-time operational reporting from multiple sources (e.g. the SAP CO-PA Accelerator); many of them are non-SAP application data sources. SAP Data Services and SAP Sybase Replication Server are market-leading ETL and replication technologies for bringing in data from non-SAP and SAP data sources. HANA has an extremely high insert rate for bulk inserts due to massive parallelism. It supports all data sources and has been tested at 2TB+/hour data movement into SAP HANA.



#12. Pricing

Wrong: “SAP HANA is 5-50X more expensive than Exalytics.” 1 TB HANA H/W would cost $362K, and SAP HANA software would cost $3.75 M.

Right: For 1 TB we expect H/W to cost $40-$60K (not $362K), and software to be dramatically less expensive than touted here as well. Also, SAP HANA is available at price points for different market segments.

It ranges from HANA Edge for small business at appropriate prices (e.g. $12K for single-node H/W + $2K for HANA for SAP B1 Analytics on HANA) to very large scenarios of greater than 100TB of memory.  Customers can also buy SAP HANA for an app (BW, BPC, CO-PA, Smart Meter Analytics, etc.) or for data marts and data warehousing.  BW on HANA for data warehouse sizes of 40 TB is very competitive.  Also, HANA pricing is inclusive of everything customers need – test, development and QA environments and support. There is no need to buy other software for data loading and movement, storage acceleration (e.g. Exadata), etc. Considering all this, for a 512 GB usage configuration, SAP HANA's total cost is less than 50% of the cost of competitor products.



#13: Standards and Openness

Wrong: “HANA only works with SAP tools, and has limited or non-standard SQL”.

Right: A significant percentage of customers use HANA for non-SAP data and use cases – it is for both SAP and non-SAP application use cases. It works with standard SQL and MDX and has standard interfaces for any application. It’s open across every layer:

  • Open choice of H/W vendor, bringing new chip-level innovations to market ahead of the competition
  • Open choice of BI clients
  • Open to all applications and platforms
  • Hundreds of custom (non-SAP) applications under development on SAP HANA

For example, Oracle Apps and Oracle BI run without any changes on SAP HANA. Existing Oracle stored procedures are translated into SQLScript (for IP reasons), which shows the complete openness of SAP HANA.



#14: SAP HANA on disk

Wrong: SAP HANA does not support data stored on disk.

Right: SAP HANA supports data stored on disk through prioritization techniques such as Least Recently Used (LRU). SAP HANA can keep relevant data in-memory, and data from disk can be loaded on request.
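To make the Least Recently Used idea concrete, here is a minimal LRU sketch (an illustrative policy only, not HANA's actual unload mechanism): keep the hot working set in memory and evict the coldest table when space runs out.

```python
from collections import OrderedDict

class MemoryManager:
    """Toy LRU manager: a bounded set of in-memory tables, disk behind it."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.resident = OrderedDict()            # table name -> data, oldest first

    def access(self, name, load_from_disk):
        if name in self.resident:
            self.resident.move_to_end(name)      # mark as recently used
        else:
            if len(self.resident) >= self.capacity:
                evicted, _ = self.resident.popitem(last=False)  # unload coldest
                print(f"unloaded {evicted}")
            self.resident[name] = load_from_disk(name)          # load on request
        return self.resident[name]

mm = MemoryManager(capacity=2)
mm.access("orders", lambda n: "...")
mm.access("invoices", lambda n: "...")
mm.access("orders", lambda n: "...")     # touching orders keeps it hot
mm.access("archive", lambda n: "...")    # evicts invoices, the least recently used
```

The relevant data ("orders") stays in memory precisely because it keeps being used; the cold table is the one that goes to disk.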



#15: Query Speed

Wrong: SAP HANA does not execute queries faster than other databases.

Right: SAP HANA keeps all data in the column store in integer format and is optimized to take advantage of the latest Intel innovations, such as CPU developments in vector operations.  SAP HANA’s next-generation architecture and chip-level innovations make it faster than any of the competing databases in the market. For example, we have 4 customers who have crossed a 100,000X improvement in the speed of a business process with SAP HANA. The leader in the pack is MKI, showing a 408,000X improvement on retail/logistics data analysis.
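The integer-format point, in miniature (an invented example; a real engine would run this as SIMD vector operations over contiguous arrays, not a Python loop): once a column is dictionary-encoded to integers, a filtered scan is a tight comparison over small numbers rather than per-row string handling.

```python
# "region" column, dictionary-encoded: 0 = EMEA, 1 = AMER, 2 = APJ
region_codes = [0, 1, 2, 0, 0, 1]
revenue      = [10, 20, 30, 40, 50, 60]

# One pass over two flat integer columns: filter and aggregate together.
emea_total = sum(r for c, r in zip(region_codes, revenue) if c == 0)
print(emea_total)
```

Comparing a one-byte code against 0 is exactly the kind of operation modern CPUs can vectorize, which is why the encoding matters as much as the memory residency.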



#16: SAP HANA is slower than Exadata & Exalytics

Wrong: “SAP HANA does not run faster than Exadata, let alone Exalytics”.

Right: In one example in a customer’s infrastructure, SAP HANA was 15,000X faster than Oracle Exadata on the credit check and credit limit verification business process, with the same data and query. Compare this real-time performance to multiple redundant boxes for transactional, analytical, in-memory acceleration and text processing, which have inherent latency in their architecture. We see this in several customers and use this one as an example.


The Current Approach in the Market


The New Approach with SAP HANA


#17: Installation and Implementation Experience

Wrong: SAP HANA takes days to install and months/years to implement.

Right: SAP HANA installs in a data center within minutes to an hour. In fact, you will soon be able to provision it from our cloud or our partners’ clouds. Provimi went live with profitability analysis in as little as three weeks.



#18: Time-travel

Wrong: It is hard for databases to show time-based reports without significant overhead.

Right: With SAP HANA you can traverse time in your reports (e.g. compare actual vs. predicted by day), for instance using a slider to move along the time axis; the reports are constructed on the fly, without the need to store separate indices or views.
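The idea of rebuilding a time-based report on the fly, rather than reading a stored view, can be sketched in Python; here `up_to_day` plays the role of the slider position (the function and its data layout are illustrative assumptions, not HANA's API):

```python
from collections import defaultdict

def actual_vs_predicted_by_day(rows, up_to_day):
    """Aggregate actual vs. predicted amounts per day, on the fly.

    `rows` are (day, actual, predicted) tuples; moving the slider
    simply re-runs the aggregation over the raw facts -- nothing
    is precomputed or materialized.
    """
    totals = defaultdict(lambda: [0.0, 0.0])
    for day, actual, predicted in rows:
        if day <= up_to_day:
            totals[day][0] += actual
            totals[day][1] += predicted
    return {day: tuple(v) for day, v in sorted(totals.items())}
```

Dragging the slider from day 2 to day 3 just calls the function again with the new bound; no separate index or view per slider position is needed.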


I have presented the facts and ask that you understand the truth behind SAP HANA. SAP HANA’s performance is disrupting the traditional database market. It is a single foundation for OLTP and OLAP on one hardware platform and one operating system, and it runs on anything from a Mac mini to a 1,000+ core server cluster. It delivers across the attributes we really care about:

  1. Exploding data volumes (yes, it scales as you grow and works with disk-based stores)
  2. Multi-structured data (yes, including text and machine data)
  3. Real-time analysis on fresh data (yes, real, real-time)
  4. High speed of interaction (yes, at the speed of human thought)
  5. No database tuning effort (yes, it’s simpler and cheaper)

These provide orders-of-magnitude improvements in performance. SAP HANA creates immense business and competitive advantage for companies by revolutionizing their customer interactions and their financial and supply chain performance. Customers like Nongfu Spring (a 22,000x improvement over their Oracle DB) are turning Oracle off.


We are also charting new frontiers: in healthcare, for instance, revolutionizing genome analysis; in bringing commerce and real-time banking to hundreds of millions around the world; and in other great challenges of our time. The times call for all of us to go after these new horizons, to think of the world not as an increment of the past but as something amazing that can be created based on what we know to be possible. Life is too short for us to be held back by misinformation.

Taulia eliminates waste in the financial supply chain by removing the friction between buyers and suppliers in payments processing and terms management. We create risk-free ways for all parties in the payment stream to increase returns on cash. By implementing our SAP Financials extension, the Taulia Dynamic Discount Platform, buyers and suppliers can make real-time decisions about early payments, lending, rates and discounts, creating more harmonious relationships and keeping between themselves dollars that would otherwise have gone to the bank.

With SAP HANA, Taulia analyzes millions of buyer-supplier relationships in a split second to find the right financing rate and options for the requesting supplier. We take into account transaction history, payment cycles, disputes, invoice details and many additional factors. Before SAP HANA, it was impossible to make such decisions in real time. Many third-party financing companies had to limit themselves to funding only established suppliers, backed by substantial supporting documentation and contracts. Taulia changes that game, providing funding to a much larger population of suppliers at a substantially lower financing rate. Taulia can offer financing at the moment the supplier requests it, not 48 hours later, when the opportunity might be gone.

At Taulia, SAP HANA improves our decision-making on creditworthiness and the financial risk associated with suppliers and their invoices by predicting future outcomes from large amounts of existing business transactions. By coupling the crunching of millions of customer-supplier relationships with the power and speed of SAP HANA, we can tackle “big data” problems in the financial analytics and prediction space efficiently for the first time. This will be demonstrated for the first time at our booth, 2516, at SAPPHIRE NOW.

Hiding in databases of millions of transactions is historical and transactional data that can be used not only to predict credit scores and creditworthiness, but also to assign risk to suppliers on an individual basis. The implications of defining a risk profile for each particular supplier are huge. Risk analysis based on a supplier’s transaction history, payment history, size, disputes and invoice volume, taking into consideration their current, projected and historical transactions, has never been possible before. With SAP HANA, we can sort through and analyze that data in split seconds.
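As a sketch of the kind of per-supplier scoring such analysis enables (the features and weights here are invented for illustration; Taulia's actual model is proprietary and far richer):

```python
def supplier_risk_score(history, weights=None):
    """Toy risk score in [0, 1] from a supplier's transaction history.

    `history` is a list of dicts with keys `days_late` and `disputed`.
    The two-factor weighting is purely illustrative -- a real model
    would also consider size, volume, projections, and much more.
    """
    weights = weights or {"late": 0.6, "disputed": 0.4}
    if not history:
        return 1.0  # no history: treat as maximum risk
    late_rate = sum(1 for t in history if t["days_late"] > 0) / len(history)
    dispute_rate = sum(1 for t in history if t["disputed"]) / len(history)
    return weights["late"] * late_rate + weights["disputed"] * dispute_rate
```

A supplier with a clean payment record scores 0.0, while late or disputed invoices push the score toward 1.0; in a real system such a score would be recomputed over the full transaction history at the moment financing is requested.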

Taulia's SAP-certified, cloud based solution is built with an understanding of the difficulties traditional financing faces, and is determined to help overcome them.  Taulia’s analytics based financing confronts the difficulties of customer-supplier relationships by connecting a third party, with available cash, who is willing to pay early.  Leveraging the SAP HANA platform, Taulia can sort through huge amounts of data and employ predictive analytics to overcome uncertainty associated with each supplier.  In other words, Taulia can structure the most qualified payment dynamics, in an instant.

"The ability to assess the complex risk of funding a supplier in a split second is just something that has never been possible,” explains Taulia’s Chief Product Officer, Markus Ament.  “This opens an incredible new field of financial applications, such as Taulia's Dynamic Discounting and Early Payment solutions, but those are just the start.”  At Taulia, we can now understand more data, more quickly, and more effectively than ever before.  SAP HANA helps identify actionable behaviors for business outcomes and to leverage relationships and financial transactions between suppliers and customers in a quick and efficient way that has never before been possible.

About Taulia
Taulia optimizes the invoicing process. With Taulia, large buying organizations can reduce their total spend and achieve double-digit returns on their cash positions, while their suppliers get paid earlier at a lower cost of capital than alternative options. Taulia's Dynamic Payment Platform offers e-invoicing, instant vendor portals and a sophisticated dynamic enterprise payment management system.  Taulia's SAP-certified cloud based solution extends SAP financials beyond the enterprise. Customers include Coca-Cola Bottling Co. Consolidated, Pfizer, Pacific Gas & Electric, John Deere and other Fortune 500 companies from various industries. Taulia is headquartered in San Francisco, with offices in New York, London, England and Düsseldorf, Germany. For more information, visit our website.

Taulia is a participant in the SAP Startup Focus program. If you want to learn more about Taulia, please schedule an appointment to meet us at SAPPHIRE NOW and add our session, How PG&E Achieved Dramatic Results with Dynamic Discounting and Streamlined Accounts Payables, to your SAPPHIRE NOW agenda. Look for us at the HANA test drive area in the D&T Campus.


Posted by Pan Kamal May 1, 2012

Combating Insider Threat through Security Convergence Technology


Our critical infrastructure customers continue to remind us that insider threat remains a constant worry. The Department of Homeland Security has now categorized insider threat as an Advanced Persistent Threat (APT). In this post-9/11 world, we are all too aware of security at airports. We are led to believe that all steps are being taken and precautions are in place. The truth is that insider access at airports is a vulnerability that can be easily exploited, since much of the focus at airports is on countering external threats. Terrorists and other perpetrators recognize this major loophole and relentlessly push the limits of airport security.


Responding effectively to insider threats at airports requires predictive risk analytics and cutting-edge security convergence technology. We at AlertEnterprise recognize that effectively addressing insider threat requires analyzing risks across IT security, physical security and operational systems such as SCADA in order to safeguard critical assets. Security convergence has not been very effective in the past: the volume of data and the number of disparate sources, ranging from structured to unstructured data, tend to scare off the uninitiated. AlertEnterprise possesses the secret sauce to bring such divergent sources of data together and make sense of it all. So why include SAP HANA in the mix? Because predictive risk analytics for security used to take too long to process.


Going down the path of predicting malicious events in the hope of preventing them makes little sense if the calculation is still running well after the incident is over. SAP HANA delivers the computing power and the ability to rationalize large data sets from diverse information sources, allowing AlertEnterprise to process information from a myriad of identity databases such as the Transportation Systems Clearinghouse, no-fly lists and HR systems for airports. Fast event detection and even faster response make true prevention of threats a reality.


Insider threats come in many shapes and forms at airports, but the perpetrator is often the same: an intelligent airport employee. Hidden in plain sight, insider threats can do great damage to our critical infrastructure, including our physical, logical and security systems. Insiders have privileged access to airport processes and procedures, access to secured areas, and the inside scoop on an airport’s vulnerabilities.


Airports have continued to spend millions of dollars on greater security measures, including tighter security checkpoints, facial recognition software, full-body scanners, access control systems, intrusion detection systems, alarms, closed-circuit video surveillance and additional security personnel. While these measures provide additional layers of security, they address only external physical threats, offering little protection against threats that arise from within the airport organization. Effective airport security requires a multi-faceted approach that addresses a myriad of threats, both external and internal. It is helpful to explore the facets that comprise the spectrum of true security at airports.



Insider threats are a crucial aspect of security that requires a heightened, innovative approach. While airports have made great strides to secure the ‘front door’ through increased passenger screenings and related efforts, the greatest threat to airports remains unaddressed. The ever-increasing number of incidents at airports, combined with documented studies, reinforces this statement. Recent studies and information obtained by DHS, the FBI and other agencies indicate that insiders are not only used by terrorists to gain access to sensitive information and targets, but also carry out their own chains of devastation against critical airport infrastructure.


AlertEnterprise leverages SAP HANA to deliver the fastest identification of and response to threats, preventing the dangers of blended threats that would otherwise go unnoticed. AlertEnterprise is a participant in the SAP Startup Focus program; look for us at the HANA Test Drive area in the D&T campus. To see our solution in action, visit AlertEnterprise (Booth #117). Also look for us at the Micro Forums in the Startup Forum area, where we can show you why we won the Most Innovative Company Award at the SAP Startup Forum 2012.
