September 2012

The following short video demonstrates how you can use the HANA Modeler to browse through the BW models (In-Memory Optimized Cubes & DSOs) and import them automatically as Analytic and/or Calculation Views. Once these imported models are activated, they can be consumed like any other HANA model, including exploration by SAP BusinessObjects Explorer. This feature is part of Revision 37 of SAP HANA.




Today we start a new series of blog posts about the features and functions in the HANA Modeler. Much has evolved over the past few years, and a lot has been tucked away for you to take advantage of. Interestingly, quite a few of these features are still little known. To make them well known, we are introducing a video tutorial series, where we will share short videos (about five minutes each) that explain these features. After all, a video is worth a million words.

 

I hope you enjoy these quick educational videos, and keep us posted with feedback, comments and requests for additional videos.

 

Stay tuned as we start with "Modeler Unplugged". Episode 1 is on its way...

Teradata has formulated a position that declares HANA to be "hype" and suggests that SAP is acting irrationally, based on a formula claiming that data warehouses grow at a rate of 40% per year while the cost of memory falls at a rate of only 20% per year (they said 30% every 18 months in their post). This is a silly argument; once you work it through, you will laugh out loud.

 

Using the Five Minute Rule, we suggested here that, with no compression, a table that is accessed at least once every 50 minutes should economically be stored in memory. If the data is compressed 2X, it should be in-memory if it is accessed every 100 minutes; at 4X compression, if it is scanned every 200 minutes; and so on. Note that this holds regardless of the size of the table and is based purely on the economics of the hardware.
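To see the scaling concretely, here is a minimal sketch (Python, illustrative only; the 50-minute uncompressed figure is the one cited above, everything else is simple proportional scaling):

```python
# Hedged sketch: how compression stretches the Five Minute Rule
# break-even interval. The 50-minute uncompressed figure comes from
# the post above.

def breakeven_minutes(compression_ratio, uncompressed_breakeven=50.0):
    """Data compressed N:1 takes 1/N of the memory, so it pays to keep
    it in-memory at 1/N of the access frequency."""
    return uncompressed_breakeven * compression_ratio

for ratio in (1, 2, 4, 10):
    print(f"{ratio}X compression: in-memory pays off if accessed "
          f"at least every {breakeven_minutes(ratio):.0f} minutes")
```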

 

Based on price and performance, we suggested here that it does not matter how often the table is accessed: it should be in-memory. The arguments for both are based on architecture and are self-evident; they are architectural statements of fact, not marketing.

 

Other vendors have no legitimate argument against the 5 Minute Rule: it is a Rule.

 

But to be fair, Teradata or others might argue that price and performance is not the right measure. They could suggest that adequate performance on their system is possible for a lower price. This is an odd position for them to take, as price and performance has been their mantra, but it is a reasonable point. You can elect to accept sub-optimal performance for a lesser price. We would, of course, argue that sub-optimal performance carries its own costs: users are less productive, new real-time use cases cannot be built, and so on.

 

But for now let's go with the numbers and suggest a 100TB HANA data warehouse for you, because today, based on the two papers, HANA is economically justified.

 

In a year, according to Teradata, your database grows 40% to 140TB and the cost of memory drops 20%. You then re-evaluate the economics, and the 20% memory drop makes HANA even more competitive, so you stick with HANA.

 

In the following year your data warehouse grows 40% to just about 200TB and the cost of memory drops another 20%, making HANA even more economically attractive. And so on... You didn't expect this, did you?
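For the skeptical, here is a small sketch (Python, using only the growth and price figures cited in this post) of how those two rates compound year over year:

```python
# Compounding Teradata's own numbers: 40% annual data growth against a
# 20% annual drop in memory cost per TB (figures cited in this post).
size_tb, price_index = 100.0, 1.00  # start: 100TB at today's $/TB

for year in (1, 2, 3):
    size_tb *= 1.40        # warehouse grows 40% per year
    price_index *= 0.80    # memory gets 20% cheaper per TB per year
    print(f"Year {year}: {size_tb:6.0f}TB, memory at "
          f"{price_index:.2f}x today's price per TB")
```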

 

Teradata has made the case for HANA for us with their own numbers, but they spun those numbers with a lack of logic that sounds compelling only if you don't work it through. In fact, the economics in support of HANA are truly compelling, and this lack of logic is easily dismissed.

SAP responds to Big Data challenges in the Consumer Products industry with a new HANA-powered application

 

It happens to all of us. We are only one person away from reaching the end of that long checkout lane at the supermarket when, out of nowhere, the person ahead of us pulls out a thick booklet of promotion coupons and starts flipping through them, looking for that 20% off offer on the family-size vanilla ice cream. At this point, one question immediately comes to mind: "is it too late to switch to another checkout lane?" However, if you are like me and work in a marketing capacity, you would also start wondering how much effort and work went into creating these promotions and how effective they really are.

 

For years, SAP has been helping Consumer Products (CP) companies plan and manage their trade promotions. SAP Trade Promotion Planning, part of the SAP Trade Promotion Management solution portfolio, is the centerpiece of CP companies' promotion planning applications, helping them translate promotion strategy into actionable plans, plan trade budgets that enable sales teams to achieve their targets, and leverage promotion data for analysis and future planning.

 

However, the recent explosion in data has made it increasingly difficult for CP companies to plan promotions in real time. Latency, combined with high-level planning on aggregated data, has contributed to slower responses to market changes and to ineffective promotion plans that fail to achieve their return targets. Now, with the new release of SAP Accelerated Trade Promotion Planning powered by SAP HANA, CP companies can address their big data challenges and plan their promotions using more granular data to reach consumers and retailers with customized promotional programs.

 

The new solution replaces the underlying Business Warehouse in SAP Trade Promotion Planning with the in-memory SAP HANA platform, eliminating latency and allowing for more granular planning. Discount, uplift, and volume data can now be manipulated instantly at the day level, allowing sales and marketing departments to create more targeted promotions around specific short events such as national and local holidays. The granular planning capability extends beyond the promotion duration to allow planning at the store and SKU level. This increases the effectiveness of promotions, as they are now tailored to an individual retailer's customer base and brand strategy.

 

Storing data at the granular day level also allows for increased accuracy in trade spend allocation and tracking. For example, before the solution was powered by the SAP HANA platform, trade spend could only be stored at the week level without incurring high latency. This meant that tracking spend across time horizons that didn't align with the duration of the promotion had to be approximated, leading to inaccuracy in measuring the promotion's effectiveness. Now that promotion data, including trade spend, is stored at the day level, tracking can be done accurately across any time horizon.
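A toy illustration of the granularity point (Python, with hypothetical numbers, not actual product data): with week-level storage, spend over a horizon that cuts through a week can only be pro-rated, while day-level storage gives the exact answer.

```python
# Hypothetical numbers, for illustration only: tracking trade spend
# over a horizon that covers just the first 3 days of a week.
weekly_spend = 700.0                              # week-level: one stored value
daily_spend = [50, 80, 120, 150, 150, 100, 50]    # day-level: seven values

approx = weekly_spend * 3 / 7        # week-level forces a pro-rated estimate
exact = sum(daily_spend[:3])         # day-level sums the actual days
print(f"pro-rated estimate: {approx:.0f}, actual day-level spend: {exact:.0f}")
```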

 

The real-time response of the solution allows customers to try out different promotion configurations on the fly to find out which alternative will deliver the best result at the most acceptable cost. Discount and uplift combinations can be entered at the aggregated level and then pushed down instantly to the day level using algorithms based on historic promotion data. Results such as total sales, volume and spend are then instantly calculated, allowing customers to uncover new, more profitable and effective promotions.
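As a rough sketch of that push-down step (Python; the weights and the proportional rule are my own illustration, not SAP's published algorithm), an aggregate figure entered for a week can be spread to days in proportion to historical volume:

```python
# Illustrative top-down disaggregation: spread a week-level uplift to
# days using a historical daily-volume profile. The profile and the
# proportional rule are assumptions for illustration.
historical_volume = [90, 70, 60, 80, 120, 200, 180]   # Mon..Sun baseline
aggregate_uplift = 10_000.0                           # entered at week level

total = sum(historical_volume)
for day, vol in zip("Mon Tue Wed Thu Fri Sat Sun".split(), historical_volume):
    print(f"{day}: {aggregate_uplift * vol / total:8.0f}")
```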

 

If you want to learn more about this new HANA-powered solution, check out the solution overview video here and the self-running demo video here.

I was asked to comment on a blog posted by Teradata that included a number of inaccurate and misleading statements. I won't try to address every point here; I'll hit the first few and leave it to you to question the accuracy of the rest.

 

One important note: it is silly to suggest that in-memory databases have no advantage because they have to persist data. For a write, HANA does what all OLTP databases do to achieve performance: it writes a log record and then commits. It is sillier still because Teradata is a data warehouse database, which generally means "write once, read many". The advantage of in-memory is on the read, when you run a query.
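For readers unfamiliar with that write path, here is a toy sketch (Python; a generic write-ahead log, not HANA's actual implementation) of "write a log record, then commit":

```python
# Toy write-ahead log: the write is durable once the log record is on
# stable storage, so the in-memory update can be acknowledged safely.
import json, os

class ToyStore:
    def __init__(self, log_path="toy.log"):
        self.data = {}
        self.log = open(log_path, "a")

    def put(self, key, value):
        # 1. Write the log record and force it to stable storage...
        self.log.write(json.dumps({"key": key, "value": value}) + "\n")
        self.log.flush()
        os.fsync(self.log.fileno())
        # 2. ...then commit: update the in-memory structure. Even if the
        #    process dies now, the log replays this write on restart.
        self.data[key] = value

store = ToyStore()
store.put("order-42", {"qty": 3})
```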

 

More positively: we agree with the Teradata author that there is a growing gap between processor capabilities and storage capabilities that affects the ability to fully utilize processors. Programs have to wait for disk I/O.

 

But the author suggests that the Teradata shared-nothing MPP approach solves the problem and "saturates" the CPU. This is a mistake. If you run a single query, no matter how complex, you cannot saturate a Teradata cluster, due to the exact issue the author so carefully introduces: the CPU has to wait for I/O. No amount of shared-nothingness solves this problem. MPP does not help.

 

For Teradata, saturation occurs when enough queries are running simultaneously that the operating system can swap queries in and out to find one that is ready to use the CPU while the other queries wait for I/O to complete. ("Ready to use CPU" means the I/O request has completed and the required data is in memory.) In short, the problem rightly raised by Teradata is in no way solved by their architecture or product. HANA solves it by keeping all of the data in-memory all of the time: HANA is always ready to use the CPU. In fact, each query can use 100% of the CPU. The problem described by the Teradata author is solved by HANA.
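A back-of-the-envelope model makes the point (Python; the timings are hypothetical, chosen only to illustrate the ratio):

```python
# Hypothetical per-step costs for a disk-based query: a little CPU work,
# then a long wait for I/O. Utilization tells you how many concurrent
# queries are needed before the CPU is saturated.
cpu_ms, io_wait_ms = 10.0, 90.0

utilization = cpu_ms / (cpu_ms + io_wait_ms)
print(f"one disk-based query keeps the CPU {utilization:.0%} busy")
print(f"queries needed to saturate one CPU: ~{1 / utilization:.0f}")
# In-memory: io_wait is ~0, so a single query can use 100% of the CPU.
```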

 

The implications of this solution are important. The difference between the time required to read data from RAM and the time required to read it from disk is nearly two orders of magnitude, 100X (see here for the numbers); and it gets better. The time to read data from the processor cache (all data moves from RAM to cache as it is processed) versus the time from RAM is another five orders of magnitude, 100,000X. Because of its in-memory design, HANA keeps data flowing into the cache. Rather than go overboard and suggest a 10,000,000X improvement for HANA, let's modestly say that HANA gets some efficiency from the use of cache and that the benefit of the in-memory architecture yields a 1000X advantage over a disk-based system: 100X by avoiding disk reads and 10X from cache efficiencies.

 

This is not marketing hype; it is architecture, "physics" as the Teradata author might say. In general, HANA will out-perform Teradata by 1000X on any single query based on this architectural advantage.

 

We also agree with Teradata that a shared-nothing architecture is the proper approach to scale.

 

HANA is built on the same shared-nothing approach as Teradata. It scales across nodes. We have published benchmarks on 100TB databases (here) and are building a 1PB cluster now.

 

Consider the implications of this: Teradata has not solved the CPU-versus-disk problem they themselves raised; rather, they propagate it from node to node and try to assemble enough inefficient nodes to work around it. HANA scales a series of efficient systems with the CPU-versus-disk problem solved.

 

I'm running long for a blog post, so let me wrap up with a hypothetical that is not about Teradata; it is about the economics of HANA.

Imagine an efficient in-memory DBMS that gets only a 100X performance boost over a comparable disk-based system, based solely on the speed of memory versus disk. Memory must be paid for, so let's imagine that each in-memory node costs 10X what a disk-based node with the same CPU power costs (this is a huge overstatement; the premium is probably closer to 2X, but you'll get the point). Now let's deploy two systems with equal performance. You can see what happens: each in-memory node costs more, but for the same performance you need only 1/100th the number of nodes, so the in-memory system costs 1/10th as much overall. This is the result of the hardware economics described in the Quick Five Minute Rule Update.
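Worked out in code (Python, using only the hypothetical figures above):

```python
# Equal-performance deployments under the hypothetical above:
# in-memory nodes cost 10X as much but run 100X faster.
disk_nodes, disk_node_cost = 100, 1.0   # baseline disk-based cluster
speedup, cost_multiple = 100, 10        # per-node in-memory assumptions

inmem_nodes = disk_nodes / speedup      # nodes needed for equal performance
total_disk = disk_nodes * disk_node_cost
total_inmem = inmem_nodes * disk_node_cost * cost_multiple
print(f"disk-based cluster: {total_disk:.0f} cost units")
print(f"in-memory cluster:  {total_inmem:.0f} cost units (one tenth)")
```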

 

Teradata is a fantastic product. But it is a product that was architected for weak single-core nodes with little memory. The first Teradata systems I deployed ran on 286 processors. There are other signs of this aging architecture here. Teradata's engineering team is fantastic as well. They have found creative ways, for example the VAMP, to extend their 1980s architecture as processor architectures advanced.

 

HANA is new, and it is architected for the processor technology available today, with an eye on the technologies emerging. The result is efficiencies that cannot be easily replicated by engineering creativity. The efficiencies are architectural.

 

There is no doubt that someday Teradata will require a technology refresh - it’s been 30+ years. HANA, on the other hand, is a refreshing new technology.

From September 8-12, over 300 startups converged in San Francisco for an all-new slate of outstanding startups, influential speakers and guests. This year, SAP decided to bring the SAP Startup Focus Program and SAP HANA to Disrupt SF, sharing the possibilities for startups to build their solutions using SAP HANA technology.



 

The most common question we heard from attendees was "What is SAP doing here?" After some myth-busting by Aiaz Kazi, they were surprised to learn about SAP's commitment to mentoring young companies, as well as SAP HANA's breakthrough capabilities, which are easily available to the startup community through a simple, inexpensive and effective development accelerator open to all eligible participants.

 

We kicked off the event by participating in a 24-hour hackathon during which developers built new apps on SAP HANA. 150 teams presented their solutions, and the "SAP HANA Hacker of the Year" award went to Michael Fischer, who developed the Minty Green Energy Machine using energy data available from the government.

 

From Monday to Wednesday, we had a presence on the show floor with plenty of traffic and interesting questions around SAP HANA. The SAP team had some great conversations with many startups, enterprises, and the media.

 

Over 300 startups were evaluated, and on the last day of the event "SAP's chief troublemaker" Aiaz Kazi awarded the 'SAP Big Data Startup of the Year' award to Tweather (www.tweather.com), which analyzes Twitter's stream of content to reveal emerging trends and patterns on any topic. This was an amazing way to close the event for SAP.

Talk about an exciting week! Rishi Diwan (VP of Consumer Products), Kijoon Lee (VP of Innovation Marketing at SAP) and yours truly were invited by the White House to showcase Recalls Plus, a consumer app built on SAP HANA that advances public safety by utilizing available government data in creative and powerful ways.


As part of the Safety Data Initiative of the White House Office of Science and Technology Policy and the US Department of Transportation, “Safety Datapalooza” highlighted innovators from the private, nonprofit and academic sectors who have utilized freely available government data to build products, services and apps that advance public safety in creative and powerful ways.


As we were lining up to enter the White House, I recruited a nice gentleman to take our pictures so we could share glimpses of our day with you. You can imagine my surprise when he got up on stage to present as the COO of Trulia, another app that leverages government data to enrich our lives.

 

It was truly a humbling experience to be up on stage with amazing speakers such as Fire Chief Richard Price of PulsePoint (PulsePoint Foundation), Jan Withers (Mothers Against Drunk Driving) and the Honorable Kathryn Sullivan (the first American woman to walk in space).


We were honored to share some suggestions from the Recalls Plus Community with senior folks at the CPSC, NHTSA, and the Department of Health and Human Services, and look at this opportunity as the beginning of an ongoing dialogue.

 

As if things couldn't get any more exciting, I had the opportunity to sit right next to Todd Park, the US CTO and Assistant to the President, and share comments on each presentation as well as potential opportunities for collaboration.

After the morning discussions, we had tables prepared for us to demo Recalls Plus and answer questions from the audience. One of our many enthusiastic visitors was Ian Kalin, a representative of the Department of Energy and fellow sponsor of the recent TechCrunch hackathon in San Francisco, who, along with SAP, honored Michael Fischer and his "Minty Green Energy Machine" with the hacker prize. (SAP Newsbyte)

If you would like to learn more about Recalls Plus and our activities over the past seven months, check out our video.

New SAP HANA solution available to help Telco, High Tech and Financial Services Industries with their biggest business challenges.

As of September 14th, SAP Customer Usage Analytics, the first solution powered by SAP HANA in the Billing & Revenue Innovation Management space, is generally available.

 

As an innovative real-time analytics solution, SAP Customer Usage Analytics powered by SAP HANA will be highly relevant to "big data" services industries such as telecommunications, high tech and financial services, helping their marketing, sales and service departments address one of their biggest business challenges.

In today's hyper-connected world, the proliferation of data is both a blessing and a curse. Making sense of that volume of data remains a difficult proposition, often leaving both organizations and business users frustrated with getting too little information too late. Yet this proliferation offers huge opportunities to those who can extract insight from customers' activities as rapidly as possible. In services industries such as telco, high tech and banking, harnessing the power of this data will increasingly mean creating competitive differentiation: launching more relevant offers faster, or developing more personalized services to retain customers.

The SAP Customer Usage Analytics powered by SAP HANA solution provides access to fine-grained customer service-consumption data in real time, for a better understanding of customer usage patterns. Instant access to information makes marketing teams more agile in reacting to market changes or competitive threats. Better visibility into the performance of new offers allows you to adjust them quickly when needed. More accurate and up-to-date customer insight helps you personalize sales and service activities to optimize the customer experience. Faster financial insight may lead to an optimal collections strategy and lower days sales outstanding.

With SAP Customer Usage Analytics powered by SAP HANA you can support your Marketing, Sales and Service departments to:

  • Enable agile and innovative marketing strategies by providing fast time to action through instant access to large volumes of customer data
  • Improve the customer experience by leveraging fine-grained customer usage and revenue data
  • Enable more efficient financial and collection activities by providing enhanced visibility into billing-related financial flows

 

To ensure a quicker time to value, you can get SAP Customer Usage Analytics as a rapid-deployment solution, which provides maximum predictability with fixed cost and scope. The quick-time-to-value implementation methodology enables you to go live in about 8 weeks.

    

For further information, please see the Experience SAP HANA Customer Usage Analytics Test Drive and Experience SAP HANA Concept Demo.

 

Many thanks,

Stephan

In 1987, and again in 1997, Jim Gray and his co-authors (Gianfranco Putzolu, and later Goetz Graefe) published famous papers (referenced here) suggesting a Five Minute Rule for managing data in memory. Simply put, the rule rests on the observation that it costs more to wait for data to be fetched from disk than it costs to keep that data in memory, so we can determine how often data can be fetched from disk before it makes economic sense to just keep it in-memory.
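As a reminder of the arithmetic behind the rule, here is the break-even formula from the cited papers, restated with descriptive names:

```latex
\[
\text{BreakEvenInterval (seconds)} =
  \frac{\text{PagesPerMBofRAM}}{\text{AccessesPerSecondPerDisk}}
  \times
  \frac{\text{PricePerDiskDrive}}{\text{PricePerMBofRAM}}
\]
```

When memory gets cheaper the interval grows; when disks get faster it shrinks, which is exactly the movement described below.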


Since the number of CPU cycles per second is increasing and the cost of processors is decreasing, as are the costs of memory and storage, the break-even point changes over time. In 1997 the break-even point was five minutes for a 4KB block, hence the label "five minute rule". In 2009 these numbers were revisited by Goetz Graefe. He found that for a 4KB block of data, the cost of keeping it in memory breaks even with the cost of SATA disk storage when the data is accessed at least once every 90 minutes.


Today, the break-even point is a little lower, at about an hour (see here), as disks are faster and memory, although improving in price/performance, has not dropped as quickly in price.


Let me point out the implications for HANA. If you have a table, no matter how large, that is touched by a query at least once every 55 minutes, it is less expensive, in raw hardware costs, to keep it in memory than to read it from disk. It does not matter if the table is 200TB; according to the Gray/Putzolu formula, if it is frequently accessed it is less expensive to store it in memory.


If the data is compressed, the interval over which it is cost-effective to keep it in-memory grows with the compression ratio. A database with 2X compression should stay in-memory if accessed every 2 hours, with 3X compression every 3 hours, and so on. Since we have seen compression over 10X with HANA, in many cases any data that is accessed daily can be cost-effectively stored in-memory.


While we often talk about the ROI from HANA coming from business benefit... there is also a case to be made for efficiently using IT hardware resources to manage IT costs. The in-memory architecture of HANA provides a most efficient use of hardware, and the famous Five Minute Rule postulated by Gray and Putzolu demonstrates the cost-effectiveness perfectly.


Note: the original technical note on the Five Minute Rule was devised in part by Jim Gray, one of the pioneers of database systems. You can read more here.

In April I started looking at HANA as a competitor; at the time I was the Field CTO in EMEA for the Greenplum division of EMC. I posted a blog here on the topic of HANA vs. Exalytics, and I would like to reiterate the ideas in that post for this audience. So let's consider what Exalytics is and what it isn't, consider what the HANA database is and what it isn't, and consider where there may be overlap and competition. Finally, in the space where the two products might compete, let's consider where one or the other might have an advantage based on architecture... not on marketing. I hope that you will find this piece fair and factual.

 

To define Exalytics I'll use the definition offered by a leading Exalytics proponent and Oracle partner who participated in the early release program: Rittman Mead. They say: "Oracle Exalytics uses a specially-enhanced version of Oracle TimesTen, Oracle's in-memory database, to cache commonly-used aggregates used in dashboards, analyses and other BI objects." To see that I did not take the quote out of context, you can read the entire article here or go to the Rittman Mead Exalytics Test Center here. You can also confirm this yourself by reading the Oracle documentation on Exalytics, although there it is stated in a fuzzier way. Note that as you read the documentation you will see that the base data has to live in another DBMS instance, and it is that instance that needs to derive the "commonly-used aggregates". So, Exalytics is an OLAP engine that stores cubed data in-memory for fast access.

You might wonder why this cache is required, why these basic OLAP queries need an assist. It turns out that Exadata has some issues; more details can be found here in a post by a leading Exadata performance expert. There are two videos at this site: the first describes the problem and the second demonstrates it using an Oracle demonstration as the example. I highly recommend both videos to anyone who has, or is considering, Exadata.

 

HANA is a full-fledged DBMS architected in-memory, based on a column store, with massively parallel threading on a shared-nothing scheme. I know that this sounds like marketing mush, but in these components of architecture lie the differences that are relevant to our comparison:

  • In-memory is what HANA shares with Exalytics. Both systems provide extreme performance by eliminating I/O.
  • Column store provides HANA with the ability to compress data beyond what can be achieved with a row store (see the dictionary-encoding sketch after this list). Exalytics offers a dictionary compression scheme they call hybrid columnar compression, but the name is misleading: Exalytics is not a column-oriented DBMS. Column orientation also provides HANA with the ability to utilize the processors more effectively by keeping data in the internal processor cache. This can provide a 200X performance improvement over reading from main memory (see here), but we'll just say that it gives HANA a boost.
  • HANA is massively parallel. This means that on a single query HANA can get all of the cores working. If these products are running on a 4x10 40-core server, HANA will run 30X-40X faster on a single query than a single-threaded implementation. Plus the boost from above.
  • HANA is a shared-nothing implementation. This means that you can scale out to solve bigger problems. On a fat server with 512GB of main memory, both HANA and Exalytics can store around 256GB of compressed user data (actually Exalytics stores less... see here and here). If you need to add more data to Exalytics, you are skunked. You can split your cube and route queries appropriately, but you cannot join between Exalytics instances, and each query can use only the processing of one server. With HANA, adding data is simple: you just re-partition the data across all of the processors. The change is transparent, and each query will use all of the processing power on both servers. Exalytics is not scalable.
  • HANA is a DBMS. You can do whatever is required: JOINs, stored procedures, in-database analytics, and more. In fact, HANA dynamically derives OLAP structures at query time, negating the need for pre-aggregation or materialization. But to compare apples to apples, you could pre-aggregate data into an OLAP structure in HANA and execute the exact cube queries that Exalytics supports, with the columnar boost, with the efficient CPU utilization, with scalability, and with all of the features of a DBMS.
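To make the column-store compression point concrete, here is a toy dictionary-encoding sketch (Python; a generic illustration of the technique, not HANA's actual implementation):

```python
# Dictionary encoding: a column with few distinct values becomes a small
# dictionary plus fixed-size integer codes, and filters scan the codes.
column = ["DE", "US", "US", "FR", "DE", "US", "DE", "FR"]

dictionary = sorted(set(column))                    # ['DE', 'FR', 'US']
code_of = {v: i for i, v in enumerate(dictionary)}
encoded = [code_of[v] for v in column]              # [0, 2, 2, 1, 0, 2, 0, 1]

# A filter like WHERE country = 'US' becomes a scan over small integers:
us_code = code_of["US"]
matching_rows = [i for i, c in enumerate(encoded) if c == us_code]
print(dictionary, encoded, matching_rows)
```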

 

If there is a place where HANA and Exalytics compete, it is in the processing of OLAP cube queries. Any implication that Exalytics can solve other database problems is misleading. Any suggestion that Exalytics is Oracle's answer to HANA is misguided. HANA is so much more than a facility "to cache commonly-used aggregates used in dashboards". Exalytics should be screaming fast; in-memory is a powerful architecture. But based on the other architectural components, HANA will be faster even in the narrow space where Exalytics, Oracle's OLAP accelerator, plays.

SAP came to TechCrunch Disrupt SF to further its commitment to mentoring young companies and to let them explore the benefits of building their solutions on SAP HANA. It's a great experience to be part of this effort, telling 300+ startups, developers and entrepreneurs all about the SAP HANA in-memory database platform.

 


 

Many people who stopped by our booth looked impressed, enjoyed our goodies and wanted to know more about the different ways in which the SAP HANA platform is suited to running real-time applications and solving their big data challenges. Our presence perplexed many, and eyebrows were raised at seeing the SAP brand represented, in particular when I began talking about our consumer-facing apps such as Recalls Plus.

 

The morning started with Paul from WikiFun.com, a French entrepreneur who has launched a wiki for fun activities, visiting our booth and encountering a completely new image of SAP.


Throughout the day, Sophie Chou, Aslan Noghre-kar and Ron Wessels spoke to more than 70 startups, offering details on the SAP Startup Focus Program. The SAP Startups team discussed how SAP is enabling startups to build their solutions on SAP HANA. In addition, Amit Sinha shared his perspective on SAP at TC Disrupt, with some valuable insights.



We also were discovered by other ‘unexpected folks at a TechCrunch event’, such as Shaun Reid (Digital Realty) and Sanjay Sharma (Director of Strategic Alliances, HP Cloud Services) who have already begun exploring possibilities with HANA and could vouch for SAP’s disruptive ways.

 


 

Today, we are off to an exciting beginning. Former San Francisco mayor Gavin Newsom stopped by our booth, as did CBS News. SAP HANA will be featured in the 5pm and 6pm segments today; don't miss it.


A couple of months ago, a number of SAP experts, including Vishal Sikka in his SAP HANA Effect blog, wrote posts clarifying what SAP HANA is really all about. It appears that at least one of our competitors did not read those posts, or the many related stories in the print and electronic media. To ensure that our wider audience does not miss the core differentiating factors of the HANA platform, I am taking another stab at describing for our customers and prospects what your HANA advantage is.

HANA represents the next generation of enterprise computing, especially in database technology. It is a modern data management and processing platform for both real-time analytical and transactional applications. It enables organizations to analyze business operations based on large volumes and varieties of detailed data in real time, as business happens, eliminating the latency and layers between OLTP and OLAP systems for "real" real-time. The HANA advantage is a tightly integrated system whose components are fully transactional and well integrated into the system optimizer. Scale-up and scale-out work seamlessly for all components: OLTP, OLAP (operational as well as warehouse operations), text processing and search, planning, and pure application development.

OK. So how does this address some of the misinformation being spread out there?

Let us look at some baseless charges being made about SAP HANA by Oracle on their website (Compare Oracle Exalytics and SAP HANA).


Baseless Charge #1: Unproven, Incomplete Solution

Those who have made this charge are obviously not keeping up with the news. SAP is fast closing in on 600 customers for SAP HANA, with revenue numbers to validate this growth pattern – and this just over a year after it was made generally available. In fact, SAP is "on track to generate at least €320 million (US$403 million) in revenue this year" from SAP HANA.

Can Oracle please share numbers to support how well Exalytics is being adopted by the market?

 

Perhaps our traditional database competitors are a little intimidated by the fact that HANA is the fastest-growing database management system, in terms of market share, on the planet! (Refer: SAP Press Release – June 8, 2012, or IDC, Worldwide Relational Database Management Systems 2011 Vendor Shares, Doc #234894, May 2012).

 

This charge obviously insults the intelligence of those who are rapidly adopting the solution. Would they do so if it were unproven? Not many who seek to compete with HANA can claim such a rapidly growing customer base. For example, it would be very interesting to know how many customers are in production with the Oracle Exalytics "machine".

Baseless Charge #2: Few Applications Run on HANA

In slightly over a year, SAP has already delivered 25 solutions that run on HANA, turbo-charging our customers' processes (also refer to Featured Solutions at: https://www.experiencesaphana.com/community/solutions):

 

  • SAP CO-PA Accelerator
  • SAP Customer Segmentation Accelerator
  • SAP Finance & Controlling Accelerator
  • SAP Sales Pipeline Analysis
  • SAP Collections Insight
  • SAP Cash Forecasting
  • SAP Planning and Consolidation on HANA
  • SAP Sales & Operations Planning
  • SAP Supplier InfoNet
  • Banking Financial Reporting RDS
  • Banking Transaction History RDS
  • SAP Smart Meter Analytics
  • Sales Analysis for Retail
  • SAP Sentiment Intelligence with SAP HANA
  • SAP Situational Awareness with SAP HANA
  • SAP Program Performance Analysis for A&D
  • SAP Accounting for Financial Instruments, option for accelerated processing
  • SAP Global Trade Services for sanctioned-party list screening with SAP HANA
  • SAP Business Warehouse on HANA
  • SAP Business One – Analysis
  • Recalls Plus
  • ChariTra
  • My Runway
  • SAP Grid Infrastructure Analytics
  • SAP Precision Retailing

In this short period of just over a year, 60+ unique SAP HANA use cases have emerged across industries and lines of business. In addition, there is a whole emerging ecosystem of independent software providers who are leveraging SAP HANA as the platform of choice to bring to life applications that cannot be run on any other database management system.

 

Can the pretenders, including Exalytics, claim to have such traction?

Baseless Charge #3: Actual Performance May Vary

In another example of how some of our challengers are behind the times, this charge appears to be based on dated marketing information (from as far back as 2009). The interesting thing is that SAP HANA, the product, didn't exist then! The comparison leading to this charge is based on a different product altogether! To enlighten those who might have missed the information about SAP HANA's spectacular performance, here are some authentic sources:

SAP HANA Performance whitepaper: https://www.experiencesaphana.com/docs/DOC-1647

EML Benchmark: https://www.experiencesaphana.com/docs/DOC-1769

I would like to issue a counter-challenge – Can the HANA-bashers produce similar published results on scale out, or other performance measures?

Baseless Charge #4: Multiple Vendors, Finger Pointing

The charge is that, for support, customers would have to go to multiple vendors because no one entity is responsible for the solution. Really? Do the folks making this charge understand that a vendor can provide a collaborative solution to a customer without support problems? Perhaps the only way they know is to work with their own stacks and lock customers in. SAP HANA is built with a fundamental outlook that looks out for the customer, giving the customer choice: the choice to use SAP HANA with a variety of products, and the choice to decide how to manage their data center stack. Unlike our challengers, SAP is not interested in disrupting the business and business relationships of our customers.

Baseless Charge #5: A Band Aid for SAP Software

To quote John McEnroe, "You cannot be serious!" Why on earth would the most successful enterprise applications company in the history of computing have to resort to band aids? As a matter of fact, much of our HANA success has come because customers understand that this solution was produced by SAP because we understand the limitations they have had to live with until now: limitations rooted in the shortcomings of traditional database management systems and in cobbled-together solutions pretending to provide "true real-time" analytics. Perhaps they should understand that one aspect of "true real-time" is the ability to draw on "real-time" data, as it is happening, without any pre-fabrication. For example, it is clear from public documentation that there is a 1 TB limit on Oracle Exalytics, and from all indications only about 400 GB of this is actually available for in-memory data (refer to Peak Indicators, 2012, an Oracle Education Center and Oracle Gold Partner), since a significant portion is used as working memory by the components (e.g., Essbase, TimesTen, OBIEE) that have been put together. The real-time nature of the source data is lost as Exalytics resorts to old-fashioned caching to produce an illusion of true real-time. In comparison, SAP HANA does not need to take time out of the process: there is no tuning, no pre-aggregation, and no caching! That is "true real-time."

Baseless Charge #6: Risky Changes

SAP HANA has best-in-class high availability and follows industry-standard data center procedures. Our leading hardware partners (names such as IBM, HP, Fujitsu, Cisco, Dell) will tell the skeptics that a SAP-certified high-availability solution for SAP HANA comes with appropriate configuration recommendations for the customer’s need at hand. Here are a couple of authentic sources of information:

HP - http://h20219.www2.hp.com/enterprise/us/en/partners/sap-high-performance-analytic-appliance.html

IBM - http://www-03.ibm.com/systems/x/solutions/sap/hana/

SAP HANA is ready for a fair comparison – How about the competition?

Many of these baseless charges were already rebutted in a blog posted by Vishal Sikka back in May. But some people are more interested in trying to scare their customers into staying locked in with their old-school notions of how to handle the challenges of the future. To these folks, I have only two things to say: (A) focus on true innovation and give your customers choices that enhance business value for them, and (B) if you must compare your products with SAP HANA, let's agree to a formal benchmark test (e.g., the SAP BW-EML benchmark from the SAP Benchmark Council), one based on criteria reflecting real customer scenarios and not merely hypotheticals in the lab. SAP is confident on this front because a number of our customers have already done these comparisons with real scenarios, and in some cases have decided to decommission existing solutions based on other products. Let's put your doubting minds to rest once and for all, and let's get on with the business of making our customers successful.

Marc Bernard

SAP HANA for Beginners

Posted by Marc Bernard Sep 7, 2012

(February 2014: Now with more and updated links as well as a section on careers around SAP HANA)

 

Everyone has to start somewhere. And yes, I was a SAP HANA beginner at one time, too (going back to the first days of BW Accelerator). The great news is that we have a wealth of information and plenty of learning opportunities about SAP HANA out there and much of it is available for free!


Are you new to SAP HANA and eager to learn? Congratulations and welcome to the SAP HANA community! In the world of SAP HANA, you want to learn fast, of course. So in the spirit of "Accelerated Learning" (Four Phases of Learning), I suggest a few steps to get you started. Just understand that you can spend hours or days behind each link below and get lost in cyberspace. Bookmark this blog so you always have a place to come back to and continue your SAP HANA journey.


Note: This blog covers SAP HANA as a database and platform. We will have to cover learning about specific applications that run on top of SAP HANA another time.


Phase 1: Explore


As a first step, browse the SAP websites and see what others have done or plan to do with SAP HANA. This can be very exciting, especially if you have some aha moments like "Wow, I didn't think this was possible" or "Awesome, I will try this myself" or, even better, "Spectacular, I will use SAP HANA for my start-up company and become rich".


Phase 2: Learn


By now you should have an understanding of what you can or want to do with SAP HANA, and you wish to know more about in-memory technology and the SAP HANA solution. There are many different learning styles; pick the one that suits you best.


Phase 3: Exercise


If you haven't done so, now is the perfect time to register on two SAP websites that are essential for your success. This will give you access to a tremendous amount of SAP HANA know-how and the ability to collaborate with like-minded 'HANAnauts'. And it's free!


So far it has been all theory, and it's time to "get your hands dirty". Again, there are various options available, from instructor-led classroom training to try-it-yourself SAP HANA cloud environments.


At this point everything is still pretty new for you, and a long URL is better than a short memory. Be sure to bookmark the online documentation or download it in PDF format to your computer (but check back regularly, at least for every SAP HANA support package stack, which are currently released every six months). You also want to save links to the SAP Community Network and this website, where you can collaborate with others who have paved the path before you.


You might have questions or get stuck during your project. No problem. Help is just a few clicks away.


Phase 4: Implement


Your company has decided to license and implement SAP HANA. Congratulations again! You are officially an SAP customer (if you weren't already), and with that you get access to more resources.


And if your SAP HANA software does not work as expected and you suspect a product error, don't be afraid to report the incident so SAP Support can analyze the issue:


Phase 5: Follow


Wait, weren't there four phases to accelerated learning? Right, but it's a fast-paced world and SAP HANA is speeding along quite nicely. It's essential to stay in touch and follow along on one or many of our social media channels:


SAP HANA Careers


Many of you would like some advice on how to turn your SAP HANA knowledge into a career and find a job (and make lots of money, of course). There are endless opportunities, and depending on your background you might take a different path to success. Rather than going into detail here, I refer you to an excellent blog series by John Appleby, who put together "The SAP HANA Career Guide". Becoming certified will increase your chances of getting hired. Check it out. If you are ready to apply for a job, you might want to take a look at the interview questions that some have compiled.


Finally, here are some resources for complete beginners to SAP (that was me about 17 years ago). I like the Blue Book, which gives great insight into the world of SAP projects.


Excellent, you made it through to the end. Overwhelmed? Just start at the top and take it step by step, and soon you, too, will become an SAP HANA expert. As always, we appreciate your feedback. Let us know what's missing or how we can make adopting SAP HANA a more beautiful experience for you.

About 50 startups attended the 11th global SAP Startup Forum, hosted in Palo Alto. The startups showcased their solutions to a group of 200+ attendees from local media, venture capital, thought leaders, SAP and SAP Ventures, and exhibited their solutions to SAP employees in a trade show setting.


SAP Startup Forums are a day of collaboration and learning, and our latest event took place in Palo Alto on August 30th, focusing on big data trends and technologies as they relate to SAP HANA.


Invited startups contributed to the big data discussions, engaged with and learned from SAP technical resources, heard from and met with key SAP leaders, explored customer and funding opportunities, and networked and won 'bragging rights' over the course of the day.


The Startup Forum finished off with a mixer held in the SAP Labs cafeteria. The startups set up small booths where they highlighted their products and solutions. SAP employees were invited to hear the startups’ pitches and “invest” fake SAP euros in the company they felt had the highest chance of success. Over 300 million “Euros” were in play and the “winning” Startup, the “People’s Choice,” received over 30 million Euros, and the SAP employees who invested were entered in a drawing for amazing prizes.



The winners:

  • The 'Best Pitch' award went to Teamly. Teamly helps companies get rid of performance reviews and replace them with a real-time alternative for managing employees all year.
  • EasyAsk won the ‘Most Innovative Startup award’. EasyAsk is the leader in natural language solutions making enterprise data easily accessible for business and mobile users.
  • The ‘People’s Choice award’ was given to Datameer. Built natively on Hadoop, Datameer provides a single analytics and BI application for data integration, analytics, and visualization of any data type and size.
  • Two startups, Numenta and Greenlight Technologies, took away the ‘Best Big Data Startup’ award.

 

We would like to thank everyone who attended the event, as well as the great volunteers who helped us make it an amazing experience. If you know of startups that might be interested in participating in a future SAP Startup Forum, they can register their interest by completing the following online form.

Last month, SAP hosted its first SAP CodeJam events in China. The events focused on SAP HANA and were held on Aug. 16th in Shanghai and Aug. 22nd in Beijing. The main objectives of the events were to offer support to the local online developer community through an offline, face-to-face, hands-on coding experience, and enable developers to learn at their own pace based on the information that SAP experts shared with them at the beginning of the event. Participants were given a very simple application example with backend data storage & processing (HANA) and data representation (BI/Dashboard). The application used HANA as the analytic platform to process personal spend transaction data. They also received a self-learning guidebook with detailed steps to successfully build their applications.


We had a maximum of 100 slots available across the two events, for which we received over 250 registration requests. Here are some pictures from the events:

The example data and the dashboard template file can be downloaded from here.

The English and Chinese versions of the self-learning guidebook can be downloaded here:


As the HANA community grows and more and more online learning materials become available, self-learning becomes an important way for developers to train themselves. We will develop multiple cases with detailed guidelines and explanations to guide developers in building their applications. We hope this motivates developers to learn and think about the development of in-memory applications. We strongly encourage developers to expand and add more features to their sample applications and share them with others in the online developer community.


The SAP CodeJam events are taking place around the world. Like SAP CodeJam on Facebook, follow SAP for Developers on Twitter, and bookmark the SAP CodeJam page on SCN to stay up to date with future events. For information about SAP technologies as well as tools and resources available to developers, visit http://developers.sap.com.


Note 1: SAP provided free HANA server instances through Amazon Web Services. The HANA related modeling and coding tasks in the self-learning guide can use this free HANA server instance from AWS. For more details on how to get the free HANA instance, please check this link. For installing the R environment, please read this blog.


Note 2: The data representation used in the example for the event uses BoE and Dashboard which are not provided by the AWS environment - you will need to find a BoE environment.
