
In my last blog I discussed choosing service levels in cloud computing that fit the business goals of your projects. I mentioned that when you want to take advantage of fine-grained optimizations of a technology, such as stored procedures and calculation views in SAP HANA, a service in a public cloud offers the right balance of efficiency, agility, and hands-on control.


I’m happy to report that the SAP HANA One and SAP HANA One Developer Editions will soon be updated to SAP HANA SPS 5 for those of you who have been waiting to access the new capabilities of SPS 5 in the cloud. While we cannot commit to an exact release version or date, we are targeting to have both offerings updated by mid-to-late January 2013.


View of Haleakala volcano poking out from the clouds, seen from the town of Hana on Maui.
[SOURCE © Forest & Kim, used in accordance with Creative Commons License]


The SAP HANA One and SAP HANA One Developer Edition will provide full access to new SAP HANA SPS 5 features, including natural language-based text analytics and expanded embedded predictive analytics algorithms. They will allow developers to take advantage of the new extended application services that are part of the HANA platform to develop applications that run directly within HANA using HTML5, JavaScript, SQLScript, XML/A, JSON, and OData. Developers can also work with the new enhanced business rules management and the extended application function library network. For more specifics about what’s new in SAP HANA SPS 5, please see the What’s New – Release Notes document.
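To give a feel for what consuming such an extended application service looks like, here is a minimal sketch of parsing an OData v2-style JSON response on the client side. The payload, service shape, and field names below are illustrative assumptions, not output from a real XS service.

```python
import json

# Hypothetical OData v2-style JSON envelope, shaped like what an XS
# service exposing a HANA view as OData/JSON might return.
response_body = """
{
  "d": {
    "results": [
      {"REGION": "EMEA", "REVENUE": 1200.5},
      {"REGION": "APJ",  "REVENUE": 980.0}
    ]
  }
}
"""

def parse_odata_results(body):
    """Extract the result rows from an OData v2-style JSON envelope."""
    return json.loads(body)["d"]["results"]

rows = parse_odata_results(response_body)
total = sum(row["REVENUE"] for row in rows)
print(rows[0]["REGION"], total)  # EMEA 2180.5
```

In a real application the body would come from an HTTP GET against the service URL rather than a string literal.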


Improved Administration


In addition to updating to SPS 5, SAP HANA One will be adding an enhanced administration interface. This will provide simplified management and administration for tasks such as:

  • Initial setup and configuration, including a click-through EULA and easy setup of default passwords and ports
  • Easy access to download the SAP HANA Studio and SAP HANA Client
  • Viewing the status of your SAP HANA One instance, including running processes, CPU and memory utilization, and total elapsed system usage


Expanded User Scenario Potential

The new features provided in SAP HANA SPS 5, when accessed in the cloud via SAP HANA One, allow an expanded set of user scenarios such as:

For data scientists

  • Short-term project instances for data analysis projects and sandboxes

For application developers

  • Proof of concept porting of applications to SAP HANA using SAP HANA One Developer Edition
  • Development and test instances for developing new applications in SAP HANA using SAP HANA One
  • Customer-specific instances for pilots, sandboxes, implementation accelerators, and production deployments using SAP HANA One

For administrators

  • Create sandboxes for training and evaluation
  • Create test and development instances
  • Spawn databases for special projects, temporary events, or unique access requirements, such as supporting different business partners


Supporting Cloud and Hybrid Cloud/On-Premise Scenarios


SAP HANA One supports productive usage of SAP HANA applications in the cloud, up to the size capabilities of the AWS EC2 Cluster Compute 8XL instance, which allows 30 to 32 GB of compressed data. You can create modern, analytics-intensive software services or hosted applications. You can also use SAP HANA One to add elasticity to your on-premise SAP HANA implementations for the scenarios listed above.


A key new feature of SPS 5 allows export and import of content from HANA systems while maintaining dependencies. Combined with the elastic, metered nature of Amazon’s cloud, this provides an agile, cost-effective way to copy systems between cloud instances, or between cloud and on-premise systems. For example, a SaaS startup could use this to operate a cloud-based application lifecycle as it migrates content and applications between development, test, golden master, and customer-specific instances, spawning, running, suspending, and deleting instances as needed.


Even more to come for SAP HANA One

There’s a lot we’re hoping to achieve in a short amount of time for SAP HANA One, and we expect to be able to provide new updates on an ever more frequent basis. This means you’ll get quick access to newly released versions of SAP HANA in the cloud. We’re also planning to bundle additional software capabilities into the service, support additional SAP HANA-based applications such as SAP Expense Insight on SAP HANA One, and support multiple public cloud providers. More detailed information will be made available as these different enhancements come closer to being released.


If you are a developer and want to get a feel for the development capabilities of SAP HANA, check out the free SAP HANA One Developer Edition.


If you would like to try out the production version of SAP HANA One, take advantage of our limited time offer for a $25 credit on AWS.

This holiday season, SAP took the opportunity to work with the World Wildlife Fund (WWF) to create an environmental campaign that raises awareness and helps preserve a big chunk of land in the Caucasus. WWF rates this area as one of the world’s most treasured biodiversity hotspots in need of protection.


The genesis of this idea came from an app, running on SAP HANA One, that divides parcels of land for users to adopt, helping the WWF preserve endangered wildlife and majestic forests. For this campaign, SAP secured 56,000 square meters of land, divvied up into 1 square meter pieces, to give away to anyone who wishes to participate.


It’s easy to adopt your green spot. You’ll need to use your Facebook account and go to http://spr.ly/MyGreenSpot. Follow the steps and be sure to print out your personalized Certificate of Green Spot Adoption once you’ve claimed your spot. You’ll see that the coordinates for your green spot are listed on the certificate just in case you’d like to visit your square meter of land.


This is SAP’s gift to you, and you can also share it with your friends!

Backint for SAP HANA is an API that enables 3rd party tool vendors to directly connect their backup agents to the SAP HANA database. Backups are transferred via pipe from the SAP HANA database to the 3rd party backup agent, which runs on the SAP HANA database server and then sends the backups to the 3rd party backup server.
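The pipe-based transfer described above can be sketched with a small toy simulation: one thread plays the database streaming backup bytes into a pipe, and another plays the backup agent consuming them from the other end. This illustrates only the data-flow idea, not the actual Backint API; all names and sizes are illustrative.

```python
import os
import threading

def database_writer(write_fd, payload):
    # The "database" streams the backup into the pipe, then closes it,
    # which signals end-of-stream to the reader.
    with os.fdopen(write_fd, "wb") as pipe_in:
        pipe_in.write(payload)

def backup_agent(read_fd, sink):
    # The "backup agent" reads the stream in chunks until EOF; a real
    # agent would forward these chunks to its backup server.
    with os.fdopen(read_fd, "rb") as pipe_out:
        while chunk := pipe_out.read(4096):
            sink.extend(chunk)

backup_payload = b"\x00" * 10_000  # stand-in for backup data
received = bytearray()

read_fd, write_fd = os.pipe()
writer = threading.Thread(target=database_writer, args=(write_fd, backup_payload))
reader = threading.Thread(target=backup_agent, args=(read_fd, received))
writer.start(); reader.start()
writer.join(); reader.join()

print(len(received))  # 10000: the agent received the full backup stream
```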


3rd party backup tools can be fully integrated with SAP HANA:

  • Execution of backup and recovery from SAP HANA studio and via SAP HANA SQL commands
  • Configuration of tool-specific parameters via SAP HANA studio


For more information on the integration of 3rd party backup tools with SAP HANA, please refer to the Administration Guide. Installation documentation for 3rd party backup tools will be provided by the 3rd party tool vendors.


Which tools are certified? (Updated 2014-10-16)

  • Symantec NetBackup 7.5
  • IBM Tivoli Storage Manager for Enterprise 6.4
  • Commvault Simpana 10.0
  • Hitachi Data Protection Suite 10 (integration via Commvault Simpana Backint interface)
  • HP Data Protector 7.0, 8.1
  • EMC Data Domain Boost for Databases and Applications 1.0, EMC Networker 8.2
  • SEP Sesam 4.4

You can find an overview of the certified tools in the Application Development Partner Directory: Enter the search term "HANA-BRINT" and click on a partner name for further details.


As a customer, what do I need to know when installing 3rd party backup tools on SAP HANA?

Backup tools that use Backint for SAP HANA can only be installed on the SAP HANA box if they have been certified by SAP. You can find further details on this and on the installation of 3rd party tools on SAP HANA in general in the following SAP Notes:

As a tool vendor, what do I have to do to get my backup tool certified for SAP HANA?

A detailed description of the certification process is available here.


Do I have to use a 3rd party backup tool to back up SAP HANA?

No, SAP HANA comes with native functionality for backup and recovery. Using 3rd party backup tools is an alternative to using the native SAP HANA functionality.


For more information on SAP HANA backup and recovery, please refer to the Administration Guide.

There are several reasons why SAP HANA works: reasons tied to the advance of information technology and to the economics of IT. The advantages of HANA come from these economics. Let’s see how IT has advanced through the client-server architecture, n-tiered architecture, virtualization, and cloud, and see how HANA fits in.


In 1988 enterprise computing was mainframe-based and a mainframe ran the entire stack: OS, database, application, and user interface (UI) on a single node. Within that stack programmers worked very hard to cut the overhead of hops between the layers down to the level of counting instructions.


However, the power of microprocessors was overwhelming. Therefore, system architects built client-server systems and offloaded some work from the mainframes. The overhead of hopping across the two tiers was enormous, but microprocessor economics made it work. Then the database layer was split out into a 3-tiered architecture; more work was off-loaded and the hop overhead increased again.


By the late 1990’s the n-tiered architecture was developed and it became possible to eliminate the mainframe entirely by replacing it with hundreds of commodity microprocessor-based servers.

While all of this was occurring, Moore’s Law was at work as well. Moore’s Law suggests that the number of transistors on integrated circuits doubles every two years, yielding a 2x performance improvement with each jump. Even though the use of automated systems was booming, the servers that comprised the architecture kept up. With n-tier spreading the workload and Moore's Law driving up the processing power on each server, by the late 1990’s the architecture and the processors had matured to solve most enterprise compute problems and the mainframe was done, if not quite dead.
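The compounding in that paragraph is easy to make concrete: doubling every two years means growth of 2 to the power of (years / 2). A two-line sketch:

```python
# Compounded doubling under Moore's Law: transistor counts (and, roughly,
# performance) double every two years, so growth over a span of N years
# is 2 ** (N / 2).
def moores_law_factor(years):
    return 2 ** (years / 2)

# From 1988 to the late 1990s (about 10 years) that is a 32x jump:
print(moores_law_factor(10))  # 32.0
```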

But the business software boom that made companies like SAP and Oracle what they are today continued on, and the number of servers required to service the demand went from hundreds to several hundred in each data center. The server farm was born.


Then n-tier morphed slightly into today's web-based architecture, with browser-based clients, the user interface rendered in a web server, applications spread across multiple application servers, and databases in clusters, all communicating over a network.


About 6-7 years ago, an interesting thing occurred. Moore’s Law, with its compounded growth, started to produce processors with computing power that was not immediately consumed by application growth. When upgraded servers with 2x the compute were added into the tiered architecture, all of a sudden the server farm became under-utilized. This caused IT shops around the world to start projects consolidating servers. Then multi-core servers emerged, another 2x increment was available, and the problem repeated itself. The servers IT just consolidated needed to be consolidated again.


With VMware, the servers in the n-tiered architecture became containers that could be easily relocated, and the n-tiers could be consolidated without changing the system architecture, but at a cost. In exchange for the convenience of having containers, virtualization added a 5%-20% performance penalty. In addition, because the underlying system architecture did not change, applications continued to hop over virtual machine boundaries: from the application layer to the OS, from the OS to the network layer, over the network to another network layer, to that server's OS, to the database layer, and back. This kludge was convenient but not efficient, and Moore’s Law masked the fat with ever more processing power.


Finally, to bring us up to date, we throw in the Cloud, where infrastructure is deployed to manage the provisioning and placement of virtual server containers. The Cloud moves the fat around with ease.

HANA was designed to optimize out the fat. HANA deploys as much application code as possible in the same address space as the database, running both as engines using lightweight threads. Hops are reduced in size or eliminated completely, and fat is trimmed. Further, HANA optimizes these engines to share data in-memory instead of by passing data blocks around. And HANA optimizes to the bare metal, as data is sent in full cache lines, often using special parallel instructions to drive the cores as efficiently as possible.
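One ingredient of this, column-oriented in-memory storage, can be illustrated with a toy contrast. This is a sketch of the column-store idea in general, not of HANA's actual implementation: an aggregate over one column touches a single contiguous array instead of visiting every row object.

```python
# Toy contrast between row-wise and column-wise storage for one aggregate.
# Row store: every row record must be visited to read a single field.
rows = [{"id": i, "region": "EMEA", "amount": float(i)} for i in range(1000)]
row_total = sum(r["amount"] for r in rows)

# Column store: the same field lives in one contiguous array, which is
# what lets a scan run in full cache lines.
amount_column = [float(i) for i in range(1000)]
col_total = sum(amount_column)

print(row_total == col_total)  # True: same answer, far less data touched
```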


These optimizations add up. We tend to focus at the micro level on the reduced latencies associated with I/O from memory instead of from other block devices, but what is really going on is bigger. The sum of these optimizations is often 1,000x, sometimes 10,000x, sometimes even 100,000x of fat removed.


So while we can highlight the many little efficiencies that distinguish HANA from the competition, the basis for this radical performance advance is not radical at all. It is simply a fresh start. HANA is a new database:

  • optimized for the current and best high performance computing technology available rather than designed for a hardware architecture that is aging out;
  • deeply integrating, rather than tacking on, the advances in database software that have emerged over the last 30 years;
  • with a few innovations from SAP thrown in for good measure.

If you google "How many Fortune 500 companies are implementing mobile applications", the results are telling. For example, Apple CEO Tim Cook suggests "almost every" Fortune 500 company will deliver BI to iPads and iPhones. The question is: how will your company support complex BI queries from mobile clients?


One answer will be to limit the queries to a narrow set of "easy" queries that are prepared, or can be answered by a narrowly prepared cube. But these workloads have been running on conventional BI infrastructure for ages and rarely meet the < 2 second service level required by a mobile device (and remember, this is 2 seconds to the device, not 2 seconds for the query).


The right answer is to provide a 2 second response to most queries, to the most valuable queries, and sometimes to complex queries, not just to the easy queries. These queries are not just from employees; they come increasingly from customers. It is not a rhetorical question to ask: how are you going to service your mobile customers?


Technology has constrained BI for a while. The constraints have made it difficult for IT to delight their constituents. IT has had to limit what can be delivered, to create narrow applications for customers, and to support only a narrow band of queries for BI. Even this can be a strain. Current BI infrastructure is often at the edge, tuned to the max with little room to support new requirements. If you try to add mobile client requirements to this infrastructure, it can be costly. The expense of people resources to re-design, and then re-tune to the max, to service mobile clients will be daunting. The time associated with these re-design and tuning projects can be substantial, delaying the benefit of mobility to the business.


Have a look at our One Petabyte Test results here. HANA serviced over 100,000 queries per hour, simulating 5,000 concurrent users with a mixed workload of simple, complex, and very complex queries; the worst-performing query had an average response of 3.2 seconds, with all other queries completing in under 2 seconds. HANA just runs this fast; this is not a tightly tuned workload designed for marketing purposes; this is a lab where we push to find HANA's limits.


Mobile computing is here now. Enterprises need to solve for this quickly and without risk. With HANA you can throw hardware and software at the problem, not people. Throw HANA at the problem and you gain infrastructure that is fast and agile without tuning, without pre-joined data, without aggregates, without cubes, and without large project expenses with every new set of requirements. The benchmarks and customer testimonials demonstrate the unmatched performance of HANA. HANA is a platform that can be the basis for BI on mobile devices without strain and without risk.

The SAP HANA (in-memory) database supports a wide variety of interfaces: ODBC (for C/C++-based programs), JDBC (for Java applications), ODBO (for analytic applications), and internally the Python DBAPI and SQLDBC. The SAP HANA database interfaces provide the implementation layer between the database and the application. The supported interface components provide a database access Application Programming Interface (API) for their respective language and environment. For some interface components the API is defined by a standards body or an application.


The objective of the certification program is to integrate SAP HANA with 3rd party ETL tools. Once the partner ETL tool meets the minimum requirements, SAP certifies the integration. The following interfaces are currently available for partners to certify their ETL tools against SAP HANA.


Interface   Environment   API Description                      Version
JDBC        Java          Java Community Process (JCP)         3.0, 4.0
ODBC        C/C++         SQL Standard (SQL/CLI), Microsoft    3.0, 3.51
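Whatever the driver, these standardized interfaces share the same basic call pattern: connect, open a cursor, execute SQL, fetch results. The sketch below shows that shape using Python's DBAPI with sqlite3 purely as a stand-in; against SAP HANA you would instead use a HANA ODBC/JDBC driver and its own connection parameters, and the table and data here are made up for illustration.

```python
import sqlite3

# sqlite3 implements the same DBAPI call pattern (connect -> cursor ->
# execute -> fetch) that an ODBC/JDBC connection to HANA would follow.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [("EMEA", 1200.5), ("APJ", 980.0)])
cur.execute("SELECT region, revenue FROM sales ORDER BY revenue DESC")
result = cur.fetchall()
print(result)  # [('EMEA', 1200.5), ('APJ', 980.0)]
conn.close()
```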


Certification Prerequisites

The following system landscape prerequisites must be deployed by the partners in their lab environment for the SAP HANA certification:

  • SAP HANA (SP04 or SP05, latest revision) appliance
  • SAP HANA Studio and SAP HANA drivers installed (latest revision)
  • SAP data source connectivity (e.g., ECC, ERP, BW, SCM, CRM)
  • Non-SAP data source connectivity (3rd party applications/databases)
  • Partner ETL tool (latest release version) landscape configured and connected to SAP HANA via a dedicated ODBC/JDBC user
  • Connectivity to an SAP HANA cloud instance on Amazon Web Services, for cloud-based ETL use case scenarios

Technical Requirements

This section provides a list of the ETL-relevant HANA database features and associated requirements for tighter integration between the ETL tool and the SAP HANA database. A partner ETL tool certified on SAP HANA should enable consumption of these features by the end user for all data replication business scenarios. If the ETL tool has any limitations in supporting these features, they must be communicated clearly during the certification process.

  • Connectivity, security
  • Read, write, and update data in the SAP HANA database
  • Bulk data inserts, delta loading mechanism
  • Administration, monitoring, error handling, and debugging
  • Mechanism to import data into the SAP HANA repository for all metadata objects
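Two of the requirements above, bulk inserts and a delta loading mechanism, can be sketched as follows. sqlite3 again stands in for a HANA connection, and the high-watermark approach shown is one common delta technique, not a mandated part of the certification; table and column names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, changed_at INTEGER)")

def bulk_load(rows):
    # Bulk insert: one round trip for many rows via executemany.
    cur.executemany("INSERT INTO target VALUES (?, ?)", rows)

def delta_load(source_rows, watermark):
    # Delta load: only rows changed after the last seen watermark
    # are transferred, and the watermark advances.
    fresh = [r for r in source_rows if r[1] > watermark]
    bulk_load(fresh)
    return max((r[1] for r in fresh), default=watermark)

watermark = delta_load([(1, 100), (2, 150)], watermark=0)          # initial load
watermark = delta_load([(1, 100), (2, 150), (3, 200)], watermark)  # only id 3 is new
cur.execute("SELECT COUNT(*) FROM target")
row_count = cur.fetchone()[0]
print(row_count, watermark)  # 3 200
```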

To participate in the certification program, apply through SAP ICC, then work with the SAP HANA Product Management team to validate the above technical requirements and use case scenarios, and finally execute the necessary test scripts to successfully complete the SAP HANA certification.

In Episode 3.1, we introduced you to the basics of calculated attributes and measures. As a follow-up, this episode explores more advanced features around this topic. You will become familiar with various built-in functions and the steps to create more complex formulas to achieve your desired calculations. We also show you examples of how calculated columns can be stacked on top of each other to create complex calculations.

Enjoy the video:




Note: For licensing questions regarding the usage of this functionality, please check with your Account Executives.

As you start to leverage SAP HANA to build out real time solutions in your organization, be sure to consider how SAP HANA fits into the broader SAP Real-Time Data Platform (RTDP).  This platform vision includes a portfolio of solutions, with SAP HANA at the core, which are tightly optimized to work together to solve modern-day data processing and management challenges.

Many customers have asked for more details on how we will leverage the rich capabilities of our data management offerings coming from SAP Sybase and SAP Business Objects in conjunction with SAP HANA.   The SAP Real-Time Data Platform combines the power of all these solutions with evolving capabilities over time to integrate, optimize and then synthesize these solutions into a unified data platform architecture.  In particular, the platform is focused on delivering the following essential capabilities:  online transactional processing, data analytics, data processing and analysis, data modeling and movement, information governance, as well as unified administration and monitoring.

In order to help you understand how these solutions will evolve over time as an integrated SAP Real-Time Data Platform, SAP has just released several Statement of Direction documents covering various integration points in the SAP Real-Time Data Platform.

Links to these SAP Real-Time Data Platform Statement of Direction documents are available here:


Some of the highlights of the SAP Real-Time Data Platform goals outlined in these documents are the following:

  • The main areas of focus for the platform are to ensure “quick time to value”, “simplification”, and “performance and scale”.
  • Data processing: The platform is intended to deliver end-to-end optimized data processing across the following core component engines: SAP HANA, SAP Sybase ASE, SAP Sybase IQ, SAP Sybase SQL Anywhere, and SAP Sybase Event Streaming Platform (ESP), with support for third party databases and Hadoop data environments as well.
  • Data movement: Optimization of data movement is planned via real-time data replication, high-speed query federation, and low-latency interconnects. SAP Sybase Replication Server, SAP Sybase Event Streaming Platform, and SAP Business Objects Data Services will be key components of this data movement area.
  • Administration/Monitoring: Common administration and monitoring based on SAP Sybase Control Center via a newly designed HTML5 web front-end, allowing even mobile devices to monitor server assets.
  • Data modeling: An integrated modeling and development environment is planned based on SAP Sybase PowerDesigner modeling capabilities and supporting the SAP HANA Studio for development purposes.
  • Framework: A warehouse scale elastic framework is planned across data center(s), on premise or cloud-based with multi-tenancy support.


Stay tuned for more details on the SAP Real-Time Data Platform as we continue to map out the go-forward plan. For more details on direction for specific solutions in the platform also visit the SAP Solution & Product roadmaps areas on SAP Service Marketplace at http://service.sap.com/roadmap in the Database & Technology area.



Cookbook delivers recipes: SAP CRM on SAP HANA

SAP has just announced SAP CRM on SAP HANA, an in-memory, real-time customer relationship management system. To help you implement this product, SAP has created a Cookbook.


What is the Cookbook?

To help you get SAP CRM on SAP HANA, SAP has created the SAP CRM on SAP HANA Cookbook. The Cookbook contains information about every aspect of getting this product, from the adoption process to installation, upgrading and migration.

New customers

If you are a new customer, start with the Cookbook for an overview of the technology, business value and adoption paths. Once you are comfortable with the environment, read Deploying SAP CRM on SAP HANA to learn how to prepare for and install SAP CRM on SAP HANA.

Current customers

If you are a current customer, the Cookbook helps you get SAP CRM on SAP HANA by explaining how to upgrade and migrate your current SAP CRM system. Upgrading your current SAP CRM system is important because it brings your system up to the required minimum levels.

How the Cookbook affects your business’s success

The Cookbook is your comprehensive resource for all tasks and questions related to SAP CRM on SAP HANA. The Cookbook helps you get SAP CRM on SAP HANA up and running as quickly as possible.

With the release of HANA SPS 05, SAP has opened up 3rd party BI certification for SAP HANA, and customers can look forward to having their non-SAP BI tools certified on SAP HANA. The purpose of this certification is to go beyond simply issuing SQL statements and to provide a more integrated solution (e.g., SSO, hierarchy visualization, LCM) when connecting to SAP HANA as a data source, since these are common requests that we hear from several HANA customers even when working with SAP’s native BI tools.

Some of the requirements included in the certification are:

  • SSO through Kerberos and SAML, and pass-through of HANA analytic privileges from the BI front end
  • Support for HANA views (analytic views, calculation views, etc.)
  • Ability to turn off caching on the BI front end
  • Support for HANA variables and hierarchies
  • Enabling LCM of runtime objects in the HANA repository (including non-SAP BI artifacts such as reports, dashboards, etc.)
  • Integrated error handling and monitoring

We are already working with several BI vendors and plan to have our first certified BI vendor in early 2013, depending on the lab testing.

To participate in the certification program, apply through the SAP ICC team, then work with SAP HANA Product Management to validate that the above requirements are met, and finally execute the necessary test scripts required for passing the certification. Certified partners can be tracked through the SAP Partner Information Center.

As of June 2014, the following BI partners are certified on SAP HANA BI (SQL/MDX):


Microstrategy, IBM Cognos, Arcplan, Information Builders.

