
Microsoft has announced a new in-memory OLTP product, code-named Hekaton, that will be available in 2014-2015. In addition, they have introduced some new columnar capabilities in SQL Server 2012. Industry analysts and Microsoft themselves are positioning these products against HANA.

 

There are lots of technical reasons to believe that HANA is a far superior product today than what Microsoft has announced will be available a few years out. The SQL Server products require a batch process to build the index that drives the new columnar features; the OLTP product does only OLTP, so a redundant copy of the data is required. There are odd SQL limitations that require hand-tuning to get it all to work, and so on. But in this blog I'd like to focus on a bigger, more strategic, more important distinction: the Microsoft products, today and upcoming, are so old school, so 1990s, maybe even 1980s.

 

For the last 30 years the DBA has been the guru making the business hum by tuning query syntax, applying indexes, building cubes, and moving data from OLTP databases to operational data stores to data warehouse databases to data marts and back. They were the consummate engineers working under serious constraints imposed by the technology. Every time a new business function came on board, the DBA was called in to re-architect the eco-system to somehow make it all work. Every query had to be approved so that nothing broke the fragile system... and ad hoc queries were often forbidden or only run in exceptional cases. If you wanted to perform some mathematical analysis on the data you needed to export the data to yet another "analytics" mart or SAS data farm. These disparate systems came to be referred to as "stovepipes", and stovepipes propagated until the landscape looked like Mary Poppins' London.

 

For the BI/Data Warehouse part of the eco-system the shared-nothing architecture created an environment where you could consolidate some of the stovepipes into a single house with lots of fireplaces (Ok, I'll give up on the stovepipe metaphor here). Shared-nothing gave us the basis for "big data" databases. Shared-nothing lets us grow a cluster as the business needs grow, whether that means more data, more users, or more sophisticated queries.

 

The starting point for HANA is the recognition that current and upcoming hardware technologies are capable of solving for all of this in a single database instance, if only the database were rewritten to fully utilize the hardware. HANA will support OLTP, OLAP, and analytics workloads simultaneously. The starting point for HANA is the idea that there should be no tuning required: every query should run fast without an index and without pre-aggregation. The starting point for HANA is that replicating data all over the enterprise should be the exception, not the rule. The start is to remove the constraints.

 

You should see the numbers coming from our Petabyte Cloud project (here). We emulated 5600 concurrent users running a mixed workload against a 1000TB database, and the complex queries returned in under 2 seconds. SQL Server users are excited when complex queries complete in under 6 seconds on a tiny system after running an index build that takes an hour (here). And this was after tweaking the SQL syntax with a deep SQL tuning expert in the loop.

 

The Microsoft announcements are all old-school. Six second queries don't work when your users are querying from mobile devices. Single-node database technology does not work in the age of big data. Query tuning will never keep up with the needs of the end user community. Microsoft also announced a new version of their shared-nothing SQL Server and they announced a Hadoop interface. So their view of the world requires an in-memory OLTP database with the data moving to a shared-nothing data warehouse (they were about 25 years late with shared-nothing) and then again to a data mart where a columnar index will be built to enable BI and analytics. It looks like 1995 in 2015.

 

Microsoft points out (see here) that one advantage they have is that SQL Server is "the data platform that customers are already using". Exactly! This 1995 architecture was designed for single-core, 486-class x86 systems with 256MB of RAM. If you want to keep up it is time to move on. SAP has.

The introduction of the SAP HANA Extended Application Services alongside the SAP HANA database has presented a unique opportunity to provide customers with a solution that leverages the proximity of the database and application server, greatly simplifying the technical system landscape. HANA now represents a complete application platform.

 

This combination provides an opportunity to explore how in-memory technology can be used to not only transform enterprise applications, but also how they are developed.

 

Over the last 12 months we have spent extensive time with customers and partners understanding this. It turns out that the number one barrier to business satisfaction and developer productivity is the impedance between the various roles and layers involved in the application development process. For example, at one of our large partners there are 9 distinct, specialized roles between the definition of business intent and the deployment of an application.

 

Much of this complexity is due to the fact that traditional application run-times are performance-bound. In order to achieve optimal execution, the application’s code undergoes careful optimizations – data loads and transfers, query optimizations, etc. – to work with the underlying engine. Such optimizations do not add business value, as they have nothing to do with the realization of business requirements in code. We have completely rethought this process based on the new capabilities enabled by HANA. The result is a simplification of the development process, because we can now eliminate such optimizations from the code.

 

In short, it is now possible to cleanly separate intent from optimization, meaning that we can now develop high-performing, native in-memory applications with tools that are incredibly simple and easy to work with. The speed of HANA also enables developers to see the impact of their code as they write it, and to share the results with end users immediately. Tasks and activities that have traditionally required handovers across roles or domains can now be consolidated in a single pair of hands.

 

We also realize that business applications live beyond the technology and its progression. We want to allow developers to build an application in such a way that they can transparently leverage technology improvements. The corollary is that an application developer should worry less about the technical application setup and its optimization, and more about the application’s intent.

 

So how do we decouple optimization from intent?

 

Enter RDL – the River Definition Language.

 

RDL is an executable specification language that allows development teams to collaborate and specify what applications do, while staying away from how the application’s requirements are realized – all this without compromising on application performance or robustness on top of HANA.

 

RDL is what stands at the heart of the River Development Experience – a next-generation development experience targeting various development scenarios and needs in HANA. RDL aims to provide a comprehensive solution for specifying a complete business application, covering almost all essential core aspects: data model definition, business logic, access control, validation, error handling, etc. It is constructed as a coherent set of DSLs that work together to provide an easy-to-use textual specification of the application, all while being automatically compilable into the HANA execution runtimes, namely HANA XS and the HANA Calculation Engine. So on top of being readable and easy to consume, the technology lends itself to a low learning curve for HANA.

 

In order to achieve this, we adopt several principles that manifest themselves in the language design:

  1. Readability is key: expressive and simple language constructs that enable higher level concepts to be easily specified.
  2. Declarative nature: focusing the developer on capturing the application’s intent, rather than its execution mechanisms. Conceptually, it is similar to the idea of standard SQL, only applied to a broader (and more complex) domain.
  3. Coherency: different aspects of the applications are specified in a manner that allows interoperability. RDL conveniently leverages the underlying HANA data definition and query languages to interoperate with additional syntax specifying further semantics like business logic, validation and access control.
  4. Flexible modularization: support for different development working modes and scenarios – from small focused teams working rapidly on well-defined applications, to large-scale applications with numerous developers working iteratively to prototype and refine the application’s definition. This includes separation of concerns in the application’s code, as well as iterative application specification.
  5. Openness: allow seamless interoperability with existing assets, e.g. legacy code and existing database tables. Also allow for integration of services and libraries developed using other languages/technologies.

 

Having a language that adheres to these principles allows the application specification to be more succinct and readable, resulting in a considerably smaller bill of materials. This in turn results in easier maintenance and simpler lifecycle management of application code. Being declarative allows the application’s execution (its “runtime”) to improve transparently with the underlying technology. The declarative nature of RDL “shields” the developer from technology changes, essentially distilling the application’s real assets: its data model with its associated behavior and related business-scenario-specific information.

 

Of course, some scenarios require very technology-specific coding or knowledge. RDL enables that as well – allowing the developer to “break out” into the underlying technology and apply optimizations or useful techniques in a specific technology. It’s important to note that RDL does not constitute a complete stack of its own. RDL is purely a design-time “creature”, which is compiled to underlying runtime containers – JavaScript in HANA XS and SQLScript in the HANA database. There is no independent RDL “runtime” or virtual machine of any kind. There’s no artificial layering or abstractions introduced into the application’s runtime. This allows us to avoid the unnecessary translation and data conversion that is usually introduced as a result of abstractions created in different stacks.

 

For example, think of an application using ODBC/JDBC to read data from a database, and the amount of data conversion/translation carried out only to deliver the data from a DB server to an application server. Having the application’s RDL specification compile into the necessary containers makes this translation redundant.
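To make the contrast concrete, here is a minimal SQLScript sketch of the kind of in-database execution that compiling to the HANA containers enables. The procedure, table, and column names are invented for illustration; the point is that the aggregation runs inside the database, so only the final result crosses the wire instead of every row.

    -- Hypothetical sketch (invented names): aggregate in the database
    -- rather than fetching all order rows into an application server.
    CREATE PROCEDURE get_customer_revenue (
        IN  cust_id INTEGER,
        OUT revenue DECIMAL(15,2)
    ) LANGUAGE SQLSCRIPT READS SQL DATA AS
    BEGIN
        -- Only the single aggregated value is returned to the caller.
        SELECT SUM(amount) INTO revenue
        FROM orders
        WHERE customer_id = :cust_id;
    END;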

 

Additionally, given that the entire application is coded in a language that encompasses various aspects of the application, the RDL compiler can more easily optimize across these aspects – a feat that is considerably harder when we have, for example, distinct languages to express control logic and data queries, and when interoperation occurs only at runtime.

 

RDL and RDE are fully integrated with the recently released HANA Extended Application Services platform and complement the JavaScript and SQLScript development experiences that are available with HANA. In fact, RDL compiles down to JavaScript and SQLScript and provides substantial benefits on top of these approaches. With regards to SQLScript, RDL lowers the barrier of entry for development of HANA-native apps, and does not require that a programmer have extensive database development experience. JavaScript, on the other hand, is an excellent language for general application development but lacks certain features that make enterprise application development easier, such as access control and embedded query and business semantics. That said, RDL can elegantly consume pre-existing JavaScript and SQLScript artifacts.

 

To summarize, developing an application using RDL has several key benefits over traditional technologies and development models:

 

  1. Easier and faster development and maintenance:
    1. Declarative, focusing on application intent
    2. Expressive language constructs
    3. Flexible code specification, enabling easier separation of concerns and iterative refinement of application code.
    4. Smaller bill of materials – coherency across different layers and components of the application.
  2. Easily leveraging HANA’s power, while remaining agnostic to underlying technology containers (XS, SQLScript).
    1. Can leverage any underlying supported runtime container, without compromising on running time optimization.
    2. Application execution improves together with the underlying technology, transparently, taking advantage of new capabilities.
  3. Open to legacy and extension code, in all supported containers.

 

RDL, combined with the River Development Experience (RDE), results in a superior application construction experience when compared to traditional application development technologies – a combination that leads to lower development and maintenance costs, without compromising on optimal execution.


Lior Schejter, Jake Klein


The recent US financial meltdown and the unfolding of the European debt crisis have added more pressure on banks to develop deeper visibility into their liquidity risk profiles and to meet new, tighter regulatory requirements.

 

A fundamental problem banks face when measuring liquidity risk is dealing with the massive amounts of cash flow transactions that need to be constantly tracked and analyzed. Legacy BW architectures and technologies have made it impossible for banks to develop real time insights into these cash flows. The time gap between recording the cash flows and analyzing them meant that banks had to rely on outdated information to understand something as critical as their liquidity position and ability to fulfill their future financial obligations.

 

Moreover, the recently introduced Basel III regulations require banks to subject their cash flows to different stress scenarios to better understand how their liquidity risk positions would be impacted under different market conditions. This has made it even more critical for banks to have a lightning-fast solution capable of analyzing these massive numbers of cash flow transactions according to various predefined simulated scenarios.

 

SAP has recently released to customers a new HANA solution that will help banks develop real-time visibility into their liquidity risks and take corrective actions to address any cash flow gaps that might impact their liquidity positions.

 

SAP Liquidity Risk Management is a new HANA-powered application that provides banks with the ability to perform real-time, high-speed liquidity risk management and reporting on large volumes of individual cash flows. The solution makes it possible to instantly measure key liquidity risk figures, such as Cash Flow Gaps and the Basel III Liquidity Coverage Ratio.

The solution also allows banks to apply different stress scenarios to gain deeper insight into how market volatility can impact their Forward Cash Exposures. Banks can also take corrective actions through calculation of the Counterbalancing Capacity to resolve potential liquidity bottlenecks.
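As a rough illustration of the kind of query such a solution pushes down to HANA, here is a hedged SQL sketch; the schema is invented and is not the application's actual data model. It buckets individual cash flows by due date and computes the net gap per day over a 30-day horizon:

    -- Hedged sketch (invented schema): net daily cash flow gap over the
    -- next 30 days, computed directly on individual cash flow records.
    SELECT due_date,
           SUM(CASE WHEN direction = 'IN'  THEN amount ELSE 0 END)
         - SUM(CASE WHEN direction = 'OUT' THEN amount ELSE 0 END) AS cash_flow_gap
    FROM cash_flows
    WHERE due_date <= ADD_DAYS(CURRENT_DATE, 30)
    GROUP BY due_date
    ORDER BY due_date;

A ratio such as the Basel III Liquidity Coverage Ratio would then relate a stock of high-quality liquid assets to the net outflows produced by a query of this shape.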

Other solution benefits include:

  • Calculate global liquidity risk profiles on hundreds of millions of cash flows within seconds
  • Support key Basel III ratios
  • Predefine stress scenarios through adjusted run-off rates and bond haircuts
  • Drill down seamlessly into cash flows by country and product at the day level
  • Reduce borrowing and funding costs with real-time liquidity visibility

 

You can read the solution brief and watch a short overview and demo video about the solution here.

SAPPHIRE NOW + SAP TechEd, SAP’s exciting premier event in EMEA, is here. As part of our innovation roadmap for customers, SAP is announcing SAP CRM now powered by SAP HANA as well as 6 new applications, all powered by SAP HANA: SAP Demand Signal Management, SAP Liquidity Risk Management, SAP Accelerated Trade Promotion Planning, SAP POS Data Management, SAP Customer Usage Analytics, SAP Operational Process Intelligence.

 

This simply means that only 18 months after the release of the SAP HANA platform more than 30 solutions powered by SAP HANA will have been launched to help our customers transform their industry and create new differentiating value. I wanted to share with you more details on some of these new applications for lines of business and industries that we announced today.

 

SAP CRM Powered by SAP HANA

This is the first CRM application in the market bringing together transactions (OLTP) and analytics (OLAP). The application will now allow SAP CRM customers to dramatically accelerate daily customer-facing operations (e.g. search can run up to 115 times faster), empower business users with real-time customer insights, and innovate the way companies engage with customers for more personalized interactions on any device. Lenovo is a great example of how CRM powered by HANA will be a game-changer in marketing, sales and service. Lenovo is a US$29 billion personal technology company and the world’s second-largest PC vendor. The company has been an early adopter of the SAP HANA platform. Xiaoyu Liu, Vice President, Global Application Development, made the following comment at SAPPHIRE today: “Knowing exactly what is happening in our business at any moment and being able to respond quickly to changing market conditions is critical for the success of Lenovo. SAP HANA has proven to be a high performing platform to help us achieve this goal. That's why we are exploring to supercharge Customer Relationship Management with SAP HANA. We have already seen 30 times faster performance in key processes during the customer validation phase. The first results are very promising to allow our business not only get 360 real-time business insight but also significantly accelerate all our customer-facing operations.”


SAP Demand Signal Management application

SAP Demand Signal Management is intended to help companies across multiple industries capture external market and downstream demand data, in near real time, and integrate it with their internal business data to drive insights and better decision making across the entire enterprise in areas such as sales, marketing and supply chain. The application will enable companies to leverage downstream demand signals, market research and consumer sentiment data to develop deeper market visibility and respond faster to market changes.

Watch the video here: YouTube Link

 

SAP Liquidity Risk Management application

SAP Liquidity Risk Management is aimed at providing banks with the ability to perform real-time, high-speed liquidity risk management and reporting on very large volumes of cash flows and instantly measure key liquidity risk ratios – such as the Basel III Liquidity Coverage Ratio and Cash Flow Gaps – to take corrective counterbalancing action and meet financial obligations. The application will allow banks to apply different stress scenarios – such as adjusted run-off rates and bond haircuts – to gain a deeper understanding of how market volatility can impact liquidity positions.

Watch the video here: YouTube Link

 

In addition to these announcements, we have many new HANA customer testimonials planned at the event. As an example, I will be moderating a panel discussion tomorrow with four customers: Lexmark, Smart Modular Technologies, Vodafone, and Ypsomed. Ypsomed is a worldwide leading independent developer and manufacturer of injection systems for custom-made self-administration and a supplier of pen needles for the treatment of diabetes, growth disorders or infertility, as well as for further therapeutic areas. Ypsomed has approximately 1,050 employees – a pretty small organization compared to Lexmark or Vodafone. As we were preparing the panel discussion, Fernand Portenier, CIO at Ypsomed, told me why SAP HANA was relevant for their small organization: “Ypsomed is now benefiting from near real-time business insight that previously took a full day to obtain. Business users are less dependent on IT and can access reports directly – creating a more agile environment that can respond instantly to market needs.” It will be an excellent opportunity for us to discuss further the user adoption topic with HANA.

Another new testimonial in this panel will be Lexmark. Lexmark International, Inc. provides businesses of all sizes with a broad range of printing and imaging products, software, solutions and services. Victor Rivera, Business Intelligence CoE Lead at Lexmark, will be covering the Big Data topic in this panel discussion and will explain to us in more detail his view of how "SAP HANA makes Lexmark more agile by fundamentally reducing our project life-cycle and increasing our ability to analyze Big Data."

Finally, I will also co-present a session with Marco Dockweiler, Enterprise Architect at BSH Bosch and Siemens Home Appliances, on a new HANA co-innovation project. The customer will describe how BSH Bosch und Siemens Hausgeräte GmbH co-innovated with SAP on the new SAP Cash Forecasting application, powered by HANA, to help improve cash forecast accuracy and optimize processes in the treasury department.

Make sure you join us for these sessions (physically or online) to learn more on these amazing HANA innovations and customer stories.

 

As usual, I look forward to reading your thoughts and comments.

 

Si-Mohamed SAÏD – VP Database and Technology for Lines of Business and Industries

A new HANA powered application: Enabling Demand Driven Supply Chains with SAP's Demand Signal Management

 

It is well recognized that the further upstream you are in the supply chain from the end consumer, the less visibility into end-customer demand you have. The latency and the lack of granularity in the demand information, as it gets transmitted through the various layers of the supply chain to you, exacerbate the situation. For a manufacturer, not only does it become difficult to accurately forecast, plan inventory, and establish responsive replenishment policies in tune with ever-changing customer demand, but understanding how their products are selling and how effective their sales and promotion tactics are becomes extremely challenging. Often, by the time deviations are detected and adjustments are made, the opportunity has passed you by. Hence having a good handle on the evolving demand situation at all times is a high priority for manufacturers today as they aspire to elevate from being forecast-driven to becoming demand-driven.

 

For years, SAP has been helping manufacturers plan and manage their supply chains with supply chain management and customer relationship management solutions. Now with the introduction of the new SAP Demand Signal Management solution, powered by SAP HANA, manufacturers will be able to get a holistic view of how customer demand and sales are materializing downstream in near real time. SAP Demand Signal Management allows manufacturers to capture large amounts of downstream data, like sales and inventory data from their retailers as well as market research data from syndicated sources, and combine it with internal data to create powerful analytics that instantly analyze and derive insights. In addition, given the increasing role that social media plays, one can combine social sentiment indicators to develop a comprehensive view of demand. Such information can also be used to predict future trends. Armed with such powerful capabilities, manufacturers can ensure that inventory levels, replenishment, sales and promotions are aligned with end-consumer demand, and thereby maximize revenue opportunities and reduce costs.

 

SAP Demand Signal Management addresses several challenges that manufacturers face today as they figure out their demand-driven strategy. First, the point-of-sale data they receive from retailers varies in granularity, quality and periodicity by retailer, and nomenclature such as product and location IDs differs from what they use internally. Collecting, cleansing and harmonizing this data is usually a very time-consuming and manual process. SAP Demand Signal Management provides a robust foundation that automates the process, quickly identifies any mismatches on an ongoing basis, and limits these to one-time setups so that the system automatically leverages the rules during any future occurrences of these mismatches – it is like a self-learning mechanism. Secondly, manufacturers spend a significant amount of funds to subscribe to market research data from various sources. SAP Demand Signal Management helps you integrate this data with retailer data and internal data and enrich it to make it useful for rich analytics and for supply chain, sales and marketing business processes. An open and extensible framework allows the rules for quality checks, harmonization and enrichment to be easily extended based on one's needs. Furthermore, as social media continues to gather steam, it is becoming more and more important for manufacturers to keep a finger on the pulse of end-consumer sentiment. SAP Demand Signal Management allows you to include social sentiment indicators to bring added dimensions to the insights.
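To give a feel for what harmonization means in practice, here is a minimal, hypothetical SQL sketch (table and column names invented, not the product's actual model) that maps each retailer's SKU to the manufacturer's internal product ID before aggregating point-of-sale data:

    -- Hypothetical sketch: harmonize retailer POS feeds against the
    -- internal product master via a retailer-SKU mapping table.
    SELECT m.internal_product_id,
           p.retailer_id,
           p.calweek,
           SUM(p.units_sold) AS units_sold
    FROM retailer_pos p
    JOIN sku_mapping m
      ON  p.retailer_id  = m.retailer_id
      AND p.retailer_sku = m.retailer_sku
    GROUP BY m.internal_product_id, p.retailer_id, p.calweek;

The self-learning behavior described above amounts to maintaining that mapping table automatically once a mismatch has been resolved.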

 

All this amounts to having to deal with very large volumes of data that need to be processed, accessed and analyzed quickly. SAP Demand Signal Management leverages the powerful in-memory capabilities of SAP HANA to provide the processing speed required for such a solution. With the rich set of consumable data in SAP Demand Signal Management, several state-of-the-art analytics are provided to address the needs of various roles in supply chain, sales and marketing business functions. With the investments and expertise that companies already have in SAP BW and SAP BusinessObjects, they can easily extend the analytics capabilities to meet their evolving needs. Further, one can develop targeted mobile applications leveraging the data in SAP Demand Signal Management to address the needs of specific roles.


With a holistic view of downstream demand, manufacturers can make well-informed decisions. Some of the expected benefits that SAP Demand Signal Management would bring are deeper market visibility, the ability to respond faster to changing market needs, increased forecast accuracy, better execution of sales and promotion tactics, improved new product launches, and fewer out-of-stock and out-of-shelf situations. All these benefits result not only in increasing revenues and reducing costs, but also in fostering stronger collaboration with retailers, given the mutual benefits to both parties.


To learn more about SAP Demand Signal Management, check out the solution overview video here and the Solution Brief here.

If you listened to Vishal Sikka's keynote presentation at SAP TechEd Las Vegas, or if you saw my previous blog on this topic, you already know something about the incredible results we have achieved with a petabyte of raw data in SAP HANA. Well, I promised details would come, and here they are!

 

In Intel's co-location facility in Santa Clara, California, we installed 100 IBM servers in a single SAP HANA cluster, and loaded 10 years' worth of Sales & Distribution data, at 330 million transactions per day. This data model is similar to what many of our SAP NetWeaver BW customers use. In our test case, that worked out to 1.2 trillion rows of data in a single fact table, partitioned across the entire cluster. Complex BI queries running against this data set mostly ran in less than a second! Not only that, this was performed with no secondary indexes, no materialized views, no aggregations. In fact, many of these queries, such as those involving sliding time-window comparisons, can't have aggregates built to speed up the results, and so are virtually impossible to support with acceptable performance on traditional, disk-based databases today.
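For readers curious what "partitioned across the entire cluster" looks like in HANA terms, here is a hedged sketch; the column list is invented and is not the actual test schema. A hash-partitioned column table spreads its rows evenly across the hosts of the cluster:

    -- Hedged sketch (invented schema): a column-store fact table
    -- hash-partitioned so rows distribute evenly across the nodes.
    CREATE COLUMN TABLE sales_fact (
        doc_id      BIGINT,
        calday      DATE,
        customer_id INTEGER,
        material_id INTEGER,
        quantity    DECIMAL(15,3),
        revenue     DECIMAL(15,2)
    ) PARTITION BY HASH (doc_id) PARTITIONS 100;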

 

As soon as the data is loaded, users can start asking whatever questions of the data they want, and the answers come back with tremendous speed. No need to ask the question ahead of time so that DBAs or developers can build structures to speed up the results. No waiting for re-indexing or caching. Just breakthrough performance and scalability, and this is the proof. And, thanks to improvements by the SAP HANA development team, these results are almost identical to, even a little faster than, the 16-node testing performed with 100 billion rows / 100TB of raw data back in April. The same performance with a ten-fold increase in data volume – that's true, linear scalability.
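A sliding time-window comparison of the kind mentioned above might look like the following hedged sketch (the fact table is invented): the year-ago value is computed with a window function directly over the detail data, with no materialized aggregate to maintain.

    -- Hedged sketch: month vs. same month a year earlier, using a window
    -- function over the grouped result instead of a pre-built aggregate.
    SELECT calmonth,
           SUM(revenue) AS revenue,
           LAG(SUM(revenue), 12) OVER (ORDER BY calmonth) AS revenue_year_ago
    FROM sales_fact
    GROUP BY calmonth
    ORDER BY calmonth;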

 

Here is a summary of the results:

[Figure: HANA1PBSummary.png – summary of the 1 PB test results]

 

Watch Wes Mukai, VP of Systems Engineering, and Boris Gelman, VP of Development, talk about the cluster, the testing, and the results:

 

For the full white paper, please see the attached document "SAP HANA One Petabyte Performance White Paper.pdf"

 

And stay tuned for more amazing results, as our cluster is increased from 100 nodes to 250, using servers from both IBM and HP.

Ever wondered how to define calculated attributes and measures for HANA models? This short video shows how to define basic calculated columns and how to create, edit, and hide them from Data Preview. In the following episodes, we will post examples of how to create more complex calculated columns. Watch this video to get started!
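If it helps to see the idea in SQL terms first, a calculated column is essentially an expression evaluated on the fly; the following hedged sketch (names invented) shows the plain-SQL equivalent of a calculated measure defined in the Modeler:

    -- Hedged sketch: calculated measures as plain SQL expressions.
    CREATE VIEW sales_with_margin AS
    SELECT product_id,
           revenue,
           cost,
           revenue - cost AS margin,
           -- NULLIF guards against division by zero
           ROUND((revenue - cost) / NULLIF(revenue, 0) * 100, 2) AS margin_pct
    FROM sales;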

 

Do you know enough about startups at SAPPHIRE NOW Madrid? Check out this short video and plan now to connect with innovation coming from outside the standard SAP channels. Your plans for SAPPHIRE NOW Madrid really should include time spent in the Startup Demo Area in hall 10 and the TechEd Expert Networking Lounge #6 to learn more about the inspiring solutions built by SAP startups and SAP HANA.


We have a Chinese version of this SAP HANA One Exercise blog.


Have you heard of SAP HANA One? SAP has just announced the offering, aiming to provide database-as-a-service in the public cloud. There are two types of SAP HANA One: the SAP HANA Developer Edition and the SAP HANA One Business Edition, and the services are provided by multiple public cloud service providers. You can find the different providers and instructions in this blog: http://scn.sap.com/docs/DOC-31722. Right now, we have four providers: Amazon Web Services (AWS), KT ucloud biz, SmartcloudPT, and CloudShare.

All of them offer the SAP HANA Developer Edition for personal use with a free development license. You can find more information in this blog.

 

And so far, we only have the SAP HANA One Business Edition from AWS, which is a single instance hosted on AWS that can be instantly certified for production use. If you have more questions about the SAP HANA One Business Edition, please check the FAQ on saphana.com. If you are looking for more details, please visit the SAP HANA One website and the SAP HANA One forum. Remember, there is a fee associated with using AWS, charged by Amazon. Please read this page for the fee structure to use EC2.

 

Based on the AWS offering, the following table explains the differences between the two SAP HANA One editions on AWS.

[Figure: HANA One.jpg – comparison of the two SAP HANA One editions on AWS]

In this blog, we want to provide you with a simple SAP HANA One exercise. You can try, experience, and learn SAP HANA through this exercise. For this purpose, we recommend using the SAP HANA Developer Edition. You can choose any of the service providers listed in that blog, and follow the instructions in this blog to create your own SAP HANA Developer Edition instance. Note that there will be a fee involved in using these services, except for the free SAP HANA Developer Edition offered by KT ucloud biz.

 

If you choose AWS services, either the SAP HANA Developer Edition or the SAP HANA One Business Edition, SAP offers a $25 credit to help you go through this simple example. It allows you to use the system for 5 or 6 hours, enough time to finish this exercise.

 

NEWS: You can now have a 100% free SAP HANA Developer Edition on KT ucloud biz. They provide free SAP HANA instances until February 2013. Please follow this blog for the detailed instructions.

[Image: ucloud.png – KT ucloud biz offering]

 

Exercise:

  • Follow the process to create your HANA Developer Edition instance, configure your instance, download SAP HANA studio, and connect SAP HANA studio to your instance. After you have created your own SAP HANA Developer Edition instance, go to this link, where you can find the downloads for the SAP HANA studio and the SAP HANA client.

   [Image: download.png – SAP HANA studio and client download page]

  • Once HANA is running and connected from the Modeler studio, start your exercise and copy the results. Please follow the attached PDF ("HANA_Exercise.pdf")! The exercise includes all the detailed steps to build a model in your SAP HANA Developer Edition instance, and all the SQLScript source code is included in the attached file ("code.txt"). Use the code in this file, not the code in the PDF. The input data (transaction.csv) is attached too.

 

  • Once you are done, please make sure you do the following
    •   Stop/Terminate the instance (IMPORTANT NOTE: PLEASE STOP/TERMINATE SAP HANA DEVELOPER EDITION INSTANCE IF YOU DON'T USE IT.  OTHERWISE, IT WILL BE CHARGED!)
    •   Understand the fee structure and clean the instance to avoid additional charges (please read carefully about their charges from the service provider you choose)
  • Whenever you encounter any problem, please stop your instance and post your questions on this blog.

 

Purpose:

  • Create your own SAP HANA Developer Edition instance and do a simple practice on it.
  • Understand basic SAP HANA modeling. More advanced SAP HANA modeling will be covered in other exercises.
  • Know how to use the developer tools (SAP HANA Studio) and how to create views, execute the views, check the output, write SQLScript code, and so on – a flavor of such SQLScript is sketched just after this list.
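To give you a flavor of the SQLScript involved, here is a hedged sketch of a small procedure in the same spirit. This is not the attached code.txt; the type, table, and column names are invented.

    -- Hedged illustration only; the real exercise code is in code.txt.
    CREATE TYPE customer_total_t AS TABLE (customer_id INTEGER, total DECIMAL(15,2));

    CREATE PROCEDURE analyze_transactions (OUT top_customers customer_total_t)
    LANGUAGE SQLSCRIPT READS SQL DATA AS
    BEGIN
        -- Aggregate all transactions per customer into a table variable...
        totals = SELECT customer_id, SUM(amount) AS total
                 FROM transactions
                 GROUP BY customer_id;
        -- ...then keep the ten largest customers.
        top_customers = SELECT TOP 10 customer_id, total
                        FROM :totals
                        ORDER BY total DESC;
    END;

You would call it with CALL analyze_transactions(?); from the SQL console in SAP HANA Studio.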

 

For AWS instances, you need to know a few more details before you begin. The following introduction is just a quick summary. This blog will be updated to include up-to-date instructions. Please check this blog for more details.

 

You need to know the following to start the SAP HANA Developer Edition:

  • Have or create an AWS account
  • Have an understanding of the AWS EC2 Management Console
  • Know basic SSH commands or tools to connect to a Linux instance (read “Connecting to Linux/UNIX Instances from Windows Using PuTTY”)
  • Know how to perform the following tasks in AWS EC2 Management Console
    • Generate key-pairs
    • Connect to an EC2 instance
    • Stop, start, and terminate EC2 instances
    • Delete EBS volumes
    • Check account activities

 

If you are new to AWS, please review these steps from AWS. If you are already familiar with or a user of AWS, please read the SAP HANA Developer Edition FAQ. Please check the attached PDF ("Process to Create HANA One V3.pdf"), which includes the detailed instructions to create your own SAP HANA developer instance. Once you get the $25 coupon, please redeem it at http://aws.amazon.com/awscredits/. Use the time wisely, and enjoy trying the SAP HANA Developer Edition.

In a recent blog posted on the SAP Community Network, independent technical consultant Tom Cenens shares what it takes to achieve SAP HANA certification: Preparing for HANA: How to Achieve SAP Certified Technology Associate Certification.

 

If you plan to attend the SAP TechEd 2012 Madrid conference this month, don’t miss Tom’s 30-minute expert networking session on technology certification for SAP HANA on Tuesday, Nov. 13. In this session you can learn which courses and alternative paths are available, how to set up your own hands-on environment, and get tips and tricks to prepare yourself for the certification. For additional session details, see this entry in the SAP TechEd Madrid agenda: SAP HANA Technology Certification - C_HANATEC

 

To learn about SAP HANA course offerings from SAP Education, visit the Training & Certification Shop, where you’ll find classroom, virtual live classroom, e-learning, and certification options:

 

We hope to see you in Madrid!

For physicians, the everyday process of accessing patient records from multiple clinical systems, including EMRs from multiple hospitals, is a difficult and time-consuming task. This is especially the case for emergency care providers, for whom quick access to pertinent records and information can mean life or death for patients. Physicians receive unscheduled calls from patients, and responding requires them to refer to patient information in multiple clinical systems.

 

Corey solves all of these problems by providing physicians with one-touch instant access to patient information from multiple clinical systems including EMRs at multiple hospitals. This information is made available for scheduled patient visits and in real time when patients call into a physician’s office.

 

SAP HANA’s in-memory database and very fast real-time processing enable Corey to provide all the patient information from multiple EMRs and cloud-based services. This gives healthcare providers instant access to just the relevant patient information, right when they need it! This improves the quality of healthcare and the productivity of physicians at the same time.

SAPPHIRE NOW Madrid is just around the corner, and for those of you attending this year’s event who want to learn more about BW powered by SAP HANA, you will have ample opportunity to do so. As you may know, one of the goals at SAPPHIRE NOW Madrid is to ensure that customers take the stage to tell their story and share their experiences in working with various SAP solutions, including BW running on HANA.

With over 180 BW customers who have moved to HANA to supercharge their BW data warehouse, and close to 30 of these who have gone live with the solution, it was not difficult to find BW customers eager and willing to share their stories of running BW on HANA at SAPPHIRE Madrid.

 

I’ve listed below what I think are the top 4 must-see activities at SAPPHIRE NOW Madrid for those who want to hear directly from BW customers about their experiences in running HANA as the in-memory database for their BW data warehouses.

 

  1. 30 Min Customer Panel (SID 12184)

When: Tuesday Nov 13 from 5pm – 5:30pm

Where: Database and Technology Campus – Main Stage

What: Hear directly from 4 BW customers on how HANA’s in-memory database is enhancing their BW data warehouse performance. Learn how SAP HANA accelerates BW to enable faster data loads, reduce IT complexity, and simplify administration with reduced data layers.

Customers on the Panel: Aareal Bank, 1&1 Internet, Cimpor, Vaillant Group

Hosted by Dr. Stefan Sigg from SAP.

 

  2. 20 Min Theatre Session (SID 12182)

When: Tuesday Nov 13 from 1:30pm – 2pm

Where: Database and Technology Campus – Main Stage

What: Hear directly from Johannes Rumpf of Aareal Bank about how they leveraged SAP HANA as the in-memory database to supercharge their BW data warehouse and dramatically accelerate the delivery of information and reports to their large BW end-user community.

Hosted by Dan Kearnan from SAP.

 

  3. 20 Min Demo Theatre Presentation (SID 12188)

When: Tuesday Nov 13 from 3pm – 3:20pm and Thursday Nov 15 from 2pm – 2:20pm

Where: Database and Technology Campus – Demo Theatre Stage

What: In this live demonstration, Heiko Schneider from SAP will present a number of key capabilities and features enabled when running BW on HANA. He will demonstrate the speed at which DataStore Objects can be loaded and activated, and how this benefits end users via faster access to information.

Presented by Heiko Schneider from SAP.

 

  4. 60 Min Microforum Session (SID 12672)

When: Wed Nov 14 from 2pm – 3pm

Where: Database and Technology Campus – Microforum area

What: Join us for this interactive, informal roundtable discussion to hear from Christian Beinhauer of 1&1 Internet and Matthias Czwikla of Hitachi Data Systems as they discuss the role BW on HANA played at 1&1 Internet in optimizing their debt management processes.

 

I welcome you to add these activities to your current agenda using the SAPPHIRE NOW Madrid Agenda Builder tool – you will not be disappointed!

It's Thursday again! After a 2-week break we are back with our weekly Modeler Unplugged episode. This time, we show you the power of Auto Documentation within the HANA Modeler.

More often than not, you will notice that the documentation lags behind changes in the actual model, and after some time the two drift so far apart that the documents become inaccurate. Re-documenting the latest state of the models can be tedious. Well, not any more! With a few clicks, you can now generate accurate documentation of the actual models in the HANA system. Enjoy the video:

 

 

Note: For licensing questions around the usage of this feature, please consult your Account Executive.

Since the release of SAP HANA, one of the common questions from SAP customers and partners has been the extent to which the HANA interfaces (SQL, MDX) can be used by custom apps. In the past these interfaces were restricted to the SAP BI suite (like BI 4.x and Analysis Office) and Microsoft Excel, which was the only non-SAP front end.

 

With the successful launch of the SAP HANA startup program and requests from the SAP ecosystem, there was a need to open up these interfaces so that developers can extend their existing and new applications to run on the SAP HANA platform.

 

As of TechEd Las Vegas 2012, we have opened up the SAP HANA SQL client interfaces (ODBC, JDBC) for partners and customers to create custom apps, such as .NET or Java apps, using these SQL interfaces. More details about this will be updated periodically in the attached SAP Note

(For the most current version, visit the SAP Marketplace; please note: login is required.)

 

With this announcement SAP will be supporting the SQL scripts executed by these customer-developed apps, but not certifying customer or partner apps. This announcement does not change the requirement for BI and EIM vendors to go through the official SAP Certification to certify their tools on SAP HANA.
