
by Ryan Somers


Imagine a builder without a hammer, a judge without a gavel, or a millennial without a mobile device. It just doesn’t work.


Varsity teams at Under Armour CEO Kevin Plank’s alma mater sport his company’s products.

Now think about a disadvantaged athlete competing without the proper gear. Under Armour’s mission -- to make all athletes better through passion, design and the relentless pursuit of innovation -- is as strong as it was the day former University of Maryland football player Kevin Plank created the first piece of performance-enhancing apparel in 1996 from his grandma’s basement.


Customers took notice of Plank’s innovation, a better alternative to the sweat-soaked cotton tee. Just 17 years later, Under Armour is a multi-billion dollar global brand and the maker of the most innovative performance footwear, apparel and accessories in the world.


With Great Power Comes Great Responsibility


And with Under Armour’s great success -- total sales were just under US$2 billion last year -- come great challenges, such as unprecedented demand. Whether its customers are shopping in-store or online, Under Armour needs to get its products into their hands as fast as possible.


Managing fluctuating supply and demand was a constant challenge in the past, and allocation reports could only run at night and over the weekend. This limitation prevented the sales operations team from analyzing and reacting in real time.


So the products were at risk of remaining in a warehouse, rather than being in the store or at the customer’s doorstep.


The Winning Formula


By utilizing Sales Order Allocation and Replenishment with SAP HANA, Under Armour’s allocation jobs are 80 percent faster, and it is able to optimize critical business scenarios. For Jody Giles, Chief Information Officer at Under Armour, it was an ah-ha moment.


VIDEO: SAP solutions help Under Armour get its products into stores and onto playing fields, instead of being stranded in warehouses.

“Now I can run my business differently,” Giles said. “I can optimize for revenue; I can optimize for fill rate; I can run different scenarios and [find] opportunities that we didn’t have before.”


These newly found opportunities have led to a 1 percent increase in fill rates. This can save Under Armour up to US$10 million per year, and more importantly, drive its innovations out of the package and onto the playing field.


Weekend Warrior or Professional Athlete?


Under Armour has infused performance apparel, footwear and accessories industries with fresh products and perspectives. But Plank’s vision hasn’t fully blossomed just yet.


“We haven’t made our defining product yet,” is a slogan on the wall that greets Under Armour employees at the company’s Baltimore headquarters. Something tells me that when the time comes, Under Armour will revolutionize the future of sports through wearable technology.


Take a look at their latest sneak peek providing a glimpse of what’s next: #IWILL. And watch the Under Armour and SAP HANA video.

Syndicated with permission by the author. Originally posted in SAP Business Trends.

by Ryan Somers

If you are one of the roughly 25 million fantasy footballers out there, or if you’ve ever tuned in to The League, you’ll agree that becoming league champion is the ultimate triumph for bragging rights with your friends. But winning the title requires a lot of time and effort -- so much so that fantasy football costs employers upwards of $6.5 billion due to lost worker productivity, according to a 2012 study.


Fantasy footballers, you no longer need to irresponsibly cram your brains with countless stats and predictions on company time. SAP has you covered with a cool new cloud-based SAP Player Comparison Tool powered by SAP HANA and visualized by SAP Lumira.

“Fantasy football fans love stats almost as much as watching the action on the field,” said Jonathan Becher, chief marketing officer at SAP. “More than ever, users rely on analytics to gain a competitive edge and the player comparison tool is designed to deliver that advantage through insights that can guide decision-making.”

Kick Off

Friday marked the NFL Fantasy Week Draft Celebration in New York, and as you can imagine, the event was filled with excitement. Mike Morini, SVP and GM for SAP Cloud, kicked the event off with introductions.

Mummery highlighted a few different scenarios with a quick demo of the Player Comparison Tool, which includes 10 years of stored data instantly analyzed as soon as the user pushes the button. Mummery’s scenario: “I need a player to have a big week more than I need consistency.”

SAP Player Comparison Tool
NFL Illustration/NFL.com

After the user selects two players, the Seahawks’ Russell Wilson and the Redskins’ Robert Griffin III, the tool analyzes five different verticals:

  • Performance
  • Matchup
  • Consistency
  • Upside
  • Intangibles

For the big week, the user can weight Upside higher and Consistency lower; with those settings, the tool recommends RGIII over Wilson.
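To make the weighting mechanics concrete, here is a minimal sketch of how such a weighted comparison could work. All scores and weights below are invented for illustration; the real tool’s scoring model is not public:

```python
# Hypothetical sketch of a weighted player comparison. All scores and
# weights are made-up illustrative numbers, not values from the actual tool.

VERTICALS = ["Performance", "Matchup", "Consistency", "Upside", "Intangibles"]

def weighted_score(scores, weights):
    """Combine per-vertical scores using user-chosen weights."""
    return sum(scores[v] * weights[v] for v in VERTICALS)

# Made-up per-vertical scores for the two quarterbacks.
wilson = {"Performance": 80, "Matchup": 70, "Consistency": 85, "Upside": 65, "Intangibles": 75}
rg3    = {"Performance": 82, "Matchup": 72, "Consistency": 60, "Upside": 90, "Intangibles": 75}

# "I need a big week": weight Upside up and Consistency down.
big_week = {"Performance": 1.0, "Matchup": 1.0, "Consistency": 0.5, "Upside": 2.0, "Intangibles": 1.0}

pick = max([("Wilson", wilson), ("RGIII", rg3)],
           key=lambda p: weighted_score(p[1], big_week))[0]
print(pick)  # with these weights, the upside-heavy profile wins
```

With the weights flipped toward Consistency, the same function would favor the steadier profile instead; the tool’s customizability amounts to letting the user control this weight vector.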

Backend Zone

All leagues are different, and most fantasy football owners prioritize specific attributes over others. Some may tend to be more conservative and prefer to play it safe with consistency. Others may believe in focusing on matchups, starting offensive players who are playing against poor defenses in hopes of exploiting them.

So the app adjusts to your specific league settings and scoring system, making it 100 percent customizable for end users. No more cookie-cutter ranking of players.

The app even includes weather scenarios in the intangibles category, taking virtually everything into account. And because the SAP Player Comparison Tool is powered by SAP HANA, it will always provide up-to-the-second insight at the speed of thought.

If you’re like me and there just aren’t enough hours in a day, you probably value speed, insight and ease of use just as much as I do. So I’d recommend logging onto nfl.com/fantasyfootball to take charge of your team and win that piece of hardware.

Connect on Twitter and LinkedIn for the winning edge in sports and technology. And stay tuned for Part 2 of this blog post, featuring NFL predictions, fantasy picks and interviews from the real and fantasy professionals.

Syndicated with permission by author. Originally posted in SAP Business Trends.

HANA Live allows customers to report directly on transactional data from Business Suite applications (ERP, CRM, SCM, ...). By using native HANA concepts, HANA Live provides an easy way to do operational reporting on business data in real time.

A key aspect is that the models provided by HANA Live can easily be adopted and extended by customers. To illustrate this, I show in a practical example how re-use views of HANA Live can be combined to implement a customer-specific query.
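Conceptually, combining re-use views means joining and filtering pre-modeled entities. The sketch below uses plain Python lists as stand-ins for two hypothetical re-use views to show the pattern; in HANA Live itself this would be modeled as a calculation view or SQL on the delivered views:

```python
# Conceptual sketch only: HANA Live re-use views are modeled in HANA itself.
# Here two Python lists stand in for hypothetical re-use views to show the
# join/filter pattern a customer-specific query view would express.

sales_orders = [  # stand-in for a sales order re-use view
    {"order_id": "1001", "customer": "C1", "net_value": 500.0},
    {"order_id": "1002", "customer": "C2", "net_value": 1200.0},
]
billing_docs = [  # stand-in for a billing document re-use view
    {"order_id": "1001", "billed": True},
    {"order_id": "1002", "billed": False},
]

def unbilled_orders(orders, billing):
    """Customer-specific query: orders with no successful billing yet."""
    billed = {b["order_id"] for b in billing if b["billed"]}
    return [o for o in orders if o["order_id"] not in billed]

print(unbilled_orders(sales_orders, billing_docs))
```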


Having seen many a presentation and screenshot from lab-based examples of SAP HANA Live queries, I thought I’d share something from the ‘real world’. The query itself is pretty straightforward and is part of the standard SHAL delivery (BillingDocumentQuery), so no extensions or changes -- just out of the box. However, the information provided by the query has an impact far larger than just a list of billing documents!



Let’s explain the volumes first:


  • VBPA – 1 billion
  • VBRK – 55 million
  • VBUK – 80 million

...there are more tables, but their data volume is not really significant when compared to the three biggest tables.


The SHAL use case is ‘side-by-side’, so HANA is not the primary database in this scenario.


So let’s throw a month-end-related question at the query, something like ‘give me all the failed billing documents due to an error in the accounting interface... by billing document number’. This is done with only one filter, the client -- so no sales org, no dates... no key fields!


The example below was recorded in real time using manual navigation (on a track-pad), with a total execution time from login to result of 45 seconds!

(ignore the credits as this is something that YouTube inserts...)



The result is rather amazing!

  • Average response time is below 8 seconds (something that I understand was a benchmark set by Hasso).
  • The impact on user interaction is obvious, i.e. no forgotten background jobs that let issues grow until the last days of the month-end close.
  • Errors can now be identified and resolved well before they cause true issues.
  • Aggregated view across the whole system irrespective of timeframe...who knew there were errors still showing in 2006!!!


I will not get into the real-time KPI/Analytic opportunities provided by this single query, as that is something for another blog, but you can see the possibilities and apply to your own requirements.

The test was done on a ‘pre-prod system’ with volumes representative of production. I hope to have productive usage by next week!

SAP Business Suite on SAP HANA is gaining more and more exposure in the marketplace, and if you are like me, you really want to know more detail than the marketing slides show. SAP HANA unlocks many opportunities to improve performance and change the way we look at real-time processing compared to historical performance expectations. When we combine SAP HANA and the SAP Business Suite, there are numerous functional areas where optimization offers tangible performance improvements for the customer. This blog focuses on optimization techniques used within SAP ERP Materials Management and Purchase Order History.


Purchase Order History

Purchase Order History data is used extensively by more than 80 SAP Materials Management transactions. Many of these transactions have dependencies on key business processes such as GR/IR clearing, invoicing and purchasing. If these key business processes are not performing well, frustrating issues such as timeouts and a poor user experience can result. Significant performance gains have been made, as demonstrated by the optimization of transaction MB5S for GR/IR balances, where execution times were improved by a factor of twenty-five.


Given the shared design of Purchase Order History reports, performance improvements across multiple transactions were possible. The following transactions were targeted for optimization:

  • ME2N - Purchase Orders by PO Number
  • ME2K - Purchase Orders by Account Assignment
  • ME2L - Purchase Orders by Vendor
  • ME2M - Purchase Orders by Material
  • MIRO - Invoice
  • MIGO - Goods Receipts
  • MB5S - List of GR/IR Balances



Across the four ME2* transactions in particular, several approaches were used to improve performance. These optimizations can be classified as either HANA-specific or generic, meaning improvements will be evident independent of the underlying database. The HANA-specific optimizations were:


  • Stored procedures created in the HANA repository were called using ADBC (ABAP Database Connectivity)
  • External views (ABAP) were created as a proxy to call the HANA assets seen below.
  • Calculation views were created to aggregate Purchase Order History values and call the attribute views listed below.
  • Attribute views were created to represent the EKB* tables (Purchase Documents history).



The generic optimizations were:

  • Program sequencing was changed
  • Buffer usage was improved to retrieve Purchase Order Document Item information early and remove subsequent calls to the database.
  • Purchase Order document information is read in bulk rather than as individual records.
  • Asynchronous database calls have also been implemented to remove blocks in processing.
  • Authorization checks were leveraged to determine what Purchase Order data a user is allowed to view before retrieving the data from the database, reducing the amount of information sent to the application layer.
  • SQL statements were updated to embrace Open SQL improvements and increase performance through more intelligent filter criteria, e.g. SUM.
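The bulk-read and buffering ideas above are generic, so they can be sketched outside ABAP. The mini in-memory “database” below is a hypothetical stand-in; the point is one set-based read feeding a buffer instead of one database call per record:

```python
# Generic sketch of the bulk-read pattern (not the actual ABAP code):
# read all needed Purchase Order items in one set-based call and buffer
# them, instead of issuing one database call per document item.

FAKE_DB = {  # hypothetical stand-in for the PO item table
    ("4500000001", "10"): {"material": "M-01", "qty": 5},
    ("4500000001", "20"): {"material": "M-02", "qty": 2},
    ("4500000002", "10"): {"material": "M-03", "qty": 9},
}

def bulk_read_items(po_numbers):
    """One set-based 'query' returning a buffer keyed by (PO, item)."""
    wanted = set(po_numbers)
    return {k: v for k, v in FAKE_DB.items() if k[0] in wanted}

buffer = bulk_read_items(["4500000001", "4500000002"])

# Subsequent lookups hit the buffer, not the database.
item = buffer[("4500000001", "20")]
print(item["material"])
```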

Patterns Used

To improve performance on the above reports and transactions, specific optimization patterns were implemented. These patterns or techniques can be leveraged by all customers with SAP HANA, in addition to some optimizations that can run on any database.

  • Code Pushdown (HANA Assets)
    • External Views
    • Attribute Views
    • Aggregation
    • Stored Procedure
  • Open SQL
    • Refinement of existing SQL statements to leverage increased capability of Open SQL.

Optimization Activation

As with all other SAP Business Suite on SAP HANA enhancements, a non-disruptive methodology is followed to ensure stability for all customers independent of their underlying database. The Open SQL optimizations mentioned above apply to all databases and hence need no isolation. The HANA-specific optimizations seen above are activated using the switch framework in conjunction with commonly used BAdIs.


Purchase Order History optimizations represent important improvements in performance across multiple transactions within purchasing. We shall cover other areas of optimization in subsequent blogs and we encourage general discussion as required.

by Boris Gelman


I noticed that IBM has recently submitted results in the SAP NetWeaver BW Enhanced Mixed Load (EML) benchmark. This benchmark was created by the SAP Performance Benchmarking team almost two years ago to more accurately reflect how customers use SAP BW today. As a result, the benchmark includes ad-hoc queries on detailed data (DSOs) as well as cubes, and simultaneous data loading at a rate of 10 delta loads every five minutes throughout the testing. In short, the BW EML benchmark comes much closer than past BW benchmarks to representing real-world BW scenarios.


The first BW EML result came in almost a year and a half ago, submitted by HP on an HP AppSystem for SAP HANA, and I think it’s worth taking a few minutes to compare these results. I worked very closely with the HP team and am familiar with the configuration, as well as the execution of the benchmark. I have been less familiar with the recently submitted results from IBM on DB2 for iSeries, but I will take a few minutes to make some points that jump out at me.


Comparison #1: Data Volume


The HP results tested a BW scenario with 1 billion rows of data, whereas the IBM submission utilized only 500 million rows -- half the data. As any BW administrator knows, data volume is probably the single largest factor in query and load performance. The more data a query has to navigate, the more rows must be stored in secondary indexes and materialized query tables, the longer maintenance operations and loads take, and the more performance is potentially impacted in an adverse manner.


The HP on HANA result had twice the data but does not require secondary indexes, materialized query tables, aggregates, or any such tuning mechanisms to achieve fast results. This leads to a simpler, less maintenance-intensive configuration, while still maintaining query and load performance.


To analyze the two performance results on more equal footing, one method is to apply what is commonly known as a data-size scaling factor. Given the two dataset sizes -- 500 million records in the IBM system and 1 billion records in the HP system -- we can apply a scaling factor of 5 to the IBM result and a scaling factor of 10 to the HP result to normalize the performance comparison by data size:


  • HP – 65,990 x 10 = 659,900 navigation steps per hour
  • IBM – 66,900 x 5 = 334,500 navigation steps per hour


This shows that the HP result is actually almost two times better when data size is taken into consideration.
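The normalization can be written out as a quick calculation. Reading the scaling factors as one factor unit per 100 million rows (5 for 500 million, 10 for 1 billion) is my interpretation of the method described above:

```python
# Normalize BW EML throughput by dataset size: multiply navigation steps
# per hour by a scaling factor proportional to rows tested. The
# "one factor unit per 100 million rows" reading is an assumption.

def scaled_throughput(nav_steps_per_hour, rows):
    factor = rows // 100_000_000  # one factor unit per 100 million rows
    return nav_steps_per_hour * factor

hp = scaled_throughput(65_990, 1_000_000_000)   # 65,990 x 10
ibm = scaled_throughput(66_900, 500_000_000)    # 66,900 x 5
print(hp, ibm, round(hp / ibm, 2))
```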


Comparison #2: CPU Performance


As someone who has worked with IBM servers in the past, I can vouch for their performance. So it was no surprise to me to see that, according to most measures of performance -- be it SAPS ratings or LINPACK or SPEC benchmarks, for example -- the recently released IBM Power7+ 4.06GHz has advantages over the Intel Xeon E7-4870 (Westmere EX), which has been out for almost three years now.


Not only does the Power7+ run at a higher clock speed than the Xeon (4.06GHz vs. 2.4GHz) and have 10MB of L3 cache per core vs. the Xeon’s 3MB (30MB per socket), but it is also capable of four threads per core vs. the Xeon’s two, which allows for much more throughput for enterprise applications. Calculations can vary dramatically based on workload and function, but I have seen comparisons claiming the Power7+ 4.06GHz is two to four times faster than the Xeon E7-4870 2.4GHz.


Comparison #3: Two-Tier vs. Three-Tier


As every Basis administrator, DBA or system administrator knows, running an application server on the same physical hardware as its database server allows for a dramatic reduction in networking overhead between application and database. This is why the vast majority of SAP SD benchmarks submitted are two-tier, not three-tier. (In fact, IBM has not submitted any three-tier SD benchmarks on their flagship Power platform since 2008.)


The fact that the IBM benchmark result was a two-tier result, and the HP result was a three-tier result is a significant one that would make a meaningful impact on performance, and should not be overlooked.




As we know, the benchmarks only become valuable when multiple vendors participate with multiple platforms. I still believe that to be the case, and I am glad that IBM stepped up to the plate with these results.


I look forward to more results, running on both HANA and other database platforms, in the near future.

by Nicole O'Malley

Much has changed with HANA One and we can’t wait to tell you how. Join us on September 10th as we return from our summer vacation with a great series of webinars meant to inform you of all the new and innovative ways our customers are working with HANA One. Register for one or all of these webinars to keep up to speed with how far HANA One has come since it launched a year ago.


Imagine finding funds for a project that were thought to be unavailable. We begin our series with a customer story about unlocking hidden resources in the business, so that programs on hold for lack of funding can now launch. Next, we discuss how customers can access open-source tools like Hadoop to build applications for productive use right on HANA One. We finish September with a fun application created on HANA One that helps audiences choose the movies they want to see and learn what everyone else is seeing.



September 10, 2013 – 8:00am PDT / 11:00am EDT

Customer Story: What’s the SCOOP? Seeking Cash Opps in Operational Processes

Register here


September 24, 2013 – 11:00am PDT / 2:00pm EDT

Combine Hadoop and SAP HANA One to Create Applications in the Cloud

Register here


September 25, 2013 – 2:00pm PDT / 5:00pm EDT

HANA One Puts Sentiment Analysis of Movies in the Palm of Your Hand

Register here


Stay tuned for more topics in October and November.

The newly released SAP HANA One Rev 52.1 (start an instance on AWS) takes another big step in enhancing the user experience, providing the SAP HANA One Add-on Manager to upgrade an existing HANA One instance (launched with HANA One Rev 52.1) to newer versions of HANA as they become available. In addition, HANA One Rev 52.1 includes new content and license management.


HANA One Rev 52.1 allows customers to choose when to install newer HANA versions on their existing instances, and to try out HANA One samplers or even uninstall them if they so wish. Once you launch a HANA One instance with HANA One Rev 52.1, you can stay as long as you want -- continually upgrading to the latest HANA and extending your license, all without launching any new instances. The new samplers let you try and experience powerful, innovative HANA capabilities. HANA One Rev 52.1 continues to pursue the same goal of perpetually enhancing the end user experience outlined in our previous release (see the Rev 52 blog).

SAP HANA One Rev 52.1 is released with SAP HANA SPS6 (Rev 62); see the What’s New – SPS6 Release Notes.


Major HANA One Rev 52.1 Features

  • New functionality: the HANA One Add-on Manager in the SAP HANA One Management Console, to upgrade the HANA version and install or uninstall content (for example, HANA One samplers)
  • SAP HANA Rev 56 (1.0 SPS5) install package via Add-on Manager and SAP HANA One IDE Lite for HANA Rev 56
  • SAP HANA Rev 62 (1.0 SPS6) install package via Add-on Manager


HANA SPS6 Major Features

SAP HANA Smart Data Access

  • Real-time virtualized data access to: SAP HANA, Hadoop (Hive), and Teradata
  • Highly optimized query access to enterprise, cloud, and big data sources

Text analysis

  • New and enhanced text analysis language support: voice of the customer / sentiment extraction for Simplified Chinese; core entity extraction for Dutch and Portuguese
  • Natural language processing support for 31 languages, core entity extraction support for 13 languages, and voice of the customer support for 5 languages

Spatial processing with SAP HANA

  • New spatial data types and functions
  • High-performing and optimized platform for spatial processing

SAP HANA extended application services (XS)

  • Faster, open, and flexible application development and deployment environment
  • Reduced TCO and development effort due to the minimized “layers” required to deploy applications
  • Enhanced architecture with expanded OData support, authentication, and connectivity options, as well as a browser-based IDE

Application function library (AFL)

  • New predictive analysis library (PAL) functions: DBSCAN, Naïve Bayes, and link prediction (over 27 total PAL algorithms)
  • Enhanced PAL functions
  • Enhanced interoperability with the ability to consume models with PAL PMML import support

We strongly recommend that all our existing HANA One customers move to HANA One Rev 52.1 to take advantage of HANA One Add-on Manager to upgrade their HANA instances and extend licenses.

For all our HANA One Rev 38 customers, our message to you is to migrate to HANA One Rev 52.1, as the HANA One Rev 38 license expires in October 2013. Once it expires, Rev 38 systems can no longer be used. To migrate from Rev 38 or Rev 48 to Rev 52.1, please follow ‘Upgrading to SAP HANA One Revision 52’ as documented in Understanding HANA One.

Recommended Migration Path


  1. HANA One Rev 38/Rev 48/Rev 52 instance -> HANA One Rev 52.1 (with HANA Rev 52) instance
  2. Within the same HANA One Rev 52.1 (with HANA Rev 52) instance, upgrade either stepwise -> HANA One Rev 52.1 (with HANA Rev 56) -> HANA One Rev 52.1 (with HANA Rev 62 or higher, as available under your Add-on Manager), or directly -> HANA One Rev 52.1 (with HANA Rev 62 or higher, as available under your Add-on Manager)

On August 13th, SAP HANA Live, SAP’s strategic solution for operational real-time reporting on HANA, passed the final quality gate and became generally available for the market.

You might ask yourself “An operational reporting solution called SAP HANA Live? Wasn't there a different name for it?”


Yes, in fact, the name used to be “SAP HANA Analytics Foundation” when I blogged about it last time.

What I wrote back then is still true, so I will not repeat it all here. You just have to replace the term “SAP HANA Analytics Foundation” with “SAP HANA Live” (and we are all glad we finally found a rather short name for it).


Since Q1/2013, we continued, as promised, to expand the scope of our virtual data models. In ERP Logistics we added virtual data models for the Project System, in CRM we are now also delivering models for the Interaction Center, and for insurance companies we now offer models for claims and policy management.


All in all, SAP HANA Live now delivers 2,000+ HANA virtual data model views, and we will continue to grow this number.

Thus, the value of SAP HANA Live for our customers is increasing again as even more ad-hoc business questions can be answered based on operational real-time data and implementation stays as easy as it was:

Just deploy HANA Live on your HANA system, launch one of the BI clients we recommend and pick from one of the predefined query views (if you run a side-by-side approach, of course you also need to set up the replication using the SAP Landscape Transformation Replication Server).


The list of BI clients we recommend for a usage with SAP HANA Live got a new member: SAP Lumira.

The others, which were already recommended last time, are: SAP BusinessObjects Dashboards, SAP BusinessObjects Explorer, SAP BusinessObjects Analysis, edition for Microsoft Office and SAP Crystal Reports Enterprise.


All of those clients can easily access real-time data through SAP HANA Live views and support customers in their need for pixel-perfect reporting (Crystal Reports), explorative analysis (Lumira, Explorer), office integration (Analysis for Office) and dashboard creation (Dashboards).


We also worked on the applications on top of HANA Live: “SAP Invoice and Goods Receipt Reconciliation” received a UI make-over, and we added lots of features our ramp-up customers asked for. The analytical capabilities of the “SAP Supply Chain Info Center” were significantly improved, as were those of SAP Access Control Role Analytics. Meanwhile, the colleagues working on “SAP Working Capital Analytics, DSO Scope” (WCA) focused on developing a framework for the cool user interaction that will allow us to develop additional KPI-based applications with the same look and feel as WCA in shorter development cycles in the future.


Besides the development work, we also had lots of talks with customers. It was interesting to see how customers confirmed our message that the combination of SAP HANA Live, SAP BW on HANA and SAP BI perfectly covers all analytics use cases. This is made easy by a two-way integration between SAP HANA Live and SAP BW: from a SAP BW InfoProvider, SAP HANA Live views can be generated and consumed with BI clients (already available with HANA SP5 / SAP BW on HANA 7.30 SP8). And HANA Live views can be consumed by SAP BW as data sources (available with HANA SP5 / SAP BW 7.30).


As usual, we are providing additional information at the known places:


Ramp-Up Knowledge Transfer (update available by end of August)


HANA, Columns, and OLTP

Posted by Robert Klopp Aug 21, 2013

A recent post here nicely summarized the architectural advantages of column store for analytics and described the trade-off inherent in choosing a column-orientation. In short, there is overhead associated with taking a row as input and breaking it into columns while retaining the information required to recompose the data as rows at any time. This overhead makes a row orientation significantly more effective for OLTP.


The author used HANA as the model for an in-memory column store and the product his company has developed as the model for an in-memory row-store.


But there are two concepts that were missed, I think.


First, HANA supports a columnar table type that allows the insert, update, and delete of rows as rows. This provides all of the architectural advantages of an in-memory row store for OLTP workloads. The hybrid notion means that after a row is transacted into the transient queue, it moves to a columnar format in the background, where it is available for analytics... and once moved, the transient row is removed. If you run an analytic query against a table, it will process against a snapshot (using MVCC) that exposes the majority of the data as columns, and then picks up whatever records were still in the transient queue when the query started.


Note that this means the compromise with the hybrid table is on the analytic side. Analytics gets 99% of the benefit of a column store (assuming that 99% of the data has been converted to column orientation), and there is no compromise on the OLTP side. In the case of a mixed workload, where analytics and OLTP are pounding the same table, we might expect the analytics side to suffer more, as more data will still be in a row orientation... and transactions will interfere somewhat with HANA’s ability to fully optimize the use of processor caches for columnar analytics... but the transactions should fly.
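The hybrid mechanics described above can be sketched in miniature. This toy model is plain Python and only illustrates the idea, not HANA’s actual implementation: inserts land in a row-oriented transient queue, a merge step moves them into per-column arrays, and an analytic read scans the column store plus whatever is still in the queue:

```python
# Toy sketch of a hybrid row/column table (illustration only, not HANA's
# actual implementation): inserts land in a row-oriented transient queue;
# a background-style merge moves them into per-column arrays; analytic
# reads scan the column store plus any rows still waiting in the queue.

class HybridTable:
    def __init__(self, columns):
        self.columns = {c: [] for c in columns}  # column store (main)
        self.delta = []                          # row store (transient queue)

    def insert(self, row):                       # OLTP path: cheap row append
        self.delta.append(row)

    def merge(self):                             # move queued rows to columns
        for row in self.delta:
            for c, vals in self.columns.items():
                vals.append(row[c])
        self.delta.clear()

    def column_sum(self, col):                   # analytic path: columnar scan
        return sum(self.columns[col]) + sum(r[col] for r in self.delta)

t = HybridTable(["amount"])
t.insert({"amount": 10})
t.insert({"amount": 5})
t.merge()                      # these rows are now columnar
t.insert({"amount": 7})        # still sitting in the transient queue
print(t.column_sum("amount"))  # column store (15) + queue (7)
```

The analytic read stays correct whether or not the merge has run, which is the point of the hybrid design: transactions never wait on a column conversion, and queries simply pay a small extra cost for rows still in the queue.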


This leads to my second point. For lots of good reasons HANA, Oracle, DB2, SQL Server and ASE are general purpose relational database systems. They solve for a wide range of query types and are not tightly tuned for just OLTP. It is a fact that you can build extremely powerful OLTP-tuned systems. They have been around for ages: Tandem Non-Stop and IBM TPF come to mind. But if you implement these then you have to replicate the data to satisfy operational reporting requirements and this introduces significant latency into your business processes.


Oracle, DB2, SQL Server and ASE effectively satisfy some operational reporting requirements, plus OLTP, against a single table instance. But the existence of separate data warehouse systems and operational data stores (ODS) is evidence that they cannot satisfy much without a replica and without significant latency. HANA changes the game here with its hybrid tables.


The author mentioned above might have done better to point out that his OLTP system is much more effective than the older OLTP players, and that there is a spot for a high-volume OLTP product that does not carry the overhead of a general-purpose query engine. But the real question is: if you add the TCO of a specialized high-performance OLTP database, plus the TCO of an ODS, plus the TCO of a specialized data warehouse infrastructure, how will that compare to the TCO of one instance of HANA serving OLTP, ODS and hot-data DW, plus something like Hadoop for colder data -- with the HANA Smart Data Access facility making it all seem as one?

Hi everybody,


Well, autumn is almost here once more, and that means we are planning all of the details around our 2nd Annual SAP TechEd Technology Executive Summit. This will run once again at The Venetian Hotel, in the same location as SAP TechEd. Many of you who attended last year told us you loved the program, so we used your comments to help guide the topics for this year.


I’m happy to say that we have confirmed Vishal Sikka @VSIKKA as well as other notable executives. This is a big deal, and in addition to Vishal’s opening discussion on Monday afternoon (Oct 21), we have a packed program featuring SAP HANA, BIG DATA, MOBILITY, USER EXPERIENCE and other hot technology topics.  This program is also integrated to the TechEd Keynote (Oct 22) as well as the show floor showcase area.


Don’t miss the highly acclaimed “Unplugged!” sessions where you get to share your feedback and personal experiences in a small group setting along with other SAP customers, and a few SAP subject matter experts.  No selling. No marketing.  NO SLIDES!  Just pure unadulterated content.  The topics you want.


Also, if you are thinking, “How can I come to TechEd Vegas without paying?” -- here is a little secret: OUR PROGRAM. It’s 100% free. No SAP TechEd badge needed (you may only attend our Summit for the two days -- but it’s great stuff...)* *Limited slots, so you must register immediately.


ROLES:  CIO, CTO, Enterprise Architect, VP of IT, IT Director, any IT or technology management.   You may register below, and your role will be checked to ensure you are qualified to attend this Summit.


See you there!


Save the Date: October 21–22, 2013
2nd Annual SAP Technology Executive Summit at SAP TechEd Las Vegas

Mark your calendar for the 2nd Annual SAP Technology Executive Summit at SAP TechEd Las Vegas. This invitation-only program will bring together an elite group of customers, SAP senior leadership, and special guest speakers for an insightful and interactive program integrated into the first two days of SAP TechEd.

By attending, you will . . .

Learn how SAP will orchestrate and deliver on its vision – the full potential of a real-time database platform, SAP HANA, cloud, mobility, and user experience design – all combined with SAP applications and platforms.

Gain valuable insight from SAP Unplugged! – open discussions where you can share your feedback in an informal, candid setting.

Network with other customers so that you all walk away with some amazing new ideas and concepts.

Agenda Topics Include:

• SAP Technology Update and Roadmap Outlook
• SAP and Big Data
• SAP Mobile Update
• How SAP runs SAP on HANA
• SAP Technology Showcase Guided Tour
• Developer Nirvana
• SAP HANA Marketplace
• SAP Unplugged!
• All SAP TechEd Keynotes on Monday and Tuesday


Event Details:

The SAP Technology Executive Summit will kick off at 1:30 p.m. on October 21st with a special Keynote from Dr. Vishal Sikka, Executive Board Member, Products and Innovation, SAP AG.  This will include the SAP TechEd opening Keynote and a special Evening Event. The program concludes on October 22nd at 6:00 p.m.

Location: Venetian Hotel, Las Vegas, NV

REGISTER:  CLICK HERE.  Send a request to attend the TechEd Executive Summit, including your full contact information (name, title, company, phone, email), to Stephanie Williams.

Due to the intimate nature of our event, we can welcome only a limited number of guests. Space is limited and will be filled in the order responses are received.

Additional event details will be sent out in September.

We hope to see you in October!



*Note customers do not have to attend SAP TechEd to attend the Summit.


by Marie Alami, SAP


Join Steve Lucas, president of platform solutions at SAP, and Boyd Davis, vice president and general manager of the Datacenter Software Division at Intel, for a discussion on Big Data and the promise it holds to re-invent your business.


Interact with the best and brightest Big Data experts from SAP and Intel to learn how to leverage your volume and variety of data for competitive advantage in your business. This is an exclusive event for senior executives pursuing a Big Data strategy.

SAP and Intel
2013 Forum on Big Data
August 27, 2013
10:00 a.m. – 2:30 p.m.


Intel Corporation
2200 Mission College Blvd
Santa Clara, CA

Register today ›


Discussion topics will include:

  • The “Art of the Possible” with a Big Data platform that acquires, accelerates, analyzes, and predicts to deliver the right insights from your data
  • Changing the future with SAP HANA and Hadoop
  • How to deliver value to business organizations by deploying new solutions such as sentiment analysis, demand signal management, predictive maintenance, and fraud management
  • The golden ticket: why in-memory technologies are the secret sauce to tackling your organization’s Big Data challenges

You’ll also have the opportunity to:

  • View innovative demos and interact with Big Data experts while you enjoy a tailgate lunch
  • Engage in one-on-one meetings with experts from SAP


To maximize the value of your interactions with speakers, seating is limited. Register now to reserve your space.



  Steve Lucas

  President, Platform Solutions




   Boyd Davis

   Vice President and General Manager, Datacenter Software Division



   Manav Misra

   Chief Science and Knowledge Officer




   Dan Morales

   VP Corporate Enablement Functions



This post was originally published on the SAP Analytics blog and is republished with permission.

Mining social media data for customer feedback is perhaps one of the greatest untapped opportunities for customer analysis in many organizations today.  Social media data is freely available and allows organizations to personally identify and interact directly with customers to resolve any potential dissatisfaction.  In today’s blog post, I’ll discuss using SAP Data Services, SAP HANA, and SAP Predictive Analysis to collect, process, visualize, and analyze social media data related to the recent social media phenomenon Sharknado.



Collecting Social Media Data with SAP Data Services

While I’ll be focusing primarily on the analysis of social media data in this blog post, social media data can be collected from any source with an open API by using Python scripting within a User-Defined Transform.  In this example, I’ve collected Twitter data using the basic outline provided by SAP in the Data Services Text Data Processing Blueprints available on the SAP Community Network, updated for version 1.1 of the Twitter REST API.  This process consists of two dataflows; the first tracks search terms, then constructs (Get_Search_Tasks transform) and executes (Search_Twitter transform) a Twitter search query to store the data pictured below. In addition to the raw text of the tweet, some metadata is available, including user name, time, and location information (if the user has made it publicly available).
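As a rough sketch of that collection step (hypothetical and simplified; the blueprint implements this inside a Data Services User-Defined Transform), the Python below flattens a Twitter REST 1.1 search response into rows of tweet text plus the available metadata:

```python
import json

# Hypothetical sketch, NOT the blueprint's actual transform code.
# Field names follow the public v1.1 search API payload
# ("statuses", "text", "user", "created_at").
def flatten_search_response(payload):
    rows = []
    for status in json.loads(payload).get("statuses", []):
        user = status.get("user", {})
        rows.append({
            "tweet_id": status.get("id_str"),
            "text": status.get("text"),
            "user_name": user.get("screen_name"),
            "location": user.get("location"),  # present only if shared publicly
            "created_at": status.get("created_at"),
        })
    return rows

sample = json.dumps({"statuses": [{
    "id_str": "42",
    "text": "#Sharknado is amazing",
    "created_at": "Sat Aug 03 02:00:00 +0000 2013",
    "user": {"screen_name": "fan1", "location": "Texas"},
}]})
rows = flatten_search_response(sample)
```

Rows in this shape can then be loaded into a staging table for the text analysis steps that follow.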


Once the raw tweet data has been collected, I can use either the Text Data Processing transform in SAP Data Services or the Voice of Customer text analysis process in SAP HANA. While both processes give the same result, SAP Data Services is also able to perform preliminary summarization and transformations on the parsed data within the same dataflow.  In this case, I will run text analysis in SAP HANA by running the command below in SAP HANA Studio.

CREATE FULLTEXT INDEX "VOC" ON <table name>(<tweet text column name>) CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER' TEXT ANALYSIS ON;




This results in a table called $TA_VOC in the same schema as the source table, as shown below.


In this table, TA_TOKEN (called SOURCE_FORM in SAP Data Services TDP) is the extracted entity or element from the tweet (for example, an identifiable person, place, topic, organization, or sentiment), and TA_TYPE (called TYPE in SAP Data Services TDP) is the category the entity falls under.  These are the two main text analysis elements used to extract information from Twitter data.

The same VOC text analysis can be performed using the Text Data Processing transform in SAP Data Services. To invoke the full set of VOC dictionaries included in the SAP HANA VOC text analysis described above, include the following custom dictionaries and rules in the Text Data Processing transform’s Options tab:


Details on the above settings and information on accessing rules and dictionaries for other languages are available in SAP’s Text Data Processing Language Reference Guides.

For a more in-depth explanation on Text Data Processing and social media analysis using SAP Data Services, refer to the Decision First Summer EIM Expert Series webinar on Twitter data collection and social media sentiment analysis by Nicholas Hohman.

Once the Twitter data was loaded into SAP HANA and text analysis had been performed, I created an Analytic View and several Calculation Views to allow for visualization and analysis.


In the first Analytic View, pictured above, I’ve cleaned up the TYPE categories a bit further to consolidate them into top-level categories (for example, combining all types of Organizations into one single Organization category) and assigned a numeric sentiment value to each sentiment-type entity, ranging from 0 (strong negative sentiment) to 1 (strong positive sentiment), as shown in the table below.
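A minimal sketch of that scoring, assuming illustrative TA_TYPE names (check the $TA_ output for the exact sentiment types your VOC configuration emits):

```python
# Illustrative only: map sentiment-type entities to a 0-1 score
# (0 = strong negative, 1 = strong positive). The TA_TYPE names here
# are assumptions, not an exhaustive list of VOC types.
SENTIMENT_SCORE = {
    "StrongPositiveSentiment": 1.0,
    "WeakPositiveSentiment": 0.75,
    "NeutralSentiment": 0.5,
    "MinorProblem": 0.25,
    "MajorProblem": 0.0,
}

def average_sentiment(entity_types):
    """Average the scores of any sentiment-type entities; None if there are none."""
    scores = [SENTIMENT_SCORE[t] for t in entity_types if t in SENTIMENT_SCORE]
    return sum(scores) / len(scores) if scores else None

print(average_sentiment(["StrongPositiveSentiment", "MinorProblem", "Topic"]))  # 0.625
```

Non-sentiment entities (such as the Topic above) simply do not contribute to the average.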


I then created a Calculation View that aggregates data to the tweet level and calculates tweet-level flags for analysis, including flags to indicate whether key types of entities are found in each tweet (location, topic, Twitter hashtag, retweet, sentiment, etc.).  It also aggregates the average sentiment based on any sentiments found within the tweet. I’ll use these aggregated metrics later for visualization and predictive analysis of the Twitter data.
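In Python terms, that roll-up looks roughly like this (the column names are assumptions for illustration; in the actual solution this logic lives in the HANA Calculation View):

```python
from collections import defaultdict

# Hypothetical sketch of the tweet-level roll-up: group $TA_VOC-style
# entity rows by tweet, derive presence flags, and average any
# precomputed per-entity sentiment scores.
def aggregate_tweets(entity_rows):
    tweets = defaultdict(lambda: {"n_entities": 0, "has_topic": 0,
                                  "has_location": 0, "sentiments": []})
    for row in entity_rows:
        t = tweets[row["tweet_id"]]
        t["n_entities"] += 1
        if row["ta_type"] == "Topic":
            t["has_topic"] = 1
        if row["ta_type"] == "Location":
            t["has_location"] = 1
        if row.get("sentiment_score") is not None:
            t["sentiments"].append(row["sentiment_score"])
    for t in tweets.values():
        s = t.pop("sentiments")
        t["avg_sentiment"] = sum(s) / len(s) if s else None
    return dict(tweets)

agg = aggregate_tweets([
    {"tweet_id": "1", "ta_type": "Topic"},
    {"tweet_id": "1", "ta_type": "StrongPositiveSentiment", "sentiment_score": 1.0},
])
```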


The final output of the SAP HANA Information Views is two analysis sets:

  1. A tweet-level analysis set with aggregated flags and values summarizing the tweet, including tweet length, number of extracted entities within the tweet, and the metadata collected with the tweet, such as location, time, and user information.
  2. An entity-level analysis set with tweet-level metadata joined back to the individual entities to allow analysis at the entity level.

While these analysis sets could also be created using an SAP Data Services ETL process, the SAP HANA Information Views have the advantage of being calculated on the fly rather than as a batch process, so if we are continuously monitoring and collecting Twitter data, users will have real-time access to social media trends and insights without having to wait for an overnight or batch process to finish.


Visualization and Analysis of #Sharknado Data

For this analysis, I collected over 33,000 tweets related to the topic “sharknado” over a period of days. After Text Analysis was performed, over 200,000 individual entities were extracted from these tweets.  A natural first step is generating descriptive charts to explain the nature of these extracted entities and tweets.  The figure below shows an area chart of all the entities extracted from the tweets by category.  Twitter hashtags were the most commonly identified entities, followed by sentiments, Twitter users, topics, and organizations.  The depth of color indicates the tweet-level average sentiment. This shows that tweets with topic entities have the highest (most positive) overall sentiment, while tweets with hashtags are much less positive.


A few other fast facts on the Sharknado tweets:

  • 38% of the tweets collected include a retweet from another user
  • 41% of tweets have a topic entity extracted from the text
  • 7.5% of tweets have a location entity within the tweet text
  • 45% of tweets have a sentiment entity identified in the text
  • 54.5% of tweets have 5 or more entities extracted from the text
  • The chart below shows a histogram of tweets by the length of the tweet text—tweets are most commonly right around the 140 character limit, with about 25% of tweets at 135 characters and above.


Now, we can start to examine the individual entities extracted from the tweets and sentiments associated with each entity.  For example, we can pull the Person entities identified by the text analysis in a word cloud, shown below.  This word cloud shows the most common entities (larger size) and the sentiment associated with the person entities (depth of color).


This shows that Tara Reid, Cary Grant, Tatiana Maslany, Ian Ziering, and Steve Sanders were the most commonly identified person entities, with Tatiana Maslany and Tara Reid appearing in tweets with higher average sentiments.  Tara Reid and Ian Ziering are actors who appeared in Sharknado, and Steve Sanders was Ian Ziering’s character in Beverly Hills, 90210, but I was confused by the appearance of Cary Grant, whom Wikipedia identifies as an English actor with “debonair demeanor” who died in 1986, and Tatiana Maslany, a lesser-known Canadian actress, neither of whom appeared in Sharknado.  Further filtering the tweet text for these particular entities, I found an extremely high retweet frequency for two influential tweets:


@TVMcGee: #Sharknado is even more impressive when you realize Tatiana Maslany played all the different sharks.

@RichardDreyfuss: People don't talk about it much in Hollywood (omertà and everything) but Cary Grant actually died in a #sharknado

The entity “impressive” was strongly positive for Tatiana Maslany, while “n’t talk” (negatory + talk) was considered a minor problem for the Cary Grant tweet.  Further analysis can be done to identify popular characters and portions of the movie, which the Sharknado filmmakers can mine to identify the characters, plots, or topics to revisit in the already-approved sequel to Sharknado (coming Summer 2014).

Similarly, investigating location entities shown in the word cloud below, we can see the most common references are to Texas and Hollywood, with tweets about Texas being more positive than Hollywood.



Among the organizations identified by text analysis, SyFy (the channel that brought you Sharknado), the phrase Public Service Announcement, Lego, and Nova were common in tweets, as shown in the word cloud below.


The SyFy and public service announcement phrases were found in a frequently retweeted tweet about a re-airing of the movie:


@Syfy: Public Service Announcement: #Sharknado will be rebroadcast on Thurs, July 18, at 7pm. Please retweet this important information.

Nova was a character in the movie who may have met an untimely end, which apparently did not elicit positive sentiments.  The Lego topic/organization also appeared in a commonly retweeted tweet of a picture of a sharknado made of Legos.


@Syfy: OMG OMG OMG someone made #Sharknado out of LEGOs!!! http://t.co/0ORVv6w2uf http://t.co/lbjJ6DDvzU

Predictive Analysis on #Sharknado Data

After summarizing and visualizing the data, I can leverage the Predict pane in SAP Predictive Analysis to model the data using predictive algorithms.  We can further summarize tweet data across multiple numeric characteristics using a clustering algorithm.  Clustering is an unsupervised learning algorithm and one of the most popular segmentation methods; it creates groups of similar observations based on numeric characteristics.  In this case, the available numeric characteristics are the length of the tweet, the number of entities extracted from the tweet, and the presence of a topic or sentiment flag.  While binary variables are not technically appropriate for a clustering model, we’re including them here to increase the complexity of our model and make the results more interesting.
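To make the clustering idea concrete, here is a tiny pure-Python k-means sketch over two tweet features; it illustrates the technique only and is not the algorithm SAP Predictive Analysis actually runs:

```python
import random

# Minimal k-means for illustration: alternately assign each point to its
# nearest center, then move each center to the mean of its cluster.
def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute centers; keep the old center if a cluster goes empty.
        centers = [tuple(sum(col) / len(c) for col in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical features per tweet: (tweet length, entity count)
points = [(30, 2), (35, 3), (130, 8), (138, 9), (80, 5)]
centers, clusters = kmeans(points, 2)
```

With these toy points, the short tweets end up in one cluster and the long, entity-rich tweets in the other, mirroring the size-based separation described above.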

The clustering model results show three groups of tweets, roughly separated by size, with Cluster 3 being the short tweets, Cluster 1 the longer tweets, and Cluster 2 in between. The clustering model does show us that longer tweets were more likely to have more entities identified by the text analysis and were more likely to contain a sentiment and a topic.


While this is an extremely simple example, with additional descriptive statistics we could cluster tweets according to sentiment and occurrences of key phrases or words; if the organization could link these tweet segments to customer satisfaction or other key metrics (such as referrals generated through social media buzz or calls to a customer service center), monitoring the frequency of tweets by segment would be a great, nearly real-time leading indicator of viral buzz, customer complaints, or referral business.

Another potential application for predictive models would be estimating the impact of tweet characteristics on the sentiment value of the tweet.  In this case, I’ve arbitrarily determined that a tweet with an average sentiment of 0.4 or higher is “Positive.”  I can then use the R-CNR Decision Tree algorithm or a custom R function for logistic regression (see this previous blog on Custom R Modules) to predict which elements are most indicative of positive tweets.  In order to compare these models, I use a filter transform to filter out tweets without sentiments.  Then, I configure the Logistic Regression and R-CNR Tree modules to include all my descriptive data, including tweet length, number of entities extracted, and the presence of location and topic entities.


Once this predictive workflow has been run, I can review the logistic regression and decision tree results.

Logistic Regression results

These model output charts show that the logistic regression model is not terribly predictive, with an AUC (area under the ROC curve) of only 0.598 (AUC varies from 0 to 1, with a baseline of 0.5 and values closer to 1 indicating more accurate predictions).
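For reference, AUC can be computed directly as the probability that a randomly chosen positive example outranks a randomly chosen negative one (ties counting half); a pure-Python sketch, O(n²) for clarity:

```python
# Rank-based AUC: fraction of positive/negative pairs where the
# positive example scores higher (ties count as half a win).
def auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfect ranking scores 1.0; a model no better than chance sits at 0.5.
print(auc([1, 1, 0, 0], [0.9, 0.4, 0.35, 0.1]))  # 1.0
```

Against this scale, 0.598 is only a little better than the 0.5 coin-flip baseline, which is why the chart below looks so flat.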


This chart shows that there is a slight increase in predicted average sentiment (red line) across the actual average tweet sentiment (x axis).  Blue bars represent tweet volume for each level of average sentiment.  Ideally, the red line would be approximately diagonal from bottom left to top right.


Decision Tree results

The Decision tree shows that the model is able to identify large pockets of tweets that are much more likely to be positive.


In summary, the models show potential to distinguish tweet positivity based on tweet content characteristics.  These models could be further tuned for accuracy with more Sharknado-related characteristics, such as whether the tweet mentioned specific plot points, emotions, or characters.  In these preliminary models, the results suggest that having a location entity, a longer tweet length, and the presence of a retweet contribute to positive sentiments.  Perhaps this suggests that people are more likely to retweet positive tweets than negative ones.

Adding the presence of key terms like “chainsaw” or “shark” or specific character names as input predictors would let us see the impact of those specific terms on sentiment positivity.  Developers of the Sharknado sequel could determine which specific aspects of the film were most positively and negatively received by the audience and incorporate these concepts into the sequel.

Tips for Social Media Data Collection and Analysis

Based on this experiment, I have a few recommendations for approaching a similar problem going forward.

  • Implement custom data dictionaries and custom categorizations: Using custom data dictionaries, we could have the text data processing step immediately identify key terms that are related to our particular topic.  In this case, we could have created a custom dictionary with character names, plot points, or key terms like “chainsaw” or “shark”.  These terms might not be recognized by the “standard” text analysis dictionaries, but they will help us automatically pull out and identify entities that are important in our particular scenario.
  • Scrape profanity and irrelevant tweets immediately: One thing I noticed when pulling in Sharknado-related tweets was an abundance of profanity and Twitter spam. Scraping out profanity is important if the tweet data is going to be included in Business Intelligence reports or shared with others within the organization.  Profanity is identified by the Ambiguous and Unambiguous Profanity dictionaries (for more information on Voice of Customer Text Analysis, see SAP’s Text Data Processing Language Reference Guides), so the organization can set internal censorship rules based on the identified profanities.  Similarly, setting up policies to eliminate or avoid spam-related Twitter accounts may help keep the feedback data more pure.  I noticed accounts that would tweet a message like “Get 500 followers free” and include the top 5 hashtags trending on Twitter at the time.  These tweets made up a huge portion of the data I collected, and should have been immediately discarded based on the repetitive text so as not to influence frequency and sentiment analysis.
  • Construct descriptive attributes: Probably the most important part of this process is constructing descriptive attributes for each of the tweets.  These may include flags to indicate whether the tweet included a key entity or category, length fields, or perhaps user information that can be collected about the poster.  These attributes might be related to the custom data dictionaries relevant to the topic.
  • Identify and treat retweets differently: While the re-tweeted data is valuable in gauging influence and frequency of the social media buzz, it can bias the sentiment analysis by overwhelming the average sentiment with copies of the same information.  Therefore, flagging tweets that contain retweeted information and excluding those from some sentiment analysis might eliminate sentiment bias of a single opinion or phrase that was retweeted many, many times.
  • Analyze “request” tweets: Although the data for Sharknado did not yield much usable “request” data, this data may be a valuable source of information for customer-centric organizations analyzing customer opinion feedback.  For example, the following tweet was identified as a “request” and includes product enhancement suggestions from a fan:


I think #SharkNado should be a trilogy. Next would be SharkQuake followed by, what is sure to be a hit, SharkNami.

Implementation of Sentiment Analysis Data

While the Sharknado example is a fun pop culture phenomenon, how does this become relevant to a real-world organization?  Collecting Twitter data relevant to an organization could provide nearly free, focus group-like feedback directly from the customers who are most likely to influence their peers.  For example, a hotel chain could collect Twitter data not only from users that mention its brand name, but also from users mentioning competitors’ names or just talking about hotels in the general sense.  It can get an idea of what contributes to positive and negative sentiments about hotels.  Do negative sentiments most commonly accompany comments about cleanliness? Noise? Wait times at check-in? Staff? Do positive sentiments stem from amenities like the pool or gym?  What is the general sentiment for customers of your hotel chain versus competitors? And are there particularly negative sentiments for users of one particular location that might indicate a serious problem?

Furthermore, having this type of feedback available in a nearly real-time environment allows organizations to monitor, respond to, and leverage social media buzz to increase audience or revenue.  For example, when SyFy executives saw the volume of social media posts in response to the initial Sharknado airing, SyFy was able to quickly schedule subsequent showings, commit to a sequel, and arrange for the film to make its theatrical debut, dispersing this information via Twitter while the topic was still trending. This equates to increasing awareness and future audience at a very low cost.  If SyFy had missed this window, it would have had to expend significant marketing funds to regenerate this level of buzz.  In fact, by leveraging the strong social media buzz around the initial airing of Sharknado, SyFy actually garnered higher viewership with the re-airing than it experienced during the initial premiere.

This type of feedback can give insight not only to what users might think about your organization’s brand overall, but also could give an idea of the importance that specific product aspects hold in a user’s experience.  Understanding how the consumer values these factors could guide investment decisions or marketing strategies by highlighting the features that customers care about and those that are not meaningful.

This post was originally published on the SAP BI Blog and is republished with permission.

I am excited to share an amazing and rare opportunity to learn about In-Memory Data Management from none other than SAP co-founder Hasso Plattner. Starting on August 26, 2013, Hasso will personally host and teach the In-Memory Data Management course, held over a six-week period. This dynamic course will dive deep into the technical understanding and management of enterprise data, as well as explain the basic concepts and design principles associated with it.

Don’t miss out on this great opportunity. Sign up now and be sure to mark your calendars for August 26, 2013.  

Here’s a brief summary of the course:

The online course focuses on the latest hardware and software trends that have led to the development of a revolutionary new technology that enables flexible and lightning-fast analysis of massive amounts of enterprise data. Beyond that, the course discusses the implications of the underlying design principles for future enterprise applications and their development. Unbelievable things are possible, and you will understand why once you see how an in-memory column-oriented database differs from a traditional row-oriented, disk-based one.
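As a toy illustration of the row-versus-column idea (nothing to do with SAP HANA’s actual internals), compare how the same table is laid out in each model and which values an aggregate over a single attribute has to touch:

```python
# Toy illustration only: the same table stored row-wise vs column-wise.
rows = [("ACME", 100), ("BETA", 250), ("ACME", 50)]      # row store
columns = {"customer": ["ACME", "BETA", "ACME"],          # column store
           "amount":   [100, 250, 50]}

# Summing one attribute: the row store walks every full row,
# while the column store scans a single contiguous column.
total_row_store = sum(amount for _, amount in rows)
total_col_store = sum(columns["amount"])
print(total_row_store, total_col_store)  # 400 400
```

Scanning one contiguous column (and compressing it well) is a large part of why column stores excel at analytical aggregates, one of the themes the course covers.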


Start: August 26th, 2013

Duration: 6 Weeks

Course language: English

Where: Register with openHPI (your openSAP logon credentials will not work with this platform).


Below is a little more on the overview of the course content:

  • The future of enterprise computing
  • Basic and advanced database storage techniques
  • In-memory database operators
  • A new enterprise application development era


Wouldn’t it be a great opportunity to learn from one of the co-founders of SAP? It’s not too late to register, so sign up now!

Do you know how many species (other than humans) there are in the world?  Trick question.  We actually don’t know the answer.  Estimates vary, but some think the answer could lie anywhere between a few million and 100 million species.  So why should we care? 

Monitoring biodiversity has many useful applications.  What if I told you that there is a species of mosquito called the Asian Tiger Mosquito that is invasive, has high biting potential, and is known to carry disease[i].  This mosquito is also expected to significantly expand its territory within the northeastern United States if temperatures continue to increase as predicted[ii].  Before you write this off as another sensationalist news bite, consider the great benefit of having this knowledge:  (1) scientists have identified this species and (2) predicted its expansionary behavior before it has happened so we can do something about it.  So what about all of the other species we don’t know about? 

And what if I told you that scientists are predicting one third of the world’s species will be extinct by 2100?  As Dr. Paul Hebert from the University of Guelph put it, “…imagine if astronomers predicted ‘last light’ for a third of the luminescent objects in the universe within a human lifetime.”  Wouldn’t you want some record of what those things were before they went extinct?  Or how will these extinct species impact your local environment, the places you visit, or the food you eat?  Or better yet, wouldn’t you want to know how you might be able to save some of these species?

So far I’ve posed a lot of questions, but I would like to discuss some possible answers that combine biology, technology, and SAP’s love of Big Data.  The International Barcode of Life Project (iBOL) has been working for almost 10 years on building a DNA-based barcode identification system for all multi-cellular life.  Led by the Biodiversity Institute of Ontario at the University of Guelph, this project includes more than 25 countries.  With over 2.2 million barcodes to date, it is estimated that when complete, the barcode library for the entire animal kingdom will be 50 times its current size[iii].  iBOL represents a very systematic way of defining and tracking species.  Even better, this information is accessible online via the Barcode of Life Data (BOLD) Systems.

Now I’ve mentioned biology and some technology, so what about Big Data and SAP?  The question really is: why would SAP be interested in bioinformatics?  Well, we already know that SAP HANA can help life science companies offering genome analytics services process data in minutes.  Did I also mention that BOLD data and future sequencing data are in fact really, really big?  One sample can yield anywhere between 350 MB and 8 GB of data.  The SAP fit seems obvious.

How many people can say that they get to collaborate with leading experts in bioinformatics to help revolutionize how humanity interacts with biodiversity?  Mosquito monitoring is really only the tip of the iceberg on what this data can do for the human race and I’m proud that our Emerging Technologies team in SAP Waterloo is a part of this greater global initiative.    


[i] http://www.plosone.org/article/info:doi/10.1371/journal.pone.0060874

[ii] Ibid.

[iii] http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1890991/

Win a Beautiful Tory Burch Bag, a Copy of Naima Mora's Book, as Well as a Levi's Gift Certificate!
Blog by: Layla Sabourian

SAP HANA is marking its presence in many fashionable circles, and the Art Institute of San Francisco is hosting our upcoming panel, “What Are the Secret Ingredients to a Successful Fashion App?”, featuring My Runway this Thursday.

This unique panel engages innovative and game-changing minds in fashion, business, media, and technology about how the definition of fashion is changing, and how brands leveraging innovative technologies to reach consumers can gain a competitive advantage over others.

Some of the distinguished panelists include Naima Mora (the winner of America’s Next Top Model) and Forbes contributor and “lifecaster” Sarah Austin; SAP vice president of design and frontline applications Li Gong; author, entrepreneur, and publisher/editor of Adventures in the Stiletto Jungle Stephanie Rahlfs; and director of marketing at Levi Strauss & Co. Paul Friedland.

We look forward to welcoming you on Thursday, August 15th, in San Francisco, and also welcome your support as a volunteer.


10 UN Building/San Francisco Art Institute. 5:30 – 8:30 pm, Register here.

Don’t want to drive to the city? Join us via livestream by visiting our Youtube channel: www.youtube.com/myrunwayapp

Exciting Twitter Contest!

Win a Tory Burch bag or a copy of Naima Mora's book, "Model Behavior". It is very easy to participate:


Send a tweet @MyRunwayApp answering the question, “What Do You Want to See in a Fashion App?”

Include the #SAPContest and #ToryBurch hashtags


We will select the best tweets based upon originality of the idea, buzz, and ease of implementation in a fashion app! Please review the contest rules here.


Win a $75 Levi's Gift Certificate with My Runway!

We want you to show off your inner stylist! Recommend any Levi’s items on the My Runway app to our panelists Sarah Austin, Naima Mora and Stephanie Rahlfs for a chance to win a Levi’s Gift Certificate! There are five simple steps to participate:


1) Download the My Runway App from the iTunes Store or get it on Google Play: http://www.myrunwayapp.com


2) Click on Brand to see the list of brands available on My Runway. Click on + and select Levi's.





3) Select Levi’s in the Brands section and browse for an item you would like to see on one of our panelists.





4) Select a product that you want to recommend, and select Comment.






5) Tag @LaylaSabourian and the panelist that you want to recommend the item to.


  1) Sarah Austin – @SarahAustin
  2) Naima Mora – @NaimaMora
  3) Stephanie Rahlfs – @stilettojungle





Sarah, Naima and Stephanie will select a winner who has recommended their favorite look/items! Each winner will receive a $75 Gift Certificate from Levi's!

This post originally published on SCN Blogs and republished with permission.

We are looking for feedback from the developer community as we continue to enhance the developer capabilities of SAP HANA! Please take a moment to answer the following survey (2 questions only).  After reviewing your feedback, SAP may decide to build certain functional services as open-source packages on HANA. Should SAP build any service, SAP will then publish the applicable functional services with open interface designs and make them available for use.


Take survey: http://bit.ly/HANAPFS  (Ends Aug. 30, 2013)


What do we mean by functional service?


As you may know, SAP HANA is SAP’s in-memory computing platform for real-time applications. It allows you to instantly access huge amounts of structured and unstructured data from different sources and get immediate answers to complex queries.


As shown in the diagram below, SAP HANA offers platform, database and data processing capabilities and provides libraries for predictive, planning, text processing, spatial, and business analytics.

hana architecture.png


Functional services are additional pre-built features that we want to provide to help you jump start your applications on SAP HANA. We want to offer you functional services with open interfaces that can be easily deployed to your SAP HANA instance and speed up your development process. A good example of a functional service is ‘text search.’  With this service you would be able to quickly build an app that allows users to search for content within different kinds of unstructured data (e.g. binary, pdf, docx, etc.).


Our objective with the survey is to gather your input and ideas about functional services that can make your development process easier and help your apps provide advanced capabilities. All ideas are welcome! Aside from taking the survey, please also feel free to share your ideas or questions in the comments area below – we may reach out to you after the survey ends to help us design the services and interfaces!


So, what are you waiting for? Tell us what kind of functional services you need or would like SAP HANA to provide! 


Take survey: http://bit.ly/HANAPFS  (Ends Aug. 30, 2013)

Join us for the first SAP Big Data Chat with Steve Lucas, President of SAP Platform Solutions and Timo Elliott, SAP Innovation Evangelist.


In this dynamic and interactive discussion you'll hear how SAP makes Big Data real.

  • Get real-world examples of SAP Big Data providing real business value today
  • Learn how to gain valuable insights from your data immediately instead of days or weeks later
  • Hear how our customers are using SAP today to predict future events, minimizing risk and optimizing opportunities



When: Wednesday, August 21, 8am PT / 11am ET / 5pm CET

Where: Watch via the live video stream (once the event starts) and chat via Twitter #SAPChat



How to Join the Conversation


  • Watch the live video stream on the SAP Community Network via Google Hangout On Air at http://scn.sap.com/welcome
  • Don’t worry, you don’t need to have a Google+ account to watch the live stream. At 8am PT, simply click the above link (works with any browser) to watch and listen to Steve and Timo's conversation.

Chat and Ask Questions

  • The tweetchat is at #SAPchat. Join the conversation and tweet your comments/questions about #BigData while you watch the live video stream - using the #SAPChat hashtag.
  • What’s a tweetchat? It’s a real-time “meeting” on Twitter, 140 characters at a time. Simply log in to Twitter (you can create an account at www.twitter.com) and include the hashtag #SAPchat along with your comments and questions. You’ll be in the conversation! Watch the stream of tweets with any Twitter app, such as Tweetdeck, Hootsuite or Tweetchat.com.



SAP Big Data Chat Speakers

Steve Lucas

President, SAP Platform Solutions



Steve Lucas is the global president of SAP Platform Solutions, leading the go-to-market teams and strategy for Analytics, Database & Technology, Mobile, and Big Data. With his primary focus on market acceleration and adoption in SAP’s core innovation areas, Lucas develops and executes strategy across all markets and ensures operational excellence within the global GTM teams.

Prior to his current role, Lucas served as Global EVP and GM for the Database & Technology Business, leading a team of professionals in field sales, enablement, solutions management, and marketing across a diverse portfolio of products. Before that, he was global GM for the Business Analytics & Technology organization—responsible for strategy and go-to-market activities across sales, marketing, and product management lines of business.

Lucas led the successful introduction of SAP HANA to the market in 2011 and continues to lead the fast-growing and dynamic HANA business, which now includes Business Suite on HANA and HANA Enterprise Cloud.

Lucas joined SAP in 2007 when the company acquired Business Objects. As vice president and general manager with Business Objects, Lucas launched and led the company’s “OnDemand” business unit. Before that, he managed the Enterprise Information Management (EIM) group, overseeing the sales of software for data integration and management as well as supporting key acquisitions for Business Objects in the EIM segment. In addition, he managed partner organizations at Business Objects, including the OEM and Distribution businesses. 

Lucas became part of Business Objects in 2003 when the company acquired Crystal Decisions. While at Crystal Decisions, he held various senior positions, including leading the strategic presales team for North America.

During a brief sojourn from SAP, Lucas joined Salesforce.com as SVP responsible for furthering adoption of the Force.com cloud computing platform by customers, partners, and developers. He re-joined SAP in 2009. 

Early in his career, Lucas held roles in field sales management at Software Spectrum, a Microsoft large account reseller. He also worked in field marketing and technical sales for Microsoft. 

Lucas holds a bachelor’s degree from the University of Colorado and has published several books on business intelligence. 



Timo Elliott

SAP Innovation Evangelist


Timo Elliott is an innovation evangelist and international conference speaker who presents to IT and business audiences around the world on the latest trends in information strategy and technology.



His popular Business Analytics blog at timoelliott.com tracks innovation in analytics and social media, including topics such as big data, collaborative decision-making, and social analytics.


A 22-year veteran of SAP BusinessObjects, Elliott works closely with SAP research and innovation centers around the world on new technology prototypes. His PowerPoint Twitter Tools lets presenters see and react to tweets in real time, embedded directly within their slides. He has a draft US patent in the area of augmented reality analytics.


Prior to Business Objects, Elliott was a computer consultant in Hong Kong and led analytics projects for Shell in New Zealand. He holds a first-class honors degree in Economics with Statistics from Bristol University, England.

Back in December, I introduced you to Table UDFs in HANA 1.0 SP5.  At that time, I also mentioned that we were working on implementing Scalar UDFs as well.   Today, I am very happy to announce that as of HANA 1.0 SP6 (Rev 60), we now support Scalar UDFs.  Scalar UDFs are user-defined functions which accept multiple input parameters and return exactly one scalar value.  These functions allow the developer to encapsulate complex algorithms into manageable, reusable code which can then be nested within the field list of a SELECT statement.  If you have worked with scalar UDFs in other databases, you know how powerful they can be.  Below is an example showing how to create two scalar UDFs and then leverage both within the field list of a SELECT statement.  This is a very simplistic example, and of course the logic could be done by other means; I just wanted to remove any complexity of logic and focus purely on the syntax.


CREATE FUNCTION add_surcharge(im_var1 decimal(15,2), im_var2 decimal(15,2))
RETURNS result decimal(15,2)
LANGUAGE SQLSCRIPT AS
BEGIN
  result := :im_var1 + :im_var2;
END;


CREATE FUNCTION apply_discount(im_var1 decimal(15,2), im_var2 decimal(15,2))
RETURNS result decimal(15,2)
LANGUAGE SQLSCRIPT AS
BEGIN
  result := :im_var1 - ( :im_var1 * :im_var2 );
END;



Once you execute the CREATE statements in the SQL Console, the new objects will show up in the catalog in the “Functions” folder.



As shown below, you can now use the functions in the field list of your SELECT statements.
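The original screenshot is not reproduced here, but a query along these lines would apply both functions per row; the products table and its price column are hypothetical names used purely for illustration:

```sql
-- Both scalar UDFs nested in the field list of a SELECT
SELECT product_id,
       price,
       add_surcharge(price, 25.00)  AS price_with_surcharge,
       apply_discount(price, 0.10)  AS discounted_price
FROM products;
```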



Again, this is a pretty simple example, but I think you can see how powerful a tool scalar UDFs could be to a developer.   Currently, both table and scalar UDFs can only be created via the SQL Console, but rest assured we are working to allow the creation of these artifacts in the HANA repository via an XS Project.

SAP HANA Enterprise Cloud was launched earlier this year following extensive work on petabyte-scale SAP HANA infrastructure in our labs, together with customers and partners. The outcome is a robust and elastic cloud-based approach to managing mission-critical SAP ERP or CRM systems that are powered by SAP HANA. Having been part of the launch team and closely involved with the offering, I would like to explain some of its key qualities and differentiating characteristics in this blog.

SAP’s unique position in the cloud market

Transforming SAP into more and more of a cloud company is one of SAP's strategic goals. The acquisitions of SuccessFactors and Ariba have clearly set the stage and are tightly related to our existing enterprise application business, as opposed to entering unrelated cloud markets.

The same applies to SAP HANA Enterprise Cloud, which simply stated consists of:

  • A breakthrough cloud infrastructure (previously known as the “petabyte farm”), rethought for in-memory architecture and optimized for modern trends in the storage, networking and compute layers
  • A breakthrough cloud platform, SAP HANA Cloud Platform, as the foundation to run modern applications and analytics
  • Mission-critical applications such as SAP Business Suite, SAP Business Warehouse and several big data applications delivered as a managed cloud service. These services help customers assess, migrate and run rich applications with cloud simplicity

The combination of these three ingredients adds up to unique benefits for our customers:

First, a customer no longer needs to choose between leveraging SAP HANA to gain real-time business advantage and moving applications to the cloud for increased IT simplicity. Now customers can have both.

Second, a cloud deployment model for SAP HANA is just one of several options to accelerate the journey to becoming a real-time business. For example, starting an SAP HANA based system in our cloud and later bringing the final solution back on premise, or moving into a hybrid deployment model (test and dev in the cloud, production on premise), are all available choices. The related managed services offering, described further below, makes an especially strong case for such a scenario. Net net, we can make customers productive in a much shorter period of time and then run the infrastructure in an elastic fashion.

Third, the entire offering is backed by SAP's experience in managing and supporting mission-critical systems in a continuous fashion, including high availability (HA) and disaster recovery (DR), resulting in an overall turnkey solution.



Customer requirements for SAP HANA Enterprise Cloud

Before going into detailed capabilities, I would first like to share the customer input and feedback we gathered before and since launching the SAP HANA Enterprise Cloud offering. This feedback was used in defining and building out the offering. Not to lead with the conclusion, but some of these customer expectations and requirements run counter to purist cloud views. However, not every cloud is alike, and a solution always has to follow the use case. In the case of SAP HANA Enterprise Cloud, the use case is running the mission-critical business systems that form the logistical backbone of often multi-billion-dollar companies.

Single vs multi-tenancy

Multi-tenancy is one of the five cloud criteria in the NIST cloud definition and a standard element of a standardized software-as-a-service application like SuccessFactors or Salesforce. It makes a lot of sense and is technically feasible.

For the SAP HANA Enterprise Cloud use case of providing highly customer-specific SAP systems in the cloud, multi-tenancy is not preferred by our customers and is technically challenging to achieve. The reason: despite SAP delivering standard software with standard functionality, most customers have made the solution even better through adaptations (and worse in terms of things like upgrades and multi-tenancy). But this is the nature of the beast, and it stops nobody from moving to a cloud-based model as long as the cloud delivers the overall value.

Customers' preference for single, isolated tenancy did not surprise anybody at SAP. Please also read this blog on the same point by industry expert Dennis Howlett.

Built-in upgrade to the latest release version

The second key piece of customer feedback, however, did surprise me and my colleagues a little, especially the firmness of the statement: customers want to be in control of the upgrade cycle of their system and NOT be automatically upgraded to the latest release level. Period. Hence, despite the fact that large SAP upgrades are a thing of the past for many of our customers (who apply smaller support packs in an ongoing fashion), and despite our readiness to offer this as a built-in service in SAP HANA Enterprise Cloud, our customers told us to definitely not make this the default. So we listened: the customer stays in control of the timeline, but once they pull the trigger for a smaller or larger upgrade, our industrialized approach guides the process with maximum speed and efficiency.


Key capabilities

What sets HANA Enterprise Cloud apart is its focus on real-time computing based on the SAP HANA platform. This is where SAP excels; no other vendor has the combined experience of in-memory compute resources, the SAP HANA database, business applications, services and mission-critical support. This is our sweet spot, and the value for our customers is defined not only by what the cloud brings, but by what moving to a real-time business model with SAP HANA makes possible. How do we give customers maximum freedom to focus on their business while we focus on their systems?

Managed services

Reducing lower-level IT effort and refocusing it on business innovation is one of the big cloud goals, and it is where SAP HANA Enterprise Cloud shines. We are offering our customers the richest, most relevant and effective "factory"-like services in the industry. On the lower end, we can migrate an existing SAP ERP system to SAP HANA within one to five business days. And while in a day we cannot provide a fully corrected and tested HANA-based system, this helps our customers take most of the guesswork out of planning a complicated system migration project. Hence, it saves customers weeks to months of valuable time.

On the high end, we are able to build highly customer-specific, pre-configured systems including industry-specific content and best-practice content, as well as existing customer master data and process configurations. This creates the ideal starting point for any net-new SAP HANA customer, or for those of our customers that are radically re-thinking their existing business processes. Time to set this up is one to four weeks, which for a complex ERP system is extremely cloudy.

Pooled elastic resources

SAP HANA Enterprise Cloud consists of possibly the largest single pool of in-memory compute capacity. Based on SAP Cloud Frame, our cloud management tool (see graphic, and the topic of a future blog), we are able to dynamically allocate resources and assemble them into a customer cloud cell within minutes. We can do this across any type of in-memory server, independent of brand, make or generation. Combined with our software-defined network (SDN), this lets us provide the appropriate network bandwidth in an elastic fashion as well.

From a business-cycle point of view, our customers can come up with their own demand plan according to their business cycles, which we will then automatically provision for them, or they can choose defined system load thresholds that kick off automatic provisioning of additional compute and storage capacity.


Cloud Frame API.png

Diagram: API-driven cloud management tool SAP Cloud Frame


Mission critical support

Lastly, we take care of your system, and you can almost treat it as a black box. We will monitor health and performance in a 24x7 fashion, including clearly defined SLAs, high availability, disaster recovery and mission-critical support. The SLA-based performance of your system, as well as our management activities, is reported back to you at the customer's desired frequency.


SAP HANA Enterprise Cloud is a huge step in reducing the time and effort it takes to manage a mission-critical business application like SAP ERP, as well as the related migration to SAP HANA. It provides the key cloud value proposition of speed and flexibility AND an overall platform that enables business at the speed of thought. HANA Enterprise Cloud customers are in a unique position to turn the time and effort saved by the cloud into tangible business differentiation and innovation in the real-time business world.

Please share your thoughts or questions on general cloud topics, SAP HANA Enterprise Cloud or any other related topic. I will make sure to reply!

Attention, all IT professionals interested in in-depth education on development and modeling with SAP HANA: in case you didn't see this announcement over on SCN, the SAP HANA Distinguished Engineers will be holding a full-day technical education event on August 20th, 2013, at Medtronic in Mounds View, Minnesota (near Minneapolis). I am re-posting the announcement here:


We are excited to announce the first stop along the Road to SAP HANA Expertise, with your guides, the SAP HANA Distinguished Engineers.

The Road to HANA: It's a beautiful thing!



This is the perfect opportunity to join us in person at this free event, hosted by Medtronic in Mounds View, MN. Virtual access will also be available (stay tuned for details on that).


This valuable, information-filled day will include SAP HANA Modeling and Development in depth, with your enthusiastic and knowledgeable guides, HANA Distinguished Engineers Werner Steyn, Rich Heilman, Thomas Jung and Kiran Musunuru!


Here is what you will encounter at this juncture in your journey:


SAP HANA Modeling


In this session, participants will gain insights into advanced SAP HANA modeling practices. There are a wide variety of approaches to creating content in SAP HANA; with options comes not only flexibility but also responsibility.  Whether you are an SAP HANA XS developer, accessing SAP HANA through ABAP, or creating analytical applications, the aim of this session is to make you aware of some of the different possibilities and how they can best be used to solve business requirements.


Topics for this session include

  • Overview of the various modeling artifacts
  • General recommendations, lessons learned, tips and tricks
  • Several real-world examples & demos


SAP HANA Native Development


In this session, we will introduce various techniques for building applications on SAP HANA. We will explore OData and server-side JavaScript services, and show you how to push data-centric processing logic down into the SAP HANA database via SQLScript. We will also explore both major application server layers, ABAP and HANA Application Services, and how each can be used to build or adapt applications to leverage the power of SAP HANA.


Topics for this session include

  • XS Overview & Architecture
  • Development Environment
  • OData and server-side JavaScript services
  • SQLScript Procedures & Debugging
  • Development aspects of ABAP on HANA


Whether you'll be there in person or online, if you want to reach HANA technical enlightenment, your participation is required! Please sign up in advance here: Registration Form.


The Road to SAP HANA Expertise starts in the Twin Cities. Don't be left behind!

With SAP HANA 1.0 SP6 (Rev 60), we can now leverage the concept of dynamic filters.   There have been several requests for this type of functionality, since SAP does not recommend the use of dynamic SQL (the EXEC statement) when developing SQLScript procedures.  We now have a new statement in SQLScript called APPLY_FILTER.  This statement accepts two parameters.  The first parameter is the dataset to which you want to apply the filter.  This dataset can be a database table, database view, HANA attribute or calculation view, or even an intermediate table variable.  The second parameter is, of course, the filter condition itself. The syntax is very similar to what you would use in the WHERE clause of a SELECT statement.   In the following example, I have a SQLScript procedure which simply reads data from the “Products” table and applies a filter which is passed as an input parameter to the procedure.  The result set then shows the filtered dataset.


CREATE PROCEDURE get_products_by_filter(
            IN im_filter_string VARCHAR(5000),
            OUT ex_products "SAP_HANA_EPM_DEMO"."sap.hana.democontent.epm.data::products" )
LANGUAGE SQLSCRIPT AS
BEGIN

ex_products = APPLY_FILTER("SAP_HANA_EPM_DEMO"."sap.hana.democontent.epm.data::products",
                           :im_filter_string);

END;
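As a sketch of how such a procedure might then be invoked from the SQL Console, a filter string is simply passed as the input parameter; the column name and filter value below are hypothetical:

```sql
-- The filter string is evaluated by APPLY_FILTER like a WHERE clause
-- against the dataset. The '?' binds the OUT table parameter.
CALL get_products_by_filter('"CATEGORY" = ''Notebooks''', ?);
```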




SAP HANA has been around for a little over two years now. Its new vision and approach, as with many things novel and paradigm-shifting, can be met with resistance and disbelief. Change is not always easy, and many who are comfortable with existing approaches try to evaluate HANA in the light of those older paradigms. Hence they often fail to see the bigger opportunity – the opportunity to re-think how we fundamentally approach data processing architecture.


In the days ahead, you will see a new feature here on saphana.com entitled “The HANA Difference,” focused on SAP HANA’s defining capabilities. Within this feature you will see a variety of perspectives on these capabilities from SAP, from customers, and from our ecosystem of experts. The discussions will focus on how HANA optimizes information processing, backed up by examples, demos or code snippets. These examples will illustrate the impact of SAP HANA in simplifying application design and data management, provide deeper understanding of how SAP, customers and partners are applying these capabilities, and show how they translate into a very different way of assessing value for those who deploy them.


This blog highlights a few of SAP HANA’s defining capabilities that will be the focus of detailed blogs in the coming series, which we hope you will discuss along with us. As indicated, there is a HANA Difference. Let’s look at some of these unique HANA capabilities.


  • SAP HANA’s optimized cache awareness. HANA’s optimizer knows the locality of the data in the L1, L2 and L3 caches on the CPUs. It leverages this information, along with knowledge of the bandwidth and latencies between every cache, every CPU, and every node, in its execution plans. This accelerates repetitive queries to mere milliseconds, revolutionizing how information is digested by users.


  • SAP HANA can run real-time OLTP and OLAP on a single copy of the data, so there is no need for separate OLTP and OLAP copies. SAP HANA stores data only once for transactional and analytical applications – transforming how businesses build and consume applications, and significantly cutting down the cost of permanent storage as well as the time, cost and effort involved in ETL, backup and archiving.


  • Massively parallel processing (MPP) on a shared-nothing architecture, along with support for single instruction, multiple data (SIMD), is another unique capability. SAP HANA breaks the work into sub-components and distributes them, combining the storage and query processing capacities of several servers in a cluster. This increases capacity and reduces response times, enabling increased use of the system by end users.


  • Despite the fact that SAP HANA supports multiple domain-specific data processing capabilities, it has not sacrificed ACID compliance, reliability, full high availability, disaster recovery or supportability. What we have done is take the opportunity to re-think how these requirements can be further enhanced and simplified because of HANA’s cache- and memory-aware architecture.


  • SAP HANA’s on-the-fly schema extension capability allows for flexible business model changes, creating another strong differentiator and business value.

  • Dynamic Data Tiering optimizes the balance between data processing and data storage. Data identified as frequently used is kept in memory. Any data that hasn’t been accessed recently is purged from memory but persisted on disk, so no write-back is needed to preserve it. And cold, infrequently used data can be persisted in systems like SAP Sybase IQ or Hadoop while still being accessible dynamically via data virtualization using SAP HANA smart data access. This dramatically reduces the cost of permanent storage and the time and effort for data movement and ETL.


  • SAP HANA is more than just a database. It converges platform, database and data processing capabilities, handles spatial and textual data analysis, and provides libraries for predictive, planning and business analytics. To do what you can do in one HANA appliance, you would need several separate, individually purchased, supported and maintained components from competing systems. Now that is simplification and cost reduction.


  • SAP HANA comes with the most comprehensive data provisioning, allowing data to come from any source to benefit from the real-time performance of true in-memory computing: synchronization of mobile and machine data, analysis of streaming or sensor data, data virtualization leveraging the unique processing capabilities of the source system, batch loading, real-time replication, and in-memory massively parallel transforms.


  • SAP HANA is open and agnostic and is available for any application, data and source, giving you the flexibility and adaptability you need. It runs on commodity x86-based hardware from 9 hardware partners and 7 cloud infrastructure providers. It supports a wide variety of programming languages and integrates with certified third-party tools. Numerous ISV and start-up apps are deployed on HANA without modification. It is data agnostic, with support for structured, unstructured (i.e. text), spatial, document and sparse data.


  • Extreme linear scalability is another unique SAP HANA capability that sets it apart. Let’s see someone else do this: a year-over-year trending report for the top 100 customers over 5 years – 1,200 billion rows – in 3.1 seconds. How can real-time insight like this transform how you see and do business?




Of course, what is listed above is not an exhaustive list. There are many more topics important for technologists of all shades as well as for others in the ecosystem. We invite you to suggest additional topics that you feel would be of value to discuss, or on which you feel more clarity would be of benefit. Anything “hot” and top-of-mind – as it pertains to true real-time in-memory platforms.


Remember, the discussion around SAP HANA and related matters is not just a discussion about databases, but about how to introduce your organization to the renaissance of computing, and drive real business success from it.

One of the new features in SAP HANA 1.0 SP6 (Rev 60) is the ability to create procedures based on a procedure template.  Procedure templates allow you to create procedures with a specific interface (input and output parameters) but generic coding that leverages placeholders, or template parameters.  Currently only a subset of placeholders can be used.  For example, you can create template parameters for a schema value, a column field name, a table or view name, or the name of a procedure.  In order to create a procedure template from the HANA studio, choose “New”, then “File”.




In the following dialog, enter the name of the procedure template and add the file extension .proceduretemplate.


The procedure template editor allows you to define the template parameters as well as the template script. In this example, I am creating a template which simply gets the number of rows from a table.  The table name will be inserted from the template parameter called “table”.  You will notice that I reference this parameter in my code by using angle brackets (< >).  You can give any name to the parameter as long as you reference it with the same exact name wrapped in these brackets. Again, you can only use these parameters in certain situations, such as when specifying a schema, column field name, table name, or procedure name.
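The editor screenshot is not reproduced here, but a minimal sketch of what such a template script might look like follows; the procedure name, interface, and the template parameter name "table" are assumptions for illustration:

```sql
-- Template script: <table> is a template parameter that will be replaced
-- with the value supplied when a procedure is generated from this template.
CREATE PROCEDURE get_row_count( OUT ex_rowcount INTEGER )
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  SELECT COUNT(*) INTO ex_rowcount FROM <table>;
END;
```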




Now that you have a procedure template, you can create a procedure based on that template.  You can do this from the new procedure wizard which has been introduced in SP6 as well.  From your project, choose “New”, then “Other”.  In the SAP HANA Development folder, you will see an artifact called SQLScript Procedure. Choose this and click “Next”.




Enter the name of the procedure.  There is no need to type the .procedure file extension here; the wizard will add it automatically when you navigate out of this field.  Click the “Advanced” button.  Here you can specify the name of the procedure template from which you would like to create your procedure.




The procedure editor will allow you to define the values for the template parameters. In this example, I am simply specifying the products table.




The runtime object which is generated in the _SYS_BIC schema will have the source code from the template, with the values for the template parameters inserted accordingly.   If you were to change the template at any point, all procedures created based on this template would be updated and activated automatically.




Of course we can call this procedure from the SQL Console and the result set, which is the count from the products table, is shown.
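As a sketch, such a call from the SQL Console might look like the following; the package path of the generated runtime object is a hypothetical example:

```sql
-- Call the generated procedure; '?' binds the OUT parameter,
-- which returns the row count of the products table.
CALL "_SYS_BIC"."sap.hana.democontent.epm.procedures/get_row_count"(?);
```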




So this feature has been introduced to help developers become more efficient and reduce redundancy in their coding, by using templates to create procedures with very similar structures in both the interface and the code itself.  Check out the video demonstration on the SAP HANA Academy.
