
Last week we held an exclusive event, SAP HANA 4 IoT, for the SAP HANA LinkedIn Group Community focusing on various topics across SAP HANA and the Internet of Things (IoT). We had about 200 attendees online and 65 live at the SAP Office in Palo Alto.

The event opened with an update on the SAP HANA strategy and then jumped right into IoT.


I want to thank everyone who made this SAP HANA 4 IoT event possible. Special thanks go to our sponsor, Cisco, and to our speakers, Prakash Darji, Yuvaraj Athur Raghuvir, Krishna Balaji, Hari Guleria, Amr El Meleegy, Oliver Mainka, and Scott Feldman, for making this community event a success. Finally, a big shout-out to all our participants for attending on-site and virtually. Your attendance and engagement made this event fun, and your feedback is certainly appreciated.

If you missed the event, no worries, here are downloadable materials from the event:


A.   Here is our event AGENDA: 

HANA4IoT – Nov 11th, 2014 AGENDA

Start    | End     | Topic                                                        | Who
12:30 PM | 1:00 PM | Arrival / Registration / Coffee                              | Registration
1:00 PM  | 1:15 PM | Welcome and start-up                                         | Scott Feldman / Hari Guleria
1:15 PM  | 1:45 PM | Keynote – SAP HANA Strategy                                  | Prakash Darji
1:45 PM  | 2:15 PM | SAP Internet of Things for Business                          | Yuvaraj Athur Raghuvir
2:15 PM  | 2:45 PM | SAP HANA – SP9 & Cloud Updates                               | Krishna Balaji
2:45 PM  | 3:00 PM | Coffee and drinks break                                      | –
3:00 PM  | 3:30 PM | HANA 4 IoT – Competitive Differentiator                      | Hari Guleria
3:30 PM  | 4:00 PM | Suite on HANA – Current Developments                         | Amr El Meleegy
4:00 PM  | 4:30 PM | HANA Demo – Predictive Maintenance                           | Oliver Mainka
4:30 PM  | 5:00 PM | SAP HANA Q&A Session with Speakers                           | All speakers
5:00 PM  | 5:15 PM | Wrap-up – Floor drawings and prizes (must be present to win) | –
5:15 PM  | 6:00 PM | Happy Hour for networking with wine, beer, and bites         | –


B. Attached video links to the event

HANA4IoT – VIDEO LINKS

Event | Duration | Link
Welcome & Start Up: Scott Feldman & Hari Guleria | 13 min | https://cisco.webex.com/ciscosales/lsr.php?RCID=be3c8f3115824f4ca9a8efd4dfb565fa
What is the Internet of Things – Video | 4 min | http://youtu.be/DbA19nJm5Jo
SAP HANA Strategy – Prakash Darji – Keynote | 14 min | https://cisco.webex.com/ciscosales/lsr.php?RCID=44c16ab0b7774f07b5621773a8c78e2c
SAP HANA Strategy – Prakash Darji – Q&A | 13 min | https://cisco.webex.com/ciscosales/lsr.php?RCID=591fe46b20a747cbb4c80949848da35a
SAP Internet of Things for Business – Yuvaraj Athur Raghuvir | 35 min | https://cisco.webex.com/ciscosales/lsr.php?RCID=7711d4a79ec64b8d9db950dcde7a241e
SAP HANA – SP9 & Cloud Updates – Balaji Krishna | 46 min | https://cisco.webex.com/ciscosales/lsr.php?RCID=e4cec6a81fd244848b4bed73a349ab1e
HANA 4 IoT – Competitive Differentiator – Hari Guleria | 32 min | https://cisco.webex.com/ciscosales/lsr.php?RCID=4344e151ac53443b8f17da60bfed56ae
How to build your own IoT App on iOS – Hari & Tim | 4 min | http://youtu.be/nXimRH7TtuQ
Suite on HANA – Current Developments – Amr El Meleegy | 29 min | https://cisco.webex.com/ciscosales/lsr.php?RCID=c68ca37a7b4e4a5ab1dc718cf105339e
Predictive Maintenance & Service Live SAP HANA4IoT Demo – Oliver Mainka | 30 min | https://cisco.webex.com/ciscosales/lsr.php?RCID=9563e8e10e784fe3baa90b39f21c1567
HANA @ Cisco – DISE – Doug Wilson | 12 min | https://cisco.webex.com/ciscosales/lsr.php?RCID=1ba9958d4dff4d63971f3e54f1361b63
SAP HANA Q&A Wrap-up | 26 min | https://cisco.webex.com/ciscosales/lsr.php?RCID=7628d22196694f0c95652d1813b13b1f

 

For questions or further details, email: scott.feldman@sap.com; hari.guleria@pridevel.com

Follow us on Twitter: @sfeldman0 & @HariGuleria

Feedback form for remote viewers: https://docs.google.com/forms/d/1_wJFy9mPlOupiDcCANs8qNFpiAEolhE8-viwXoCLRGQ/viewform?c=0&w=1&usp=mail_form_link


C.   Attached below are the pdf links to all the decks presented

HANA4IoT – Session Decks (PDF)

In my previous blog, I explained that SAP HANA brings a new approach to data management: the in-memory-first approach. I also mentioned that several customers are already experiencing the performance gains and simplification of SAP HANA, and that Forrester Research has written about the savings companies can achieve with it. For a detailed understanding of how SAP HANA can be applied to your business, this interactive document has many resources to help you determine the SAP HANA fit for your company.


Back to SAP HANA’s data management approach: with SAP HANA, all data is in a columnar format and in memory by default. This single data copy is used for both transactions and analytics. To be clear, saying that all data is in memory does not mean that you can’t move some of the warm or cold data to disk or to a remote system and still access it when you need it. The dynamic tiering option in SPS 09, the near-line storage (NLS) option, and smart data access all allow that.
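As a hedged illustration of the smart data access idea, here is a minimal sketch, assuming SAP's hdbcli Python driver is installed and a remote source has already been registered by an administrator; the remote source, schema, and table names are invented for illustration, and the exact syntax should be checked against the SAP HANA SQL reference.

```python
# Minimal sketch: expose a remote (cold) table as a virtual table and query it
# alongside hot in-memory data. Names below are illustrative, not real objects.
from hdbcli import dbapi  # SAP HANA Python driver, assumed available

conn = dbapi.connect(address="hana-host", port=30015, user="DEMO", password="***")
cur = conn.cursor()

# Create a virtual table pointing at a table in a previously registered remote
# source; the 4-part path format depends on the remote source type.
cur.execute("""
    CREATE VIRTUAL TABLE "SALES"."V_ORDERS_ARCHIVE"
    AT "ARCHIVE_SRC"."<NULL>"."SALES"."ORDERS_2010"
""")

# Queries against the virtual table are federated to the remote system at
# runtime, so cold data stays accessible without being loaded into memory.
cur.execute('SELECT COUNT(*) FROM "SALES"."V_ORDERS_ARCHIVE" WHERE STATUS = \'OPEN\'')
print(cur.fetchone())
```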


With the clarification out of the way, let me share my next 5 questions. As I did in my previous blog, I am answering them for SAP HANA.


6. With DRAM prices plunging, should businesses continue to use legacy disk-based databases?

You don’t have to. SAP HANA is an ANSI SQL-compliant, in-memory platform designed to take advantage of the latest hardware and in-memory technology innovations. SAP HANA does use disk to persist data so that data can be restored in case of a power failure or disaster, but think of disk for HANA as a new form of tape backup system.


7. Do SAP Applications run better in a true in-memory solution?

Yes, most SAP applications are already optimized to run better on SAP HANA because their business logic now runs inside SAP HANA, and SAP is working on optimizing the remaining applications. Reports and business logic that depend on aggregates no longer need to rely on materialized views. Instead, they can aggregate up-to-date information on the fly. Think about that for just a second: no more pre-aggregates, no more materialized views, no more special indexes to get performance gains with SAP HANA.
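To make the on-the-fly idea concrete, here is a minimal sketch, assuming the hdbcli Python driver; the table and column names are invented for illustration and are not the actual SAP objects. Instead of reading a pre-aggregated totals table that every posting has to keep up to date, the report simply aggregates the line items at query time.

```python
# Minimal sketch; LINE_ITEMS, COMPANY, FISCAL_YEAR, ACCOUNT, AMOUNT are
# illustrative names only.
from hdbcli import dbapi

conn = dbapi.connect(address="hana-host", port=30015, user="DEMO", password="***")
cur = conn.cursor()

# Old pattern: SELECT ... FROM ACCOUNT_TOTALS (a materialized aggregate that a
# transaction had to maintain). New pattern: aggregate the detail rows on the
# fly; the in-memory columnar scan makes this fast enough per request.
cur.execute("""
    SELECT ACCOUNT, SUM(AMOUNT) AS BALANCE
    FROM   LINE_ITEMS
    WHERE  COMPANY = ? AND FISCAL_YEAR = ?
    GROUP  BY ACCOUNT
""", ("1000", 2014))

for account, balance in cur.fetchall():
    print(account, balance)
```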


8. Is my IT landscape simpler and less costly in the long run with a true in-memory solution?

Yes, with SAP HANA’s modern design, you can run more business logic close to the data, run transactions and analytics on the same database instance, and even avoid using an application server in some cases. SAP HANA is an integrated platform that processes almost all data types, including structured, unstructured, text, text in binary files, streaming data, and spatial data. Additionally, with columnar tables, advanced compression, no materialized views, and the ability to handle different workloads on a single copy of data, more data can be processed efficiently in memory.


9. If I want to run transactions and analytics on the same system, do I need more DRAM and CPU resources?

SAP HANA minimizes the use of DRAM by maintaining a single data copy for transactional and analytical workloads and by using advanced data compression to store data in less space than its original size. As far as CPU is concerned, SAP HANA often operates directly on compressed data, avoiding unnecessary compression and decompression operations. Additionally, it doesn't need CPU cycles for synchronizing multiple data copies, converting between row and columnar formats, or redirecting workloads between disk and in-memory stores.


10. Who has the leading true in-memory database?

SAP HANA is a true in-memory database with a proven track record at thousands of customers. Several customers are live with SAP Business Suite, one of the most demanding and mission-critical enterprise transactional applications, on SAP HANA. Additionally, many ISVs and hundreds of start-ups are developing applications using SAP HANA.


These questions complete my list of top 10 questions and answers from the SAP HANA side.

 

What questions do you have? Let me know, and I will try to answer them in future blogs.


Learn more about SAP HANA.


SAP is a Cornerstone-level sponsor at next week's SUSECon '14

 

SUSECon '14 takes place next week at the Hyatt Regency Grand Cypress in Orlando, Florida.

 

 

Dan Lahl, Vice President of Product Marketing at SAP, will participate in the opening keynote at 9 a.m. ET on Tuesday, November 18.

 

SAP will be in booth #314 in the Technology Showcase, Tuesday through Thursday. Be sure to stop by and speak with experts about the SAP HANA Platform, SAP HANA Cloud Platform, and SAP Data Management, as well as SAP Simpler Database Choice.

 

Join SAP for these informative sessions:

 

Wednesday, November 19:

  • "Calling All Developers:  SAP HANA Cloud Platform (PaaS) is Open for Your Apps"

          Presenter:  Floyd Strimling, Sr. Director SAP HANA Cloud, SAP

          10:50-11:50am in Room Palm AB

 

 

Thursday, November 20:

  • "High Availability and Disaster Recovery for SAP HANA with SUSE Linux Enterprise Server for SAP Applications"

          Presenter:  Uwe Heinz, Product Manager, SAP

          9:40-10:40am in Room Palm AB

 

  • "Dell Solutions For In-Memory Computing with SAP HANA Helps Acero Estrella Support Business Expansion"

          Presenter:  R. Nathan Saunders, SAP Global Alliance Manager, Dell Inc.

          10:50-11:50am in Room Palm AB

 

 

We look forward to seeing you next week in Orlando!

Steve Lucas

A Roadmap for Simple

Posted by Steve Lucas Nov 12, 2014

Our CEO, Bill McDermott, has, so to speak, thrown down the gauntlet on enterprise complexity and issued a challenge to every single SAP employee: make our valuable products and solutions simple to understand, simple to discover, simple to deploy, simple to use, and simple to support. Do that, he promises, and our customers will be successful. There is no more noble pursuit for SAP than that, as far as I am concerned!

 

Why so much focus on simplicity? The answer is that we believe enterprise computing, in a broad sense, has become absurdly complex (both on premise and in the cloud). I should also share that we aren't blind to the fact that SAP has to lead from the front on simplification. We need to be committed to the mission of simplification for the long term.

 

Hence, my commentary here related to the pursuit of simplification spans all enterprise software worlds – on premise, the cloud and anything in between. And rather than just tell you what we plan on doing or how we are simplifying, I will provide some concrete examples of what we are delivering today that are down-payments on our promise of simple.

 

Before we get there, I think it’s worth contemplating our passion at SAP. What we LOVE doing, put simply, is to enable customers to run their businesses, end to end in the simplest and most agile manner possible. To that end, we put the people who use products and solutions from SAP first and will enable them to run their business from their phone, anywhere, anytime through the cloud. Bill articulates this often as "The Cloud Company Powered by HANA"...which in itself is a simple but profound statement. How I interpret that as an SAP employee goes something like this: "Hey Steve, you need to make our products easy to use, run and love" and I am up for that call to action!

 

How will we do that?

That depends on your point of view and what you need from SAP. On the one hand, if a business user just wants something like a simple tool to visualize data and nothing more from SAP, then said individual can go to saplumira.com, download Lumira for free and have a great experience. Done! And frankly that end user shouldn’t have to talk to anyone at SAP to have that great experience.

 

On the other hand, a company may want an application like the best HR solution available anywhere in the cloud and nothing more. Likewise, they should be able to go to successfactors.com and sign up right away.

 

But with all that said, what about companies who want more than a singular & focused product experience?


I believe most enterprise organizations want SAP to deliver a simplified, end-to-end business applications suite that is integrated with their global business network, all powered by our modern platform and easily extended or customized to suit any need. And oh by the way that experience can be had from SAP entirely in the cloud, on premise or a mixture of the two. And this, my friends, is where SAP will outshine all our rivals. Why? Because I believe there is no other company on the planet equipped to deliver this except SAP.

 

The reason I make this last point is not because it’s what SAP wants to sell, but again – because I believe it’s what customers WANT. As a case in point, think about what’s happening today in the cloud and how similar it is to the “best of breed” apps developed in the mid 90’s. The scenarios playing out in the cloud aren’t that different actually – a bunch of companies who have a core application (e.g. CRM, HR, etc.) in the cloud, trying to expand beyond their core…then customers end up with integration challenges between those apps or in the current case, clouds. Tough to run a business that way! The point is that our customers typically don't ask us to think about one thing. They ask us to think about a lot of things, including how a business, big or small, can best operate end to end.

 

Now equipped with that background, I think we are ready to discuss the roadmap for Simple. I’ve chosen to break this down into three areas:

 

  1. Simplifying SAP's Focus & Portfolio
  2. Simplifying Enterprise Infrastructure & Applications
  3. Simplifying Your Experience & Consumption

 

1. Simplifying SAP's Focus & Portfolio

With thousands of products produced by SAP, it’s challenging to take it all in as well as make sense of what you need and when you need it. To that end, we’ve made the decision to simplify our product roadmap and focus on three key areas – Applications, Network and Platform. It makes sense to do this as our customers have needs that span all of these areas, but most often it’s the consumer of these technologies that is very different. For our business apps and network, generally speaking it's business users and for platform technologies it's IT (with admitted exceptions). This thinking tightens our aperture and gives us the ability to focus on what business challenge we are trying to solve, who buys products from SAP and why. Hence, we are on a journey to not only innovate but to simplify our apps, network and platform portfolio.

 

As a side note, this also enables us to foster conversations between customers with similar interests. Our wish is that the SAP user group communities will embrace this categorization and focus education/enablement/dialog efforts around those areas to better align.


2. Simplifying Enterprise Infrastructure & Applications

Infrastructure-wise, I know you are all expecting me to use the "SAP HANA Platform" as an example of simplification – and why not? We massively simplified the enterprise infrastructure required to develop or run enterprise business applications on premise by consolidating database, analytics, application processing, planning, text processing, predictive, and data management all on one platform with one copy of your data. This means you don't need to buy and integrate all those parts separately. For the record, every SAP application we build already runs or will eventually run on the HANA platform, including our cloud apps like SuccessFactors. (Should any detractor state anything to the contrary, please refer them to this blog or to me directly.)


On the application side of the business, we are leveraging HANA to expand breadth & capability yet reduce complexity in core applications...and any time you hear SAP refer to "s" innovations - Simple Finance, Simple Logistics, etc., you should assume SAP has taken a business application like Logistics and created compelling new capabilities for that business area on HANA as well as re-written some of the existing code natively on HANA to be more "svelte". More importantly you should assume that these innovations and simplifications are NOT available on third party database products. Once a CFO spends some quality time understanding Simple Finance from SAP, any question she might have had regarding "why HANA" will quickly disappear.

 

What's so great about Simple Finance? With it, we will give every company on the planet the ability to know its cash balance on hand with precision at any moment, versus just at month or quarter end. This will impact the market in a profound and positive way. Like I said, CFOs love Simple Finance from SAP (this is awesome, BTW)!

 

Speaking of CFOs, I'd be remiss if I didn't at least mention that we offer managed Infrastructure as a Service, known as HANA Enterprise Cloud (commonly called "HEC"). This is a data center network we've developed in partnership with some pretty brilliant companies like IBM, HP, etc., that simplifies your operations by allowing you to move your existing SAP applications investment in its entirety to these managed data centers. I don't want to get caught up in the exercise of labeling what HEC is, so I will just tell you that it's cheaper to run SAP on HEC. ASAP. (Maybe I should simplify myself by eliminating some acronyms!)

 

While those are very compelling examples of simplification, yet another is the recently launched Platform as a Service offering, SAP HANA Cloud Platform. To be clear, HCP is a cloud-based platform for anyone using any business application from SAP to build and extend their apps, regardless of whether you've deployed HANA on premise or not. It exists to simplify the customization of all business applications from SAP, starting with on-premise ERP and SuccessFactors in the cloud. It's free to sign up and try out as well. It's awesome!

 

With HCP, all the software required to build a solution (user portal, analytics, data integration, identity, mobile, collaboration, API management, document management, data management, and much more) is pre-integrated and ready to run in the cloud. (In other words, "no assembly required"; just start making an app!) If you want to see real apps in action, talk to people like @_bgoerke, @prakashdarji, and a number of SAP Mentors, including @wombling, who are doing a great job building apps such as the "Enterprise Jungle socialgraph" on the HCP platform.


[Graphic: SAP HANA Cloud Platform (HCP.png)]

 

As you can see in the graphic, the idea is to make the HANA platform your broad, single product underpinning all business applications SAP delivers, both on premise and in the cloud. This implies our customers will have a more consistent experience as well, so we aren't pursuing this for the sake of speed or performance alone! Our broad intention with HCP (remember, HCP is Platform as a Service) is similar to HANA, but it will be focused on giving customers one simple way to customize all SAP business applications, whether you use or run HANA on premise or not. (Even if you never plan to use HANA, you can still use HCP!)

 

This model of simplifying business applications via HANA and labeling them "s" or simple solutions as well as enabling the extension of these SAP applications (or data) via the HANA Cloud Platform is a formula you will see SAP use repeatedly over the next several years.

 

3. Simplifying Your Experience & Consumption

Your experience and success with SAP is paramount. We need to build more solutions that you will love to use. It would also be helpful if we made them easier to access and discover. That's why we created solutions like Fiori and made it free for all existing SAP customers. Fiori literally transforms how people experience SAP. Beautiful, fast and mobile is exactly what you get with Fiori and the experience spans most SAP business applications. If we haven't "Fiori-ized" an SAP application yet, we will.

 

Of course the experience you deserve with SAP goes way beyond Fiori. We're encouraged by your feedback on Fiori (inspired even!) and have looked at how we can extend and expand that everywhere. With SAP Lumira for example, you can google it or visit the website (saplumira.com) and with one click, download an amazing data visualization tool - for free! The product makes even the ugliest of data sources pretty to look at and easy to understand.

 

On the subject of accessibility, you've also told us we need to make it easier to learn how to use our portfolio of products, ranging from ERP and HANA to Fiori and Lumira. Consequently, beyond making our tools just flat-out easier to use, we've made free and low-cost, accessible education a top priority. We live in the age of Khan Academy, Coursera, and Udacity; online education, with both free and paid options, is now the norm. Tools like SAP HANA Academy and SAP Learning Hub are driving continuous education (remember, the cloud world has frequent and rapid releases) whether you are in Silicon Valley, India, China, or anywhere else around the world.


We're also making it easier to discover amazing third party apps built on the SAP platform. There are thousands of amazing apps you never even knew existed built on SAP HANA and HANA Cloud Platform just waiting to be discovered, all in the SAP HANA Marketplace. I can't wait for you to see what we have in store for you at SAPPHIRE NOW 2015 regarding the marketplace and what's going on with developers building business apps on SAP's platform all over the world! It's awesome!

 

Lastly, we know that your success is a journey and your targets evolve over time. We want to make it easy to understand how to best leverage SAP so we can help you on that journey... so we've created journey maps that articulate in 5 simple steps how customers can explore customer success stories, identify use cases, try, deploy with easy cookbooks and experience our products. We've begun the journey with dozens of easy to access software evaluation editions via SAP HANA Marketplace, including BW on HANA, ERP on HANA, CRM on HANA, Cloud for Sales, and Customer Engagement Intelligence on HANA.

 

Wrapping this up...

We know your journey isn’t trivial. We know that no two SAP customers are alike. We know that many of our customers have different starting points like SAP ERP or SuccessFactors or BusinessObjects. We know that heterogeneity is the name of the game in the cloud (as well as on premise) and we have to collaborate with other vendors to make this a reality. This is why we’ve encouraged our customers to get started with HANA ASAP. Moving SAP on premise business applications to HANA puts you that much closer to additional strategic options like moving to the HANA Enterprise Cloud (our aforementioned IaaS) and creating a more seamless flow between other SAP solutions. It also gains you immediate entry to all the simplified apps & solutions SAP began rolling out with Simple Finance and will continue to do with myriad other products in the near future. Your organization needs to be on SAP HANA to capitalize on the influx of simple, lower TCO innovation opportunities coming your way. Once we arrive at our HANA waypoint together, we will be in a position to do amazing things, together.

 

I view all of this collectively as the pursuit of “The Cloud Company powered by HANA”. I believe Bill McDermott's proclamation to “Run Simple” is the way forward and we will endeavor to make this roadmap for simple a reality for you, every single day.

SAP pioneered in-memory database and application platform technology with the introduction of SAP HANA four years ago. While SAP HANA has reached 4,000+ customers and 1,700+ startups, and industry analysts acknowledge that an in-memory database is no longer optional, disk-based database vendors are playing catch-up by adding in-memory caches to their vintage database systems. This means that right now, businesses are actively investigating which in-memory data management solution best fits their needs.

 

From a technology perspective, before we look at how to identify the right in-memory data management solution for your particular business needs, it is important to distinguish between the two different technical approaches used to deliver in-memory data management: in-memory first and disk first.

 

SAP HANA uses the in-memory-first approach, where all data is maintained in memory by default and moved to disk only when specified. This allows business users to get real-time answers to any question at any level of detail, in a self-service manner, with nothing more than a data visualization tool and sufficient authorization to access the data. There is no need to find a DBA to move data into memory. SAP HANA supports transactions, analytics, and advanced analytics on a single copy of this in-memory-first data.

 

Database vendors who are modernizing their disk-based databases to support in-memory data management use the disk-first approach. In this scenario, the disk-based database engine is augmented with a new technology layer, which often consists of a read-only, column-based in-memory cache used for analytical workloads only. Contrary to the in-memory-first approach, databases that follow the disk-first approach keep data on disk unless a table or set of tables is explicitly configured as in-memory tables. This means business users and IT need to first determine which data has to be accessed in real time. When this in-memory discovery is complete, IT configures the system memory to maintain the data in columnar format for analytics, in addition to maintaining it in row format for transactions. At runtime, the database has to absorb the overhead of keeping the two copies synchronized and of redirecting queries and transactions to the appropriate copies. Additional system resources are required to merge data from disk if not all data required by the application is in memory. When business users need real-time answers from data that is still on disk, response times become unpredictable and IT intervention is again required.

 

With this technical understanding out of the way, what questions should businesses ask their vendors to determine which in-memory solution better fits their needs? Here are my first 5 questions; I have answered them for SAP HANA.

 

1. Does the system require a lot of tuning to accelerate my applications?

Since SAP HANA keeps all data in memory unless specified otherwise, applications have faster access to the data. Additionally, since SAP HANA supports both transactions and analytics on the same copy of the data, there is no system overhead for synchronizing data copies or redirecting workloads. This eliminates the need for extensive tuning, such as identifying the tables that are limiting performance, altering them into in-memory tables, and, in some cases, eliminating OLAP indexes. No code changes are required for standard SQL applications; stored procedures require code changes due to the lack of a common standard.

 

2. Can I achieve predictable response time for ad-hoc queries?

With SAP HANA, all data is in memory by default and accessible in real time, delivering predictable response times for all queries, planned and unplanned, compared to a disk-based database with an in-memory cache. This is because reading data from memory is faster and more predictable than reading data from disk.

 

3. Can I get the full picture of my business with the necessary level of granularity, in real-time?

Yes, with SAP HANA you can slice and dice your data in a self-service manner and get answers in real time, because all data is in memory by default; no configuration is needed. By no configuration, I mean no need to mark certain tables as in-memory tables, create intermediate materialized views, or create indexes. Also, since the same data copy is used for both transactions and analytics, your answers are always based on up-to-date, real-time information.

 

4. My needs for data discovery are always changing; do I need my DBA for every new question?

No, SAP HANA doesn’t require additional DBA time, because all hot data is always accessible in memory. So no matter what question you ask, you get an answer in a fast and predictable time. Think of it as having all your company's information on Google.

 

5. My analytic application modifies data. Can I use an in-memory solution for this?

Yes, SAP HANA is a true ACID-compliant in-memory platform that performs both analytics and transactions on a single copy of data in memory. Separate copies are not maintained for transactions in row format and analytics in columnar format.

 

 

With the near-line storage (NLS) option and smart data access, customers can move cold data to a remote source and access this data using virtual tables and query federation. With the dynamic tiering option in SPS 09, warm data can be kept on disk using extended tables. While these options give you the flexibility to cost-effectively manage very large data sets, accessing remote tables and disk-based tables will obviously be less performant than accessing in-memory columnar tables.
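As a rough sketch of the extended-table idea, assuming the dynamic tiering option is installed and the hdbcli Python driver is available; the "USING EXTENDED STORAGE" clause is given from memory of the SPS 09 documentation and, like the table names, should be treated as illustrative rather than definitive.

```python
# Illustrative only: ORDERS_HOT / ORDERS_WARM are invented names.
from hdbcli import dbapi

conn = dbapi.connect(address="hana-host", port=30015, user="DEMO", password="***")
cur = conn.cursor()

# Hot data stays in an ordinary in-memory columnar table ...
cur.execute("CREATE COLUMN TABLE ORDERS_HOT (ID INT PRIMARY KEY, AMOUNT DECIMAL(15,2))")

# ... while warm data goes into an extended, disk-backed table managed by the
# dynamic tiering option. Both can still be queried together.
cur.execute("""
    CREATE TABLE ORDERS_WARM (ID INT PRIMARY KEY, AMOUNT DECIMAL(15,2))
    USING EXTENDED STORAGE
""")

cur.execute("SELECT SUM(AMOUNT) FROM ORDERS_HOT UNION ALL SELECT SUM(AMOUNT) FROM ORDERS_WARM")
print(cur.fetchall())
```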

 

 

Look for more questions in my next blog.


Learn more about SAP HANA.

Big Data is full of signals. You’re already collecting data from multiple sources. What next?

 

A unique opportunity

Excitement is building for Australia's inaugural SAP tech conference, SAP Architect and Developer Summit in Sydney, Nov 20-21, 2014. Leveraging content produced for the SAP TechEd conference series, this event is hands-on training for technical professionals, led by local and international experts.

 

With a whole stream dedicated to learning how to launch a Big Data initiative, your questions will be answered in detail by two of SAP's top data scientists, Shailesh Tekurkar and Shantanu Goswami. SAP data scientists have helped more than 100 organisations globally deliver breakthrough business results with Big Data, and they will share this experience through a series of lectures and workshops.


Learn & Code & Connect

Of the 39 total sessions to choose from at the event, 14 examine SAP HANA features, add-ons, examples and integration with other solutions. However, you don’t need to be an SAP HANA customer or know the solution in detail to benefit from the event and answer questions like:


New skills, experiences and friends

Have you experienced ‘Design Thinking’? It’s a powerful new way to solve problems and unlock potential, and most of all, it’s fun! Shailesh and Shantanu’s session on who is successfully using Big Data around the world, and how, will include Design Thinking to help you define your predictive analytics strategy and jump-start your Big Data roadmap.

 

And if that leaves you wondering what the other 25 sessions are about, here are a few clues:

So if you want to learn how to move your organisation from looking in the rear-view mirror to predicting new developments, capitalising on future trends, and responding to challenges before they happen, see you at:

SAP Architect and Developer Summit

Thursday 20 - Friday 21 November, 2014
Sydney, Australia
Cost: AUD 695.00

View the agenda         Register here          Watch the video          Twitter: @SAPANZ, #SAPdev


Are you planning to migrate to SAP HANA? Here are a few tips that will help you come up with the most cost-optimized HANA landscape configuration for your migration.

Tip #1:  Size your application before you look at the SAP HANA server
Proper sizing of the SAP HANA server is very important. If you undersize the server, you will have performance issues; if you oversize it, you will pay for extra capacity that you do not need.
SAP has developed a comprehensive set of tools and resources to help customers size their HANA workloads. The SAP Quick Sizer tool is a quick and easy way for customers to determine the CPU, memory, and SAPS requirements for running their workloads on SAP HANA. Check the SAP HANA Sizing Overview document to quickly find which tools and resources to use to size the different HANA workloads (a rough rule-of-thumb memory estimate is also sketched after this list), including:
  • Sizing New Applications, “Initial Sizing” section - provides an overview of the steps and resources for sizing stand-alone HANA, SAP Business Suite on HANA, SAP HANA Enterprise Search, Industry Solutions powered by SAP HANA, SAP NetWeaver BW powered by SAP HANA, and other sizing guidelines for using HANA as a database.
  • Migrating to SAP HANA Applications, “Productive Sizing” section - contains the sizing guidelines to help you determine HANA system requirements when migrating your existing applications to HANA.
  • Sidecar Scenarios Sizing section - describes the sizing process for running SAP HANA Enterprise Search, the CO-PA Accelerator, and other SAP applications on SAP HANA in a sidecar scenario.
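As a purely illustrative back-of-the-envelope check (the Quick Sizer and the sizing notes remain authoritative; the compression factor below is an assumed example, since actual ratios vary by data), a commonly cited rule of thumb estimates HANA memory from the uncompressed source data footprint and then doubles it to leave working space for intermediate results:

```latex
\text{RAM}_{\text{HANA}} \approx \frac{\text{source data footprint}}{c} \times 2
\qquad \text{e.g.} \qquad \frac{4\,\text{TB}}{4} \times 2 = 2\,\text{TB}
```

where c is the assumed compression factor (here 4) and the factor of 2 reserves memory for temporary computations and intermediate result sets.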

Tip #2:  Determine the deployment model for SAP HANA that’s best for your Data Center infrastructure strategy
The ever-expanding SAP partner ecosystem offers a wide range of HANA appliance reference architectures optimally designed and built for each deployment use case. Customers can choose from more than four hundred certified SAP HANA configurations offering unprecedented scalability and fine-grained memory sizes ranging from 128 GB to 12 TB. Besides single-server configurations, customers can easily create multi-node scale-out configurations by networking multiple nodes together to support their largest SAP HANA requirements.

 

SAP knows that each customer’s set of requirements is unique, so you get to choose from a number of deployment options for SAP HANA to meet your every business need:
  1. The appliance delivery model provides an easy and fast way to deploy SAP HANA by leveraging a preconfigured hardware set-up and preinstalled software packages fully supported by SAP.
  2. The Tailored Data Center Integration (TDI) deployment model provides SAP HANA customers with increased flexibility and TCO savings by allowing them to leverage their existing hardware components (e.g. storage, networking) and operational processes. With the TDI approach, the HANA installation is done by the customer, who aligns with each hardware partner on an individual support model.
  3. Customers that have standardized on virtualized data center operations can leverage the SAP HANA on VMware deployment model (currently in General Availability for a single HANA VM and Controlled Availability for multiple HANA VMs).
  4. Finally, customers can choose the SAP HANA Enterprise Cloud service. This fully managed cloud service allows you to deploy, maintain, integrate, and extend in-memory applications from SAP in a private cloud environment, providing cloud elasticity and flexibility with subscription-based pricing.
Additionally, SAP has recently expanded operating system options by adding support for SAP HANA on Red Hat Enterprise Linux to its initial support for SUSE Linux Enterprise Server (SLES). This wide selection of configurations and deployment options empowers SAP HANA customers to achieve business performance and innovation while retaining their choice of IT architectures.

Tip #3:  Carefully explore your options for scale-up before deciding to go with a scale-out deployment model
Understand the benefits of scale-up before you decide to scale out:
  • With a scale-up, single-node model, you have 1 server (minimal footprint), 1 operating system to update/patch/upgrade, and 1 box to operate, manage, and power.
  • With a scale-out, multi-node approach, you not only need more room and more power in your data center to deploy multiple server boxes, but your operational and management costs for cluster systems will also be much higher than for single-node systems. Although scale-out provides more hardware flexibility and lower hardware costs initially, it requires more upfront knowledge about data, application, and hardware than scale-up.
In summary, always scale up first and go to scale-out only if you really have to. Most customers find that SAP HANA’s high compression rate combined with its high scalability (up to 2 TB for OLAP and up to 12 TB for OLTP) will easily meet their business requirements. SAP HANA scalability constantly grows, spurred on by advances in multicore technologies, providing new ways to meet the most demanding scalability requirements of our largest customers.

Tip #4:  Fully understand extra options available for reducing the cost of your non-production systems
Carefully review SAP HANA TCO savings for non-production use cases to determine the most cost-efficient approach for your non-production landscape. The cost of DEV/QA hardware can represent a significant portion of the total cost of your SAP HANA system landscape; in a typical SAP landscape, each PROD system can have anywhere from 2 to 9 corresponding DEV/QA boxes.
SAP has always had less stringent hardware requirements for SAP HANA non-production systems. For example, the sizing guidelines for non-production systems allow customers to use a core-to-memory ratio twice as high as the one used for production systems.
This summer, SAP took additional steps to further relax hardware requirements, resulting in potentially significant new cost savings for customers. For more details, see this blog: Cost-Optimized SAP HANA Infrastructure for Non-Production Usage.

Tip #5: Choose the right option among the various High Availability/Disaster Recovery (HA/DR) models, including the choice of cost-optimized vs. performance-optimized System Replication
When it comes to SAP HANA total cost of ownership (TCO), one of the main cost drivers is lowering management and operations costs by designing an efficient landscape. To minimize IT costs, customers can use the servers of the secondary system for non-production SAP HANA systems.
The SAP HANA System Replication (SR) solution used in an HA/DR landscape design can be configured for either a cost-optimized or a performance-optimized mode of operation. One of the roles of System Replication is to keep the secondary hardware prepared, shortening takeover times and the performance ramp-up after a system failover. In the cost-optimized mode of operation, the hardware on the secondary site is used for DEV/QA purposes, so system recovery will take longer (since data cannot be pre-loaded into the main memory of the secondary node in preparation for takeover). By enabling customers to use their secondary hardware resources for non-production, SAP HANA helps drive down TCO by allowing customers to consolidate their environment.
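The gist of the cost-optimized variant can be expressed as two settings on the secondary site. The sketch below is only a mnemonic: the parameter names are quoted from memory of the SAP HANA administration guide and the values are illustrative, so verify both before applying anything to a real system.

```python
# Rough sketch of the "cost-optimized" secondary-site idea; parameter names and
# the allocation value are assumptions to be checked against SAP documentation.
cost_optimized_secondary = {
    # Do not pre-load replicated column tables into memory on the secondary,
    # freeing RAM for a DEV/QA instance running on the same hardware ...
    ("system_replication", "preload_column_tables"): "false",
    # ... and cap the memory the replication target may allocate, leaving the
    # rest for the non-production system (value purely illustrative, in MB).
    ("memorymanager", "global_allocation_limit"): "65536",
}

for (section, param), value in cost_optimized_secondary.items():
    print(f"[{section}] {param} = {value}")
```

The trade-off is exactly the one described above: because tables are not pre-loaded, a takeover first has to load them, so recovery takes longer than in the performance-optimized mode.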

Tip #6: Request a quote from at least two SAP HANA technology partners once you have finalized your landscape design
Once you know the sizing (CPU, memory, SAPS), HA/DR configuration, and deployment model for your applications, make sure to request a quote from at least two certified SAP HANA appliance and storage partners, even if you already have a preferred hardware partner. SAP HANA vendors are ready to price their offerings competitively, and getting multiple quotes can help you negotiate the best price for your deployment infrastructure.

Tip #7: Before you go live, validate and take advantage of services offered with your software and hardware licenses
Always make sure to run some additional testing before you go live. Customers often find that their requirements have changed during the project lifetime (scope creep), so make sure to re-validate your initial SAP HANA system sizing. First run stress and performance tests to optimize the KPIs for your production workloads before you go into production. You will also want to take advantage of SAP HANA go-live checks and any other services offered in your SAP support license (see HANA GTM Resources & Sales Community / Content / SAP HANA Services) or by your hardware partner.

Conclusion Tip:  With SAP HANA, you get many choices to help you simplify your IT and lower TCO
SAP HANA provides customers with a variety of configuration and deployment choices to meet every business need and budget. The tips above will help you get the most out of your IT and achieve better economics from your SAP HANA infrastructure and resources.
High flexibility and openness are at the core of the SAP HANA strategy for helping customers lower their TCO and maximize performance and efficiency. According to the recent Forrester study "Cost Savings with SAP HANA", organizations opting to implement SAP HANA could expect to see their software costs fall by more than 70 percent, their hardware costs by 15 percent, and their administration and development costs by 20 percent!
So I'll just wrap it up by saying this: join the rapidly growing SAP HANA customer base and follow the tips above when deploying HANA in your data centers to rapidly transform your business and maximize your ROI.

Stay tuned for more exciting updates, and thanks for visiting!

Maybe you remember this Mercedes-AMG cameo appearance in the movie Transformers: Dark of the Moon.


Driven by supermodel and actress Rosie Huntington-Whiteley, this equally attractive speed demon, the Mercedes-Benz SLS AMG, transformed into the mechanical giant Soundwave the Decepticon.

 

Mercedes-AMG, the performance division of Mercedes-Benz, creates some of the most sought-after vehicles in the world. Each engine is hand-built from start to finish with the utmost craftsmanship, making the brand a staple for speed and high status.

 

High Performance. High Quality.

 

Priding itself on high-performance, gold-standard cars, Mercedes-AMG is careful to test each engine thoroughly. There’s just one problem: engine testing is a costly and data-intensive process. And while most engine failures occur within minutes, failed tests cannot be identified until the hour-long process is completed, wasting time and money.

 

Mercedes-AMG engineering employees expressed a strong desire for real-time data analytics to improve the engine production process, and recently the parent company, Daimler AG, decided to make it happen.

 

As a trial to determine potential companywide benefits, Mercedes-AMG piloted a real-time quality assurance platform, deployed on SAP Business Suite powered by SAP HANA, that harnessed the Internet of Things and SAP Predictive Analysis software to optimize engine-testing processes when manufacturing its vehicles.

 

Now, an engine test showing unusual engine behavior can be stopped at any time during the test procedure, and the results are sent directly to engineers via tablets for faster resolution.

 

The Result


With its new system, the run time of tests on unsuccessful engines is 94% faster than before, giving Mercedes-AMG the equivalent of an extra day of testing capacity each week. Mercedes-AMG lowered internal costs, and engineers and customers alike are happier because there is now more time to focus on refining engine quality and vehicle customization.

 

In an interview with Forbes Insights, Dirk Zeller, head of IT Consulting at Mercedes-AMG, explained that this process “leads to more insights faster, as we compare more data and use complex analytics without losing time.”

 

Continued Growth


Looking ahead, Reinhard Breyer, CIO of Mercedes-AMG GmbH, explained: “This breakthrough innovation is just the start. Ultimately we want to monitor engine performance in customer vehicles.”

 

Watch the full video interview below:

 

 

After implementing this new simplified IT system, Mercedes-AMG experienced their most successful year in history, selling over 32,000 automobiles in 2013.

 

For more information about Mercedes-AMG and the full range of solutions provided by SAP, check out:

 

Forbes Insights Case Study

 

Mercedes-AMG Business Transformation Study

 

Video interview with Reinhard Breyer, CIO, Mercedes-AMG.

 

Let’s race.  Follow me on Twitter and LinkedIn.

Read the original blog post here.

On April 10th this year we had an extremely successful HANA Group event at SAP Palo Alto called ‘HANA Today’ with over 260 attendees globally. Now we want to repeat that and do it better.


Due to pent-up demand in the group for news about developments in IoT (Internet of Things) and IoE (Internet of Everything), we are holding our ‘HANA4IoT’ event on November 11th, 2014, again at SAP Palo Alto, CA, in the COIL Lab, Building 1. Don't miss this upcoming event!


Our sponsor this time is Cisco, the global leader in IoT communication and network devices. Cisco connects the unconnected with an open-standard, integrated architecture from the cloud to end devices, with exceptional reliability and security. Cisco is an SAP-certified unified HANA hardware partner and now the fastest-growing server provider for SAP HANA with its UCS servers. Cisco is also an SAP HANA customer; it reaps the benefits of SAP HANA to deliver dynamic insights to its sales executives. So, we are in great company for topics on IoT. The event, as before, will run from 1 PM to 5 PM, same spot, followed by a one-hour Happy Hour for networking, including light snacks and drinks.


You need to do two things immediately:


FIRST: You need to be a member of the SAP HANA Group for this event. This is SAP’s official SAP HANA social networking group. If you are not yet a member, please register here: In-Memory SAP HANA | LinkedIn


SECOND: REGISTER early, for two reasons. First, we only have 70 seats at the COIL Lab in SAP Palo Alto, so confirmation is priority-based; confirmation of physical attendance will be sent prior to the event. Second, the SAP Connect webcast link will only be sent to registered attendees. So whether you are physically with us or participating remotely, you can only attend by registering.

 

URL: https://docs.google.com/forms/d/15a6WT0U9OGngLsCV02HIsptT_H_XmiDwuSXjSJAQIDg/viewform?usp=send_form

 

STAY TUNED FOR MORE DEVELOPMENTS, and plan to attend live at Palo Alto or remotely from wherever you are on the planet (which is getting smaller every day). Details will be communicated only to registered attendees from this point forward.


FINALLY: Attend the event, physically at Palo Alto or remotely from anywhere on the globe via the online connect session. Instructions (URL) for joining online will be sent with your registration confirmation.

 

Here is our AGENDA: 

 

Start    | End     | Topic                                                        | Who
12:30 PM | 1:00 PM | Arrival / Registration / Coffee                              | Registration
1:00 PM  | 1:15 PM | Welcome and start-up                                         | Scott Feldman / Hari Guleria
1:15 PM  | 1:45 PM | Keynote – SAP HANA Strategy                                  | Prakash Darji
1:45 PM  | 2:15 PM | SAP IoT Strategy                                             | Yuvaraj Athur Raghuvir
2:15 PM  | 2:45 PM | SAP HANA – SP9 & Cloud Updates                               | SAP Product Management
2:45 PM  | 3:00 PM | Coffee and drinks break                                      | –
3:00 PM  | 3:30 PM | HANA 4 IoT – Competitive Differentiator                      | Hari Guleria
3:30 PM  | 4:00 PM | Suite on HANA – Current Developments                         | Amr El Meleegy
4:00 PM  | 4:30 PM | Smart Mining – HANA4IoT Demo                                 | Cisco
4:30 PM  | 5:00 PM | SAP HANA Q&A Session with Speakers                           | All speakers
5:00 PM  | 5:15 PM | Wrap-up – Floor drawings and prizes (must be present to win) | –
5:15 PM  | 6:00 PM | Happy Hour for networking with wine, beer, and bites         | –



Any questions, contact: hari.guleria@pridevel.com or scott.feldman@sap.com

Follow us on Twitter:  @sfeldman0 @HariGuleria @SAPInMemory

Keep an eye out for: #SAPHANA and #IoT


**Post-event update can be found here: SAP HANA 4 IoT Post-Event Updates


We all like innovation. We always see an opportunity for improvement, whether it’s in a product or in a business process. In fact, the economy is based on a permanent striving for change, and the worst thing for it is stagnation. From time to time we experience massive technology changes, and they are disruptive, sometimes very disruptive.


Think about ocean liners replaced by jets, break-bulk freighters by container ships, landline phones by cellular phones, disk databases by in-memory databases. The replacement technology allows for a completely different schedule, cost, and flexibility, or, in the case of databases, entirely new business processes. What do we do with the existing infrastructure? Can it continue to be used? Can it be refurbished?


What happened to the ocean liners? They became hotels or cruise ships, but only for a short time. Yes, some freighters were converted in the early days of container logistics, but again they didn't fit in the long run. And how did our lives change with cellular phones? People no longer remember the lines in front of the telephone booths at the airport or on a busy street. So, the changes will happen; they are part of the innovation process. To fight the changes is counterproductive: it costs extra energy and may not even work.

 

Despite all that, SAP has talked about non-disruption as a strategy. Other IT companies claim they promised full upward compatibility 30 years ago and can show that they have kept the promise. Let’s have a look at where these strategies make sense and where they will fail.

There are currently a few mega trends in IT, changing the way it supports business:

  1. SaaS: applications running in the cloud and offered as a service. Completely new are the generic shared-service ones, like marketplaces, business networks, etc., providing the same set of services to many clients while connecting trading partners. More traditional enterprise applications are also offered as a complete service and run for each client separately, despite sharing some system services through multi-tenancy concepts to reduce cost, much like the shared services in an apartment building.
  2. The IoT (Internet of Things) will flood us with data coming from a myriad of sensors reporting the well-being or problems of expensive machinery. What was already standard yesterday for aircraft, we will soon see in drilling machines or even toasters.
  3. The sprawling social networks have become a basic part of our lives and as such give testimony to what we like and don’t like (remember thumbs up/down), and they have become a vital source of information for business.
  4. On a much smaller scale, because it’s happening inside the applications, we see in-memory databases replacing disk-based ones at a rapid pace.

 

 

How does SAP play out the ‘non-disruption’ strategy when faced with these megatrends? To deal with textual data, digest billions of sensor messages, and operate as a SaaS application in the cloud, SAP opted for a completely new platform for its enterprise applications. HANA is not only an in-memory database using a columnar store instead of a row store; it also offers libraries for business functions, predictive math algorithms, OLTP and OLAP functionality in one system, and distributed data management for data marts or IoT solutions.

 

 

Technology-wise, HANA is truly disruptive, but that doesn’t mean everything has to change, at least not instantly. Let’s have a look at the ERP system of SAP. It has been a success story for over 20 years; thousands of companies have invested billions to set up the system, maintain it over the years, and develop customer-specific add-ons for competitive advantage. There is tremendous business value captured in the system configuration and the accumulated data. SAP kept both intact when moving forward from anydb to HANA. No data will be lost and the configuration parameters stay intact. Thanks to one of the great standards in IT, the SQL interface, all programs can continue to run unchanged. That’s the first step, and it guarantees a smooth transition from anydb to HANA. But HANA is disruptive, and the unbelievable speed improvements allow us to drop some concepts that were introduced in the nineties to guarantee short response times. In sERP, SAP could show that the transactional update of hierarchical aggregates, as introduced in the days of MIS (management information systems), is no longer necessary. Instead, any kind of aggregation for reporting or analytical purposes now happens on demand. Also, the various replicas of transactional data in different sorting sequences are no longer a performance benefit. Once running on HANA, all the programs accessing those data objects have to be changed, but this happens semi-automatically. The old data structures are replaced by SQL views with identical functionality and a similar name, and the programs continue to run without any further change. Now we can drop the redundant data structures and gain a 2x reduction in the overall data footprint.
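As a hedged illustration of this compatibility-view trick (the table and view names below are invented, not the actual SAP objects, and the hdbcli driver is assumed): the redundant totals table is dropped and replaced by a view with the same name and columns, computed on the fly from the line items, so existing programs keep selecting from the same object.

```python
# Illustrative sketch only; ACCT_TOTALS / ACCT_LINE_ITEMS are made-up names
# standing in for a redundant aggregate table and its underlying detail table.
from hdbcli import dbapi

conn = dbapi.connect(address="hana-host", port=30015, user="DEMO", password="***")
cur = conn.cursor()

# 1. Drop the materialized totals table that transactions used to maintain.
cur.execute("DROP TABLE ACCT_TOTALS")

# 2. Re-create it as a view with the same name and columns, aggregated on
#    demand from the line items. Old programs keep running unchanged because
#    they still select from the same name.
cur.execute("""
    CREATE VIEW ACCT_TOTALS (ACCOUNT, FISCAL_YEAR, BALANCE) AS
    SELECT ACCOUNT, FISCAL_YEAR, SUM(AMOUNT)
    FROM   ACCT_LINE_ITEMS
    GROUP  BY ACCOUNT, FISCAL_YEAR
""")
```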


Now the question is: shall we stop here, or do we continue to take advantage of the new properties of a columnar in-memory store? The traditional aggregation rules, implemented as programs maintaining the totals, are more than 20 years old and no longer very important. Many different selections, aggregations, comparisons, and predictions are now possible because the transactional data is kept at the lowest level of granularity, and all further processing happens in algorithms on demand rather than as part of a transaction. New programs will be added and will supersede the old functionality, but they come in parallel and as such continue to support the ‘non-disruptive’ paradigm. There is a disadvantage with this strategy: it takes more time. But it’s worth it. All customers can move forward gradually, keeping the major accomplishments of the past unchanged. A similar approach is being used for the introduction of the new UI: the Fiori apps drop in in parallel, and users or user groups have time to adjust to the new layout and interaction model.

 

More radical changes come as an option. All enterprise applications will go to the cloud and be managed by the service provider, and system maintenance will accelerate significantly. The dramatic reduction in the complexity of the data model and the removal of technically more critical update tasks have led to a system in which data inserts and read-only data retrieval dominate. When fewer data changes are happening in a system, its stability and availability increase. Most of the application components are by definition read-only and can therefore join the system at any time. The dependency between code and data is now at a completely different level. This is a prerequisite for successful use as SaaS.

 

It may sound surprising that the change to the HANA platform is the basis for these advances, but that was always the idea of platforms: they offer services all applications need and shield them from the ongoing changes in technology. The final product, sERP, looks and feels fundamentally different, solves problems that were unthinkable yesterday, and still carries the business configuration and the enterprise data into the future nearly without changes.

 

The reduction in the complexity of transactional data has even more dramatic consequences. We now see a reduction in the data footprint of 10-20x while keeping all data in memory. If we split data into actual (necessary to conduct and document business) and historical (no changes allowed any more), we can further optimize the database processes and reduce the amount of data kept in memory.

 

There were two reasons to split up enterprise systems into ERP, CRM, SRM, SCM, PLM, and HCM as transactional systems plus a separate business data warehouse. First, the sheer size of the systems outgrew single-computer capacities, so we split them up. Second, once we had independent subsystems, we could develop them at different speeds using different technologies. Having moved them all to a brand-new platform, the HANA platform, neither the size nor the speed argument is valid any more. All systems can now be reintegrated, eliminating the enormous data transfer between them. The management of one single system with the above components is many times easier and less costly, especially considering the advances in maintenance strategy mentioned above. The separate data warehouse still has value, but much of the operational reporting and analytics can now come back to the transactional system. Capacity concerns are no longer valid; the replication of the actual data partition is the answer and, as a side benefit, contributes to HA (high availability).

 

Running in the cloud, it becomes much easier to integrate the simplified Business Suite with other services in the cloud. Future enterprise solutions will make use of all the generic business services like Ariba, Concur, Fieldglass, SuccessFactors, and many others. The last question is: will everything eventually run in the cloud? No, but it will run in the cloud first. There is no principal limitation preventing cloud software from running on premise. The financial terms may be different and the maintenance rhythm will be different, but all innovation will finally spill down to the on-premise versions, even, where technically viable, to the ones on non-HANA platforms.

 

We do mitigate the consequences of disruptive innovation, but we do not carry on the past forever, just as nobody boards an ocean liner any more to go to New York, or lives without a cellular phone, or ships cargo as discrete items. We carry forward the established business processes, complement them with new ones, and finally phase some of them out.

 

By the way, all of this should happen without any downtime for the business.

Do you remember all the times you stored the results of a database query in addition to the original data for performance reasons? Then you probably also recall the significant drawbacks that go along with these so-called materialized views: they introduce data redundancy that makes your data model more complex, requires additional database operations to ensure consistency, and increases the database footprint. In this blog post, we demonstrate that, thanks to SAP HANA’s unique in-memory technology, you can simplify your data model by getting rid of materialized views.

 

This is the second part of our deep dive series on SAP Simple Finance, SAP’s next-generation Financials solution. In the first part, we show how SAP Simple Finance uses the capabilities of SAP HANA to simplify Financials and deliver non-disruptive innovation by removing redundancy. This brings significant benefits: the core data model is as simple as possible with two tables for accounting documents and line items, the database footprint shrinks by orders of magnitude, and the transactional throughput more than doubles.

 

The benefits are convincing and SAP Simple Finance demonstrates that it can be done. You may ask yourself how this is technically possible and whether you can take the same approach for your own applications running on SAP HANA. The following paragraphs summarize our answers and the longer article below gives more details. Furthermore, the next blog post in the deep dive series will explore the specific case of materialized aggregates, which refer to redundantly stored aggregation results.

 

The following example shows the motivation for materialized views in traditional database systems: You have an SQL query that selects database rows based on several parameters, for example, all open items for a particular customer. Executing this query against a large base table requires scanning through the whole table of all accounting document line items in order to find the rows that match the selection criterion. In a traditional, disk-based database system, this may be too slow for practical purposes. The alternative is building up a materialized view that explicitly stores the smaller subset of open items and is constantly updated. When querying open items for a particular customer, the database then only needs to scan through the smaller materialized view, resulting in a sufficiently fast response time also on disk-based database systems.
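
To make this concrete, here is a minimal sketch in generic SQL. The table and column names are illustrative only (not the actual SAP tables): the first statement is the on-the-fly query against the base table, the rest show the redundant copy a disk-based system would traditionally maintain and query instead.

    -- Illustrative schema: line_items(doc_id, customer_id, amount, status)

    -- On-the-fly: scan the full line-item table for one customer's open items.
    SELECT doc_id, amount
      FROM line_items
     WHERE customer_id = '4711'
       AND status = 'OPEN';

    -- Traditional alternative on a disk-based system: keep a redundant,
    -- constantly maintained copy of just the open items.
    CREATE TABLE open_items_mv (doc_id VARCHAR(10), customer_id VARCHAR(10), amount DECIMAL(15,2));

    INSERT INTO open_items_mv (doc_id, customer_id, amount)
      SELECT doc_id, customer_id, amount
        FROM line_items
       WHERE status = 'OPEN';

    -- Queries then only scan the much smaller copy.
    SELECT doc_id, amount
      FROM open_items_mv
     WHERE customer_id = '4711';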

In view of the significant drawbacks of materialized views, the goal is to replace materialized views with on-the-fly calculation. The numerous benefits of getting rid of materialization include an entirely new level of flexibility, increased throughput, and simplicity (for more details, see the long article). The costs of doing so are actually minor, as we outline below: in fact, in-memory response times of on-the-fly calculated queries are typically faster than queries against materialized views on a disk-based database. As illustrated in Figure 1, this tips the seesaw in favor of removing materialized views.

 

blog_simplefinance_v2.1_figure1-cropped.png

Figure 1: Replacing Materialized Views with on-the-fly calculation

 

Looking at in-memory database systems only, materialized views are almost never necessary or beneficial thanks to the superior performance. We show below that in-memory technology shifts the break-even point in such a way that materialization is only beneficial in rare circumstances of highly selective queries. A single core of a CPU is able to scan 1 billion line items in less than half a second. In the same time, a disk-based system could only access 50 random disk locations (based on a latency of 10 ms).

 

In line with this reasoning, SAP Simple Finance took the opportunity offered by SAP HANA and removed materialized views from the data model: tables such as all open Accounts Receivable line items (BSID) have been replaced non-disruptively by compatibility views calculated on-the-fly (for details on all changes, see the first part of this series). The same applies to materialized aggregates such as total balance amounts for each customer per fiscal period (KNC1). Hence, in the next part of the series, we continue our deep dive by looking at queries that include aggregation functions and how they can be tackled similarly.

 

This blog post continues after the break with an in-detail look at the points that we have summarized so far. We first look at the concept and maintenance of materialized views. Afterwards, we investigate the implications of materializing views and provide decision support to get rid of materialization.

 

In the following, we first consider in-memory database systems only and the new opportunities they enable when deciding whether to materialize or not. The comparison of in-memory to disk-based database systems is then considered separately. Simply accessing a pre-computed value will always be faster than computing it by running over multiple tuples, even in an in-memory database. The difference is that with the speed of in-memory technology it has now become feasible to dispense with materialization, because computation on-the-fly is fast enough in most cases, especially compared to traditional disk-based database systems and typical disk latencies of 10 ms. We investigate the situations in which systems can dispense with materializing views or aggregates thanks to the speed of SAP HANA and show that materialized views or aggregates are unnecessary – and, thus, harmful – in almost all scenarios.

 

The Concept of Materialized Views and Their Maintenance

A view represents the result of a stored query on the database. Essentially, it is a named SQL query that can be queried like any table of the database. In the following, we focus on the case of a single base table with arbitrary selection conditions: a query with projection and selection, but, for reasons of simplicity, without joins of tables as the base relation. We neglect aggregation in this section, so that a view always references a subset of tuples from a base relation according to a query.
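
Continuing the illustrative schema from above, a plain (non-materialized) view is nothing more than such a named query; this is generic SQL, not the actual SAP definitions:

    -- A view: projection and selection over a single base table.
    CREATE VIEW open_items_v AS
      SELECT doc_id, customer_id, amount   -- projection
        FROM line_items                    -- single base table
       WHERE status = 'OPEN';              -- selection condition

    -- It can then be queried like any table:
    SELECT doc_id, amount
      FROM open_items_v
     WHERE customer_id = '4711';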

 

A materialized view explicitly stores copies of the corresponding tuples in the database (Gupta and Mumick: “Maintenance of Materialized Views: Problems, Techniques, and Applications”; IEEE Data Eng. Bull., 18(2); 1995). In the absence of aggregates, materialized views have the same granularity level as the base tables. If the query also aggregates the items, we speak of materialized aggregates – they will be covered in detail in the next blog post. In contrast to a simple index on a table column, a materialized view describes semantic information, as the selection criteria can be more complex than a simple indexation by one value.

 

If an item matches the condition of the materialized view, those properties of the item that are part of the view's projection are redundantly stored. Whenever the base tables are modified, it may be necessary to modify the materialized view as well, depending on the modified tuples and the view's selection criteria. There are several cases to consider (a sketch of the corresponding statements follows the list):

  • Inserting a new tuple into a base table that matches the criteria of the materialized view requires inserting it into the materialized view.
  • As part of an update of a base table, a change of a tuple's properties does not only have to be propagated to copies of the tuple (update operation), but may also result in the whole tuple being newly included or excluded (insert / delete) if the new values of some properties change the value of the materialized view's selection criterion.
  • When deleting a tuple from the base table, all copies in materialized views have to be deleted as well.
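
The following sketch spells out these cases for the illustrative open-items copy introduced above. In a real system this propagation would happen in application code or database triggers; here it is written as explicit statements for clarity:

    -- Insert: a new open item must also be inserted into the materialized copy.
    INSERT INTO line_items    (doc_id, customer_id, amount, status) VALUES ('100042', '4711', 250.00, 'OPEN');
    INSERT INTO open_items_mv (doc_id, customer_id, amount)         VALUES ('100042', '4711', 250.00);

    -- Update: clearing the item changes the selection criterion, so the copy
    -- must be deleted from the materialized view, not merely updated.
    UPDATE line_items SET status = 'CLEARED' WHERE doc_id = '100042';
    DELETE FROM open_items_mv WHERE doc_id = '100042';

    -- Delete: removing the base tuple removes all of its copies as well.
    DELETE FROM line_items    WHERE doc_id = '100042';
    DELETE FROM open_items_mv WHERE doc_id = '100042';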

 

In summary, each materialized view leads to additional database operations whenever data in the base table is modified. Instead of just modifying the base table, additional operations are required to keep the redundant data in the view consistent. In a system with several materialized views, each transaction may require several times more modifying database operations than would be necessary to just record the change itself in the base table. This lowers the transactional throughput of the whole system as the number of costly modifying operations increases and locking frequently leads to contention.

 

This also applies in case of lazy materialization. A lazy maintenance strategy only modifies materialized views when they are accessed (Zhou, Larson, and Elmongui: “Lazy maintenance of materialized views”; in: Proceedings of VLDB 2007). However, in the typical OLTP workload of an enterprise system, both modifying transactions and reading queries happen so frequently and intermingled that the number of additional operations due to materialization remains the same: almost all transactional modifications will be followed by queries accessing the materialized views that require propagating the modifications.

 

Hence, materialized view maintenance adds to the operational complexity of a database system, requires additional modifying operations, and lowers the system's overall transactional throughput. Furthermore, there is a cost associated with the additional storage required for the redundant data, which can be substantial in size. These drawbacks have to be balanced against the main benefit of a materialized view: the increased performance of queries against the view, as the query underlying the view does not have to be evaluated on each access.

 

Implications of Materialized Views on Database Size and Query Performance

A materialized view on one base table (which acts as a sophisticated index) will always be smaller than the base table in terms of number of tuples and overall size (or equally large in the case of an exact duplicate). However, with multiple materialized views on the same base table that are not mutually exclusive, the overall size of the materialized views in a database schema can be larger than the base tables. The more materialized views have been added for performance reasons over time, the more storage space is taken up by redundant data.

The drawbacks of a materialized view can thus be summarized as follows:

  • Reduced throughput due to the overhead on each update, insert, or delete.
  • Increased storage space for the materialized redundant data.

 

This has to be weighed against the potential impact on performance. The following calculations will show that the shift to in-memory technology diminishes the difference in performance between materialization and on-the-fly calculation, making the former much less worthwhile.

 

Let us assume that the base table contains n tuples, of which a given view selects m through its selection condition. These m tuples would be stored redundantly in a materialized view. The ratio n/m describes the selectivity of the query: the higher this factor, the more selective the query is. Any query that accesses the view will usually apply further selections on top.

 

Two (inter-related) factors influence the performance impact of a materialized view for such queries on an in-memory column store. The impact will be even larger when compared to a traditional, disk-based row store.

  1. Already materialized result: The materialized view has already applied the selection criteria against the base table and thus queries accessing the materialized view do not perform the column scans that identify the m tuples of the view out of all n tuples of the base table again.
  2. Smaller base for selections: The additional selection of queries directly operates on the smaller set of records, as the result of the view has been physically stored in the database. That is, the necessary column scans operate on attribute vectors that contain entries for m instead of n tuples. The smaller input relation influences performance proportionally to the selectivity factor n/m.

 

In both cases, the extent of the performance impact of a materialized view depends on the ratio of n to m. On an abstract level, the operations necessary for a query with and without materialized view can be compared as follows – again, both times looking at an in-memory database:

  • Without a materialized view, the response time will be proportional to n, as all full column scans will operate on attribute vectors with n entries.
  • With a materialized view in place, the response time of a query will be proportional to m, the smaller number of entries contained in the materialized view.

 

Influence of Selection Criteria

 

In addition to the number of tuples in base table (n) and view (m), let us furthermore assume that the selection of the view depends on c different columns. Any query that accesses the view may apply further selections on top, taking into account d columns for the selection; e of these columns have not already been part of the initial view selection.

 

With regard to the two factors outlined in the main part, the selection criteria then have the following effect:

  1. Already materialized: Assuming that independent, possibly parallel column scans are the fastest access path due to the data characteristics, the materialization already covers the scans over the c columns, each with n entries, that are part of the view selection.
  2. Smaller base: With a materialized view, the d additional selections of queries require d column scans on columns with m entries, instead of n entries without materialized views.

 

When now comparing the situation with and without materialization, it has to be kept in mind that in the absence of a materialized view, some of the additional selections overlap with the view selection criteria and can be combined into a single column scan. Hence, only e additional scans besides the c attributes are necessary (but, of course, on a larger set of data).

 

Query against materialized view: d column scans, each with m entries (assuming independent, possibly parallel access) – proportional to d × m.

Query without materialized view: (c+e) column scans, each with n entries – proportional to (c+e) × n (where c+e ≥ d and n ≥ m).

 

For deciding whether to materialize a certain view, the difference in the number of columns to consider for the selection (d vs. c+e) is significantly smaller and thus has less influence on performance than the difference in the number of entries to scan (n vs. m). In turn, the selectivity factor remains the most important parameter.

In summary, the performance will only improve by the selectivity factor n/m. The more detailed calculations in the side bar also take into account the selection criteria and show that the selectivity still is the most important factor.


In addition to restricting the number of tuples to consider, a materialized view may also include only a subset of the base columns. However, for the performance of queries in a columnar store it does not matter how many columns from the base relation are projected into the materialized view: in contrast to a row store, each column is stored entirely separately. Adding more columns to the materialized view does not impact the performance of any query on it, as long as the query explicitly lists the columns in its projection (which should be the case for all queries, as SELECT * queries are detrimental to performance in row and column stores alike, besides other disadvantages such as obscuring which columns the source code actually depends on). Duplicating more columns does, of course, increase the storage size.

In general, a materialized view should encompass all columns that are relevant to the use case in order to increase its usefulness, because a query can only be answered from the materialized view if all required columns have been materialized. In turn, keeping the redundant data gets more costly in terms of required storage and complexity of modifications.
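
A small illustration of why the projection of the query matters more than the number of stored columns, again using the hypothetical open-items copy from above:

    -- Explicit projection: only the listed columns are scanned and returned,
    -- no matter how many columns the materialized view carries.
    SELECT doc_id, amount
      FROM open_items_mv
     WHERE customer_id = '4711';

    -- SELECT * touches every stored column of the qualifying rows and hides
    -- which columns the consuming code really depends on.
    SELECT *
      FROM open_items_mv
     WHERE customer_id = '4711';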

 

Decision Support – To Materialize or Not To Materialize?

The linear cost model described above has long been used in materialization discussions and has been confirmed experimentally (see Harinarayan, Rajaraman, and Ullman: “Implementing Data Cubes Efficiently”; in: Proceedings of SIGMOD 1996). It is especially suitable for columnar in-memory database systems, because these store the entries of each column sequentially.

 

The first step when deciding whether to materialize or not in a columnar in-memory database thus consists of analyzing the selectivity of the query underlying the view. Based on the above, a materialized view may be reasonable performance-wise only if the following two criteria are fulfilled:

  1. Absolute measure: Does the performance without materialization not meet expected requirements?

    In an in-memory database system such as SAP HANA, queries run much faster than in a traditional database system (see Real HANA Performance Test Benchmarks). This means that many queries with previously bad performance perform sufficiently fast in an in-memory database and therefore require no further optimizations (such as materialization). For example, imagine a view on a table with 1 billion line items. Each entry in the column of the selection criterion takes up 2 bytes (after dictionary compression). Scanning the whole column of 1907 MB takes less than half a second using a single core, assuming a memory processing speed of 4 MB per ms per core (1907 MB divided by 4 MB/ms per core = 477 ms per core). Even with only four cores, which is nowadays commodity hardware, 8 different attributes could be scanned in parallel in still under a second without any materialization (the arithmetic is spelled out after this list).

  2. Relative measure: Is the performance with materialization significantly better than without?

    Even if according to the absolute considerations a speed-up would be beneficial, the performance would still have to be compared and the potential performance advantage traded off with the disadvantages of materialization (mostly lowered throughput and increased database size).

    The performance savings will be proportional to the selectivity factor n/m. If m is not orders of magnitude smaller, but for example only 10% of the base size, materializing will thus not yield significant savings. Instead, other means to increase the performance would be necessary.

    The additionally required storage is proportional to m/n-th of the base table. A large share of columns will typically be replicated in this scenario in order to not restrict the usefulness of the materialized view. For example, the materialized view of open customer items in SAP ERP Financials (BSID) replicated half of the columns of the accounting document line items table BSEG.
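
For reference, the figures used in criterion 1 above work out as follows (same assumptions as stated there: 10^9 line items, 2 bytes per dictionary-compressed entry, a scan speed of 4 MB per ms per core, four cores, eight attributes):

    $10^{9} \times 2\,\mathrm{B} \approx 1907\,\mathrm{MB}$
    $t_{\mathrm{scan}} = 1907\,\mathrm{MB} \div 4\,\mathrm{MB/ms} \approx 477\,\mathrm{ms}$ per core
    $t_{8\ \mathrm{attributes},\ 4\ \mathrm{cores}} = (8 \div 4) \times 477\,\mathrm{ms} \approx 954\,\mathrm{ms}$, still under one second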

 

In summary, the need for materialized views as described above vanishes with in-memory columnar databases. Materialization is simply not needed to provide fast access similar to an index. Figure 2 (repeated from above) highlights why eliminating materialized views is preferable now: the impact on response times compared to accessing a materialized view is less significant because in-memory technology reduces the overall response times. This is done in a non-disruptive way by instead providing a non-materialized compatibility view that represents the same query as the former materialized view, but is calculated on-the-fly. Applications seamlessly access this virtual view without requiring any modifications. We already explained the topic in a corresponding chapter of our last blog post and will dive deeper in a future blog post.
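
In terms of the illustrative schema used throughout this post, the compatibility-view idea boils down to the following sketch (names and columns are simplified stand-ins, not the real SAP Simple Finance definitions):

    -- Drop the redundant table and replace it by a view with the same name
    -- and the same columns, so existing queries keep working unchanged.
    DROP TABLE open_items_mv;

    CREATE VIEW open_items_mv AS      -- same name as the former materialized table
      SELECT doc_id, customer_id, amount
        FROM line_items
       WHERE status = 'OPEN';         -- now evaluated on-the-fly on each access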

 

blog_simplefinance_v2.1_figure1-cropped.png

Figure 2: Replacing Materialized Views with on-the-fly calculation

 

The break-even point at which a materialized view becomes beneficial for performance reasons is reached much later in terms of the selectivity of the underlying query. For view queries with low selectivity, a materialized view constitutes almost pure overhead because the performance without materialization is nearly the same and, moreover, acceptable in absolute terms. The benefit of materialization gradually increases with the selectivity. However, the benefit in terms of performance – depicted in Figure 3 below as the distance between the lines of the in-memory scenario – has to be balanced against the cost.

blog_simplefinance_v2.1_figure2.png

Figure 3: Query performance depending on selectivity

 

Not relying on a materialized view improves flexibility, increases the transactional throughput, lowers complexity, and reduces storage costs. Additionally, the performance impact of materialization as experienced by users diminishes with in-memory technology. The effect on the break-even point beyond which the benefit of a materialized view outweighs its costs is depicted in the following Figure 4. With the move from traditional, disk-based database systems to in-memory systems, even the most selective queries do not sufficiently benefit from materialization to outweigh the costs.

blog_simplefinance_v2.1_figure3.png

Figure 4: Shift of break-even point of materialization thanks to in-memory database

 

The above reasoning also holds true when looking at the complexity of queries instead of (or in addition to) selectivity: even for the most complex queries, the performance benefit of materialization no longer outweighs the costs.

 

The case of SAP Simple Finance demonstrates these points in more detail, as SAP Simple Finance removes materialized views in an entirely non-disruptive manner. It demonstrates that the above calculations on the feasibility of removing materialized views indeed apply in practice. In an example SAP system, BSID (open Accounts Receivable line items) contains roughly every 300th item from the base table BSEG. Even for this already moderate selectivity, removing the materialized view has proven feasible. Each query on BSID now transparently accesses the corresponding compatibility view, so that the entire result is calculated on-the-fly.

 

The second building block of the removal of materialized redundancy in SAP Simple Finance is the replacement of materialized aggregates, which we will discuss in the next blog post.

10 days before SAP TechEd, Steve Lucas called and asked if we could replicate the Wikipedia page views demo that Oracle produced during their annual OpenWorld in early October. For those who haven't seen it, it is a download of the 250bn rows of Page view statistics for Wikimedia projects, which have been stored in hourly files since 2007. People always ask how we got the same dataset - it's publicly available at the link above.

 

There were two real challenges: first, my math showed that we needed roughly 6TB of DRAM just to store the data, and second, we had to download, provision, and load 30TB of flat files in just 10 days, and replicate the app.
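
As a rough sanity check of that 6TB figure against the other numbers in this post (30TB of flat files, 250bn rows), the implied compression works out to about 5x, or roughly 24 bytes per row in memory:

    $30\,\mathrm{TB} \div 6\,\mathrm{TB} \approx 5\times$ compression
    $6\,\mathrm{TB} \div (250 \times 10^{9}\ \mathrm{rows}) \approx 24\,\mathrm{B}$ per row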

 

To solve the first problem, the folks at SGI offered to build an SGI UV300H scale-up appliance with 32 sockets, 480 cores and 12TB of DRAM. This configuration would normally come with 24TB of DRAM but we didn't need that much for this demo, so we downsized. Once I knew we had this appliance secured, any concerns about performance evaporated, because I knew this appliance could do 1.4 trillion scans/sec, which is around 5x what Oracle demonstrated on their SuperCluster.

 

For the second problem, the folks at Verizon FIOS came out the next day and upgraded equipment so we could get internet fast enough to get the data down onto USB2 hard disks, which we promptly shipped to SGI's office in Illinois. Thanks Verizon!

 

This would be a great time to watch the video and see what we built in 3 days!



Response time at the end-user

 

As Steve rightly points out, there are some super-smart people at Oracle, but the first thing that got me about their demo was that the response time on the web interface seemed to be quite slow: 4-5 seconds. Despite this they claim sub-second response times, so I assume they are measuring response time at the database and not at the end user.

 

For the HANA example, we look at performance in Google Chrome Developer Tools because that's what users experience - the time from button click to graph. And because HANA is an integrated platform, we see performance that - to my eye - crushes Oracle's $4m SuperCluster with a system at a fraction of the cost and complexity.

 

In my testing, we regularly saw 300-400 ms response times, but we sought to mimic how customers use systems in the real world, so we ran the SGI system in their lab and connected from the laptop in the keynote theatre over the internet - that's over 1750 miles away. Even at the speed of light in fiber that distance costs roughly 30 ms round trip, and real routing pushes it closer to 50 ms, so raw physics has an impact on our demo performance!
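
For the curious, a back-of-envelope latency estimate (assuming signal propagation in fiber at roughly two-thirds of the speed of light, about 200,000 km/s):

    $1750\ \mathrm{mi} \approx 2816\ \mathrm{km}$
    $t_{\mathrm{one\ way}} \approx 2816\ \mathrm{km} \div 200{,}000\ \mathrm{km/s} \approx 14\ \mathrm{ms}$
    $t_{\mathrm{round\ trip}} \approx 28\ \mathrm{ms}$, before any routing or processing overhead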

 

Simplicity and Agility

 

HANA has a number of features that make a demo like this possible in a short period of time, and those features are just as useful to developers in the real world.

 

First, almost no model optimization is required. The model design was completed in a few minutes. This is very significant - some databases are very sensitive to model design, but it was just necessary to follow simple best practices on HANA.


Second, HANA self-optimizes in several critical ways. For a start it automatically defines a table sort order and sorts the columns according to this. It will also define (and re-define) the best compression algorithm for the exact data in each column. When the database is quiet, you will often see little background jobs that optimize tables - and table sizes will decrease.
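
As a quick illustration of this self-optimization at work, here is a sketch assuming the standard SAP HANA monitoring view M_CS_COLUMNS and its COMPRESSION_TYPE column, as available in current revisions; the schema and table names are hypothetical:

    -- Inspect the compression HANA has chosen per column of a table.
    SELECT column_name, compression_type, memory_size_in_total
      FROM m_cs_columns
     WHERE schema_name = 'WIKI'        -- hypothetical schema
       AND table_name  = 'PAGEVIEWS';  -- hypothetical table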

 

Third, HANA information views allow you to graphically define models, predictive algorithms and other sophisticated data access mechanisms. These allow you to control how end-users access the information.

 

If you contrast this with Oracle 12c In-Memory, Oracle is a real pain in the butt. You have to define compression and in-memory settings for every table and column, and you have to ensure the row store is sorted, because the column store can't sort itself (column store caches are built on start-up as a copy of the row store). It is a maintenance headache.
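
To make the contrast concrete, here is a rough sketch (table and column names are hypothetical; the Oracle statement reflects the documented 12c In-Memory syntax, the HANA statement a plain column table, which is in-memory and dictionary-compressed by default):

    -- Oracle 12c In-Memory: opt in per table and choose compression explicitly.
    ALTER TABLE pageviews INMEMORY MEMCOMPRESS FOR QUERY HIGH PRIORITY HIGH;

    -- SAP HANA: a column table needs no such settings.
    CREATE COLUMN TABLE pageviews (
      page_title  NVARCHAR(256),
      view_hour   TIMESTAMP,
      view_count  BIGINT
    );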

 

HANA as an integrated platform

 

The most significant benefit that HANA brings for these scenarios is that it collapses all the layers of the app into one in-memory appliance. The database, modeling, integration and web layers all sit within the HANA software stack and are one piece of software that comes pre-configured out of the box. That's one of the reasons why we can build a demo like this in just a few days, but it's also the reason why it is so screamingly fast.

 

This is a pretty big dataset, so we see 400-500 ms response times, but for smaller datasets we often get 10-30 ms response times for web services at the browser, and that provides what I would call an amazing user experience.

 

HANA's web server includes a variant of the OpenUI5 SDK and we used this to build the apps. It provides a consumer-grade user experience and cuts the build time of complex apps.

 

Final Words

 

Building a demo like this in 10 days was a logistical feat by any standards, but I don't think we could have done it on a database other than HANA. The agility, simplicity and performance of HANA are what made this possible at all. The integrated platform aspects of HANA meant that it was possible not only to show HANA providing a differentiating user experience, but also to extend the demo with predictive algorithms in the short time available.

 

Since we're passionate about openness, we've made it possible for you to reproduce the demo on your own HANA Cloud instance: Build your own Wikipedia Keynote Part 1 - Build and load data. In addition, we'll be opening the kimono on the technical details of this demo in the coming weeks.

It was around 5 PM when I saw the dusk-painted ceiling of the Grand Canal Shoppes as the escalator reached the second floor of the Venetian casino. After a day of sessions, demos, and a Star Trek-inspired keynote, I had forgotten that I was in Vegas; as surprising as it sounds, I had been engulfed by SAP TechEd && d-code. Then again, can you blame me?

IMG_1400[1].JPG

Day one was kicked off by Bjorn Goerke’s keynote whose “One truth, one platform, and one experience” quote resonated with the audience.

keynotetruth.JPG keynotetruthholger.JPG

    

Other announcements that garnered attention were the confirmation that the SAP HANA SPS09 release would be available by Christmas, the availability of the SAP Simpler Choice DB Program, and the news that SAPUI5 is now OpenUI5 - in keeping with SAP's stated goal of becoming an open company that embraces open source.


keynoteopen1.JPG keynoteopen2.JPG

    

Following the keynote, the show floor was inundated with guests. The show floor consisted of multiple areas including the Platform and Technology Showcase, SAP CodeJam and Code Review Area, Hacker's Lounge, Expert Networking Lounge, and Product Roadmap Q&A. We also had SAP partners exhibit in the exhibitor area. Here are highlights of days 1 and 2.

 

IMG_1230[1].JPG IMG_1282[1].JPG

IMG_1304[1].JPG IMG_1293[1].JPG

IMG_1363[1].JPG IMG_1376[1].JPG

You can find more pictures of the event on SAP HANA and SAP Technology Facebook. Thanks for being part of the first two days of SAP TechEd && d-code, we still have two more to go so please stay tuned!

HANA Promo blog.jpg

The best stories are told by customers themselves.


The following eBook, compiled from stories by Bloomberg and Forbes, provides insight into the strong momentum SAP HANA is gaining in the market, illustrated via SAP customer stories.

 

Many organizations are running and reaping the benefits of SAP HANA including: Adobe, Alliander, ARI, CareFusion, Commonwealth Bank, City of Boston, City of Cape Town, ConAgra, eBay, EMC, Florida Crystals, Globus, HP, HSE24, Johnsonville, Kaeser Compressors, Mercedes-AMG, Norwegian Cruise Line, Nomura Research Institute, National Football League (NFL), Maple Leaf Foods, Southern California Edison, and T-Mobile.

 

Each of the 23 case studies in this eBook provides a complete overview of the SAP customer; the customer's top objectives; the solution implemented; and the key business and technology benefits of each SAP customer engagement.

 

The customer case studies feature key innovations such as:

 

  • SAP Business Suite powered by SAP HANA
  • SAP Business Warehouse powered by SAP HANA
  • Big Data
  • SAP HANA Applications

 

Click here to view the eBook today.


In addition, please check out the SAP HANA Use Case Map.  This self-service interactive PDF will help you explore real-world customer use cases applicable to your own business needs. 


Download the PDF here.


Try Simple.  Buy Simple.  Run Simple.

SAP has new offers to get you quickly realizing the benefits of SAP in-memory and data management solutions.   Today, at TechEd && d-code Las Vegas, we announced the availability of the SAP Simpler Choice DB Program.  This program is designed to make it easy for you to adopt SAP in-memory and data management solutions through a range of compelling tools and offers.    


Here's how:


Try Simple: We’ll get you started for free

Try Simple.jpg

 

SAP data management solutions change the cost equation through simplification. They help save costs on hardware and software, as well as reducing the labor required for administration and development. Now, with the Try Simple program, SAP provides the resources to 1) help you assess your current IT landscape complexity, 2) discover what it's costing you, and 3) ascertain where you can save time and resources – enabling you to drive new innovations.

Offers:

  • SAP Industry Value Engineering Services will engage with you in a benchmarking survey to help estimate how SAP databases can significantly reduce the TCO associated with managing data and dramatically simplify IT landscapes
  • Landscape Assessment Services for SAP Cloud (HANA Enterprise Cloud) will help you evaluate and assess the benefits of cloud application deployments
  • SAP Database Trial offers for cloud and on-premise deployments:
      • SAP ERP powered by HANA Trial
      • SAP CRM powered by HANA Trial
      • SAP BW powered by HANA Trial
      • SAP HANA on AWS Test Drive
      • SAP ASE Developer Edition on AWS
      • SAP ASE Developer Edition Download
      • SAP HANA Cloud Platform Trial
      • SAP IQ express download or full-use trial

Buy Simple: We’ll protect your investment

Buy Simple.jpg

SAP has simplified licensing terms to allow you to mix and match SAP data management products for deployment in any SAP application scenario – providing greater protection for your SAP database investments as your needs evolve.

 

  • Migration services are provided and compelling offers delivered to lower the risk and cost of a database migration
  • Flexible deployment options are delivered, whether on premise or in the cloud
  • Simpler licensing terms and complete protection for SAP database investments are provided, which also evolve as your business requirements advance

 

Run Simple: We’ll help you migrate

Run Simple.jpg

SAP lowers the risk of migrating to SAP databases — on premise or in the cloud — with SAP services and other compelling offerings.

  • Lower the cost and risk of migration via services credit for database migrations
  • Reduced maintenance costs during the period of migration, so you can fully test the new environment

 

Ready to get started?  Want to learn more?  Please contact your AE or complete the form to have an SAP representative contact you.

