
On April 10th this year we had an extremely successful HANA Group event at SAP Palo Alto called ‘HANA Today’ with over 260 attendees globally. Now we want to repeat that and do it better.


Due to strong pent-up demand in the group for news on IoT (Internet of Things) and IoE (Internet of Everything) developments, we are holding our ‘HANA4IoT’ event on November 11th, 2014, again at SAP Palo Alto, CA, in the COIL Lab, Building 1. Don't miss this upcoming event!


Our sponsor this time is Cisco, the global leader in IoT communication and network devices. Cisco connects the unconnected with an open-standard, integrated architecture from the cloud to end devices, with exceptional reliability and security.  Cisco is an SAP-certified HANA hardware partner and now the fastest-growing server provider for SAP HANA with its UCS servers.  Cisco is also an SAP HANA customer, reaping the benefits of SAP HANA to give its sales executives dynamic insights.  So we are in great company for topics on IoT.   The event, as before, will run from 1 PM to 5 PM, same spot, followed by a one-hour Happy Hour for networking, with light snacks and drinks.


You need to do two things immediately:


FIRST: You need to be a member of the SAP HANA Group for this event. This is SAP’s official SAP HANA social networking group. If you are not yet a member, please register here: In-Memory SAP HANA | LinkedIn


SECOND: REGISTER early, for two reasons. First, we only have 70 seats at the COIL Lab in SAP Palo Alto, so confirmation is priority-based; confirmation for physical attendance will be sent prior to the event. Second, the SAP Connect webcast link will only be sent to registered attendees. So whether you join us physically or remotely, you can only attend by registering.

 

URL: https://docs.google.com/forms/d/15a6WT0U9OGngLsCV02HIsptT_H_XmiDwuSXjSJAQIDg/viewform?usp=send_form

 

STAY TUNED FOR MORE DEVELOPMENTS, and plan to attend live at Palo Alto or remotely from wherever you are on the planet (which is getting smaller every day). Details will be communicated only to registered attendees from this point forward.


FINALLY: Attend the event, physically at Palo Alto or remotely from anywhere on the globe via the online connect session. Instructions (URL) for joining online will be sent with your registration confirmation.

 

Here is our AGENDA: 

 

Start      End        Topic                                                          Who
12:30 PM   1:00 PM    Arrival / Registration / Coffee                                Registration
1:00 PM    1:15 PM    Welcome and Kickoff                                            Scott Feldman / Hari Guleria
1:15 PM    1:45 PM    Keynote: SAP HANA Strategy                                     Prakash Darji
1:45 PM    2:15 PM    SAP IoT Strategy                                               Yuvaraj Athur Raghuvir
2:15 PM    2:45 PM    SAP HANA SPS 9 & Cloud Updates                                 SAP Product Management
2:45 PM    3:00 PM    Coffee and Drinks Break
3:00 PM    3:30 PM    HANA 4 IoT: Competitive Differentiator                         Hari Guleria
3:30 PM    4:00 PM    Suite on HANA: Current Developments                            Amr El Meleegy
4:00 PM    4:30 PM    Smart Mining: HANA4IoT Demo                                    Cisco
4:30 PM    5:00 PM    SAP HANA Q&A Session with Speakers                             (Speakers)
5:00 PM    5:15 PM    Wrap-Up: Floor Drawings and Prizes (must be present to win)
5:15 PM    6:00 PM    Happy Hour: Networking with Wine, Beer and Bites



Any questions, contact: hari.guleria@pridevel.com or scott.feldman@sap.com

Follow us on Twitter:  @sfeldman0 @HariGuleria @SAPInMemory

Keep an eye out for: #SAPHANA and #IoT



We all like innovation. We always see an opportunity for improvement, whether it’s in a product or in a business process. In fact, the economy is based on a permanent striving for change, and the worst thing for it is stagnation. From time to time we experience massive technology changes, and they are disruptive, sometimes very disruptive.


Think about ocean liners replaced by jets, break-bulk freighters by container ships, landline phones by cellular phones, disk databases by in-memory databases. The replacement technology allows for a completely different schedule, cost, and flexibility, or, in the case of databases, new business processes. What do we do with the existing infrastructure? Can it continue to be used? Can it be refurbished?


What happened to the ocean liners? They became hotels or cruise ships, but only for a short time. Yes, some freighters were converted in the early days of container logistics, but again they didn't fit in the long run. And how did our lives change with cellular phones? People no longer remember the lines in front of a telephone booth at the airport or on a busy street. So the changes will happen; they are part of the innovation process. To fight the changes is counterproductive: it costs extra energy and may not even work.

 

Despite all that, SAP talks about non-disruption as a strategy. Other IT companies claim they promised full upward compatibility 30 years ago and can show that they have kept the promise. Let’s have a look at where these strategies make sense and where they will fail.

There are currently a few mega trends in IT, changing the way it supports business:

  1. SaaS, applications running in the cloud and offered as a service. Completely new are the generic shared-service ones, like marketplaces, business networks, etc., which provide the same set of services to many clients while connecting trading partners. More traditional enterprise applications are also offered as a complete service and run for each client separately, while sharing some system services through multi-tenancy concepts to reduce cost, much like the shared services in an apartment building.
  2. The IoT (Internet of Things) will flood us with data coming from a myriad of sensors reporting the well-being or problems of expensive machinery. What was already standard yesterday for aircraft, we will soon see in drilling machines or even toasters.
  3. The sprawling social networks have become a basic part of our lives and as such give a testimonial about what we like and don’t like (remember thumbs up/down), and have become a vital source of information for business.
  4. On a much smaller scale, because it’s happening inside the applications, we see in-memory databases replacing disk-based ones at a rapid pace.

 

 

How does SAP play out the ‘non-disruption’ strategy when faced with these mega trends? To deal with textual data, digest billions of sensor messages, and operate as a SaaS application in the cloud, SAP opted for a completely new platform for its enterprise applications. HANA is not only an in-memory database using a columnar store instead of a row store; it also offers libraries for business functions, predictive math algorithms, OLTP and OLAP functionality in one system, and distributed data management for data marts or IoT solutions.

 

 

Technology-wise, HANA is truly disruptive, but that doesn’t mean everything has to change, at least not instantly. Let’s have a look at SAP's ERP system. It has been a success story for over 20 years; thousands of companies have invested billions to set up the system, maintain it over the years, and develop customer-specific add-ons for competitive advantage. There is tremendous business value captured in the system configuration and the captured data. SAP kept both intact in moving forward from anydb to HANA. No data will be lost and the configuration parameters stay intact. Thanks to one of the great standards in IT, the SQL interface, all programs can continue to run unchanged. That’s the first step and guarantees a smooth transition from anydb to HANA. But HANA is disruptive, and the unbelievable speed improvements allow us to drop some concepts of the nineties that were introduced to guarantee short response times. In sERP, SAP could show that the transactional update of hierarchical aggregates, as introduced in the days of MIS (management information systems), is not necessary any more. Instead, any kind of aggregation for reporting or analytical purposes now happens on demand. The various replicas of transactional data in different sorting sequences are also no longer a performance benefit. Once the system runs on HANA, all the programs accessing those data objects have to be changed. But this happens semi-automatically: the old data structures are replaced by SQL views with identical functionality and a similar name, and the programs continue to run without any further change. Now we can drop the redundant data structures and gain a 2x reduction in the overall data footprint.


Now the question is: shall we stop here, or do we continue to take advantage of the new properties of a columnar in-memory store? The traditional aggregation rules, implemented as programs maintaining the totals, are more than 20 years old and no longer very important. Many different selections, aggregations, comparisons, and predictions are now possible because the transactional data is kept at the lowest level of granularity and all further processing happens in algorithms on demand, no longer as part of a transaction (see the sketch below). New programs will be added and supersede the old functionality, but they come in parallel and as such continue to support the ‘non-disruptive’ paradigm. There is a disadvantage with this strategy: it takes more time. But it’s worth it. All customers can move gradually forward, keeping major accomplishments of the past unchanged. A similar approach is being used for the introduction of the new UI. The Fiori apps drop in in parallel, and the users or user groups have time to adjust to the new layout and interaction model.
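
As a minimal illustration of aggregation on demand, the following SQL sketch (hypothetical table and column names, not actual SAP objects) computes customer totals per fiscal period directly from the line items instead of reading a transactionally maintained totals table:

    -- Totals are derived when asked for, not maintained inside each transaction
    SELECT customer_id,
           fiscal_period,
           SUM(amount) AS period_total
    FROM   line_items
    GROUP  BY customer_id, fiscal_period;

Because the column scan and the aggregation run in memory, such a statement returns in interactive time even over very large tables, which is what makes dropping the maintained totals feasible.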

 

More radical changes come as an option. All enterprise applications will go to the cloud and be managed by the service provider. System maintenance will accelerate significantly. The dramatic reduction in the complexity of the data model and the removal of the technically more critical update tasks led to a system in which data inserts and read-only data retrieval dominate. When fewer data changes are happening in a system, its stability and availability increase. Most of the application components are by definition read-only and can therefore join the system at any time. The dependency between code and data is now at a completely different level. This is a prerequisite for successful use as SaaS.

 

It may sound surprising that the change to the HANA platform is the basis for these advances, but that was always the idea of platforms: they offer services all applications need and shield them from the ongoing changes in technology. The final product, sERP, looks and feels fundamentally different and solves problems which were unthinkable yesterday, while still carrying the business configuration and the enterprise data into the future nearly without changes.

 

The reduction of the complexity of transactional data has even more dramatic consequences. We now see a reduction in the data footprint of 10-20x while keeping all data in memory. If we split data into actual (necessary to conduct and document business) and historical (no changes allowed any more), we can further optimize the database processes and reduce the amount of data kept in memory.

 

There were two reasons to split enterprise systems into ERP, CRM, SRM, SCM, PLM, and HCM as transactional systems plus a separate business data warehouse. First, the sheer size of the systems outgrew single-computer capacities, so we split them up. Second, once we had independent subsystems, we could develop them at different speeds using different technologies. With them all moved to a brand-new platform, the HANA platform, neither the size nor the speed argument is valid any more. All systems can be reintegrated now, eliminating the enormous data transfer between them. The management of one single system with the above components is many times easier and less costly, especially considering the advances in maintenance strategy mentioned above. The separate data warehouse still has value, but much of the operational reporting and analytics can now come back to the transactional system. Capacity concerns are no longer valid; the replication of the actual data partition is the answer and, on the other hand, also contributes to HA (high availability).

 

Running in the cloud, it becomes much easier to integrate the simplified business suite with other services in the cloud. The future enterprise solutions will make use of all the generic business services like Ariba, Concur, Fieldglass, SuccessFactors and many others. The last question is: will everything eventually run in the cloud? No, but it will run first in the cloud. There is no principal limitation preventing cloud software from running on premise. The financial terms may be different and the maintenance rhythm will be different, but all innovation will finally spill down to the on-premise versions, even, where technically viable, to the ones on non-HANA platforms.

 

We do mitigate the consequences of disruptive innovation, but we do not carry on the past forever, just as nobody boards an ocean liner any more to go to New York, lives without a cellular phone, or ships cargo as discrete items. We carry forward the established business processes, complement them with new ones, and finally phase some of them out.

 

By the way, this all should happen without any downtime for the business.

Do you remember all the times you stored the results of a database query in addition to the original data for performance reasons? Then you probably also recall the significant drawbacks that go along with these so-called materialized views: they introduce data redundancy that makes your data model more complex, requires additional database operations to ensure consistency, and increases the database footprint. In this blog post, we demonstrate that, thanks to SAP HANA’s unique in-memory technology, you can simplify your data model by getting rid of materialized views.

 

This is the second part of our deep dive series on SAP Simple Finance, SAP’s next-generation Financials solution. In the first part, we show how SAP Simple Finance uses the capabilities of SAP HANA to simplify Financials and deliver non-disruptive innovation by removing redundancy. This brings significant benefits: the core data model is as simple as possible with two tables for accounting documents and line items, the database footprint shrinks by orders of magnitude, and the transactional throughput more than doubles.

 

The benefits are convincing and SAP Simple Finance demonstrates that it can be done. You may ask yourself how this is technically possible and whether you can take the same approach for your applications by running on SAP HANA. The following paragraphs summarize our answers and the longer article below gives more details. Furthermore, the next blog post in the deep dive series will explore the specific case of materialized aggregates, which refer to redundantly stored aggregation results.

 

The following example shows the motivation for materialized views in traditional database systems: You have an SQL query that selects database rows based on several parameters, for example, all open items for a particular customer. Executing this query against a large base table requires scanning through the whole table of all accounting document line items in order to find the rows that match the selection criterion. In a traditional, disk-based database system, this may be too slow for practical purposes. The alternative is building up a materialized view that explicitly stores the smaller subset of open items and is constantly updated. When querying open items for a particular customer, the database then only needs to scan through the smaller materialized view, resulting in a sufficiently fast response time also on disk-based database systems.
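
To make the scenario concrete, here is a minimal sketch with hypothetical table and column names (not the actual SAP data model): the on-the-fly variant scans the full line-item table, while the traditional remedy is to maintain a redundant open-items table and query that instead.

    -- On the fly: scan the full base table for one customer's open items
    SELECT doc_id, item_no, amount
    FROM   line_items
    WHERE  customer_id = 'C0042'
      AND  status      = 'OPEN';

    -- Traditional remedy: query a redundantly maintained materialized view
    SELECT doc_id, item_no, amount
    FROM   open_items_mat
    WHERE  customer_id = 'C0042';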

In view of the significant drawbacks of materialized views, the goal is to replace materialized views with on-the-fly calculation. The numerous benefits of getting rid of materialization include an entirely new level of flexibility, increased throughput, and simplicity (for more details, see the long article). The costs of doing so are actually minor, as we outline below: in fact, in-memory response times of on-the-fly calculated queries are typically faster than queries against materialized views on a disk-based database. As illustrated in Figure 1, this tips the seesaw in favor of removing materialized views.

 


Figure 1: Replacing Materialized Views with on-the-fly calculation

 

Looking at in-memory database systems only, materialized views are almost never necessary or beneficial, thanks to the superior performance. We show below that in-memory technology shifts the break-even point such that materialization is only beneficial in rare circumstances of highly selective queries. A single core of a CPU is able to scan 1 billion line items in less than half a second. In the same time, a disk-based system could only access about 50 random disk locations (based on a latency of 10 ms).
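
As a rough back-of-the-envelope check, using the figures given later in this post (2 bytes per column entry after dictionary compression, a scan speed of about 4 MB per ms per core, and 10 ms per random disk access):

    \[ 10^{9} \times 2\,\mathrm{B} \approx 1907\,\mathrm{MB}, \qquad \frac{1907\,\mathrm{MB}}{4\,\mathrm{MB/ms}} \approx 477\,\mathrm{ms}, \qquad \frac{477\,\mathrm{ms}}{10\,\mathrm{ms\ per\ seek}} \approx 48\ \text{random disk accesses in the same time} \]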

 

In line with this reasoning, SAP Simple Finance took the opportunity offered by SAP HANA and removed materialized views from the data model: tables such as all open Accounts Receivable line items (BSID) have been replaced non-disruptively by compatibility views calculated on-the-fly (for details on all changes, see the first part of this series). The same applies to materialized aggregates such as total balance amounts for each customer per fiscal period (KNC1). Hence, in the next part of the series, we continue our deep dive by looking at queries that include aggregation functions and how they can be tackled similarly.

 

This blog post continues after the break with an in-detail look at the points that we have summarized so far. We first look at the concept and maintenance of materialized views. Afterwards, we investigate the implications of materializing views and provide decision support to get rid of materialization.

 

 

 

 

In the following, we first consider in-memory database systems only and the new opportunities they enable when deciding whether to materialize or not. The comparison of in-memory to disk-based database systems is then considered separately. Simply accessing a pre-computed value will always be faster than computing it by running over multiple tuples, even in an in-memory database. The difference is that with the speed of in-memory technology it has now become feasible to dispense with the materialization, because computation on the fly is fast enough in most cases, especially compared to traditional disk-based database systems and typical disk latencies of 10 ms. We investigate the situations where systems can dispense with materializing views or aggregates thanks to the speed of SAP HANA and show that materialized views or aggregates are unnecessary, and thus harmful, in almost all scenarios.

 

The Concept of Materialized Views and Their Maintenance

A view represents the result of a stored query on the database. Essentially, it is a named SQL query that can be queried like any table of the database. In the following, we focus on the case of a single base table with arbitrary selection conditions. The following assumes a query with projection and selection, but does not consider joins of tables as the base relation for reasons of simplicity. We neglect aggregation in this section, so that a view always references a subset of tuples from a base relation according to a query.

 

A materialized view explicitly stores copies of the corresponding tuples in the database (Gupta and Mumick: “Maintenance of Materialized Views: Problems, Techniques, and Applications”; IEEE Data Eng. Bull., 18(2); 1995). In the absence of aggregates, materialized views have the same granularity level as the base tables. If the query also aggregates the items, we speak of materialized aggregates – they will be covered in detail in the next blog post. In contrast to a simple index on a table column, a materialized view describes semantic information, as the selection criteria can be more complex than a simple indexation by one value.

 

If an item matches the condition of the materialized view, those properties of the item that are part of the view’s projection are redundantly stored. Whenever the base tables are modified, it may be necessary to modify the materialized view as well, depending on the modified tuples and the view’s selection criteria. There are several cases to consider:

  • Inserting a new tuple into a base table that matches the criteria of the materialized view requires inserting it into the materialized view.
  • As part of an update of a base table, a change of a tuple’s properties does not only have to be propagated to copies of the tuple (update operation), but may also result in the whole tuple now being newly included or excluded (insert / delete) if the new values of some properties change the value of the materialized view’s selection criterion.
  • When deleting a tuple from the base table, all copies in materialized views have to be deleted as well.

 

In summary, each materialized view leads to additional database operations whenever data in the base table is modified. Instead of just modifying the base table, additional operations are required to keep the redundant data in the views consistent. In a system with several materialized views, each transaction may require several times more modifying database operations than would be necessary to just record the change itself in the base table. This lowers the transactional throughput of the whole system as the number of costly modifying operations increases and locking frequently leads to contention.
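
A minimal sketch of this write amplification, again with hypothetical table names: posting one line item turns into two (or more) modifying statements as soon as a redundant open-items table has to be maintained alongside the base table.

    -- Without materialization: a posting is a single insert into the base table
    INSERT INTO line_items (doc_id, item_no, customer_id, status, amount)
    VALUES (4711, 1, 'C0042', 'OPEN', 199.00);

    -- With a materialized "open items" view, the same transaction must also
    -- maintain the redundant copy (and delete it again once the item clears)
    INSERT INTO open_items_mat (doc_id, item_no, customer_id, amount)
    VALUES (4711, 1, 'C0042', 199.00);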

 

This also applies in case of lazy materialization. A lazy maintenance strategy only modifies materialized views when they are accessed (Zhou, Larson, and Elmongui: “Lazy maintenance of materialized views”; in: Proceedings of VLDB 2007). However, in the typical OLTP workload of an enterprise system, both modifying transactions and reading queries happen so frequently and intermingled that the number of additional operations due to materialization remains the same: almost all transactional modifications will be followed by queries accessing the materialized views that require propagating the modifications.

 

Hence, materialized view maintenance adds to the operational complexity of a database system, requires additional modifying operations, and lowers the overall transactional throughput of the system. Furthermore, there is a cost associated with the additional storage that is required for the redundant data, which can be substantial in size. These drawbacks have to be balanced against the main benefit of a materialized view: the increased performance of queries against the view, as the query underlying the view does not have to be evaluated on each access.

 

Implications of Materialized Views on Database Size and Query Performance

A materialized view on one base table (which acts as a sophisticated index) will always be smaller than the base table in terms of number of tuples and overall size (or equally large in the case of an exact duplicate). However, in the case of multiple materialized views on the same base table that are not mutually exclusive, the overall size of materialized views in a database schema can be larger than the base tables. The more materialized views have been added for performance reasons in the past, the more storage space is taken up by redundant data.

The drawbacks of a materialized view can thus be summarized as follows:

  • Reduced throughput due to the overhead on each update, insert, or delete.
  • Increased storage space for the materialized redundant data.

 

This has to be weighed against the potential impact on performance. The following calculations will show that the shift to in-memory technology diminishes the difference in performance between materialization and on-the-fly calculation, making the former much less worthwhile.

 

Let us assume that the base table contains n tuples, of which a given view selects m through its selection condition. These m tuples would be stored redundantly in a materialized view. The ratio n/m describes the selectivity of a query: the higher this factor, the more selective the query is. Any query that accesses the view will usually apply further selections on top.
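
Written out explicitly (the notation is ours, but it follows directly from the proportionalities stated below), the best-case speedup from materializing is bounded by the selectivity:

    \[ s = \frac{n}{m}, \qquad \frac{t_{\text{without view}}}{t_{\text{with view}}} \approx \frac{n}{m} = s \quad \text{(ignoring the difference in the number of scanned columns)} \]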

 

Two (inter-related) factors influence the performance impact of a materialized view for such queries on an in-memory column store. The impact will be even larger when compared to a traditional, disk-based row store.

  1. Already materialized result: The materialized view has already applied the selection criteria against the base table and thus queries accessing the materialized view do not perform the column scans that identify the m tuples of the view out of all n tuples of the base table again.
  2. Smaller base for selections: The additional selection of queries directly operates on the smaller set of records, as the result of the view has been physically stored in the database. That is, the necessary column scans operate on attribute vectors that contain entries for m instead of n tuples. The smaller input relation improves performance in proportion to the selectivity factor n/m.

 

In both cases, the extent of the performance impact of a materialized view depends on the ratio of n to m. On an abstract level, the operations necessary for a query with and without materialized view can be compared as follows – again, both times looking at an in-memory database:

  • Without a materialized view, the response time will be proportional to n, as all full column scans will operate on attribute vectors with n entries.
  • With a materialized view in place, the response time of a query will be proportional to m, the smaller number of entries contained in the materialized view.

 

Influence of Selection Criteria

 

In addition to the number of tuples in base table (n) and view (m), let’s furthermore assume that the selection of the view depends on c different columns. Any query that accesses the view may apply further selections on top, taking into account d columns for the selection; e columns thereof have not already been part of the initial view selection.

 

With regard to the two factors outlined in the main part, the selection criteria then have the following effect:

  1. Already materialized: Assuming that independent, possibly parallel column scans are the fastest access path due to the data characteristics, the materialization already covers the scans over the c columns, each with n entries, that are part of the view selection.
  2. Smaller base: With a materialized view, the d additional selections of queries require d column scans on columns with m entries, instead of n entries without materialized views.

 

When now comparing the situation with and without materialization, it has to be kept in mind that in the absence of a materialized view, some of the additional selections overlap with the view selection criteria and can be combined into a single column scan. Hence, only e additional scans besides the c attributes are necessary (but, of course, on a larger set of data).

 

Query against materialized view: d column scans, each with m entries (assuming independent, possibly parallel access); cost proportional to d × m.

Query without materialized view: (c + e) column scans, each with n entries; cost proportional to (c + e) × n (where c + e ≥ d and n ≥ m).

 

For deciding whether to materialize a certain view, the difference in the number of columns to consider for the selection (d vs c+e) is significantly smaller and has thus less influence on the performance compared to the difference in the number of entries to scan (n vs m). In turn, the selectivity factor remains most important.

In summary, the performance will only improve by the selectivity factor n/m. The more detailed calculations in the sidebar also take into account the selection criteria and show that the selectivity is still the most important factor.


In addition to restricting the number of tuples to consider, a materialized view may also include only a subset of the base columns. However, for the performance of queries in a columnar store, it does not matter how many columns from the base relation are projected for the materialized view: in contrast to a row store, each column is stored entirely separately. Adding more columns to the materialized view does not impact the performance of any query on the materialized view if the query explicitly lists the columns in its projection (which should be the case for all queries, as SELECT * queries are detrimental to performance in row and column stores alike, besides other disadvantages such as missing transparency of source code). Duplicating more columns does, of course, increase the storage size. In general, a materialized view should encompass all columns that are relevant to the use case in order to increase its usefulness, because the materialized view can only be used by a query if all required columns have been materialized. In turn, keeping redundant data thus gets more costly in terms of required storage and complexity of modifications.

 

Decision Support – To Materialize or Not To Materialize?

The linear cost model described above has long been used in materialization discussions and has been confirmed experimentally (see Harinarayan, Rajaraman, and Ullman: “Implementing Data Cubes Efficiently”; in: Proceedings of SIGMOD 1996). It is especially suitable for columnar in-memory database systems, because these store the entries of each column sequentially.

 

The first step when deciding whether to materialize or not in a columnar in-memory database thus consists of analyzing the selectivity of the query underlying the view. Based on the above, a materialized view may be reasonable performance-wise only if the following two criteria are fulfilled:

  1. Absolute measure: Does the performance without materialization not meet expected requirements?

    In an in-memory database system such as SAP HANA, queries run much faster than in a traditional database system (see Real HANA Performance Test Benchmarks). This means that many queries with previously bad performance now perform sufficiently fast in an in-memory database and therefore require no further optimizations (such as materialization). For example, imagine a view on a table with 1 billion line items. Each entry in the column of the selection criterion takes up 2 bytes (after dictionary compression). Scanning the whole column of 1907 MB takes less than half a second using a single core, assuming a memory processing speed of 4 MB per ms per core (1907 MB divided by 4 MB/ms per core = 477 ms per core). Even with only four cores, which is nowadays commodity hardware, 8 different attributes could be scanned in parallel in still under a second without any materialization.

  2. Relative measure: Is the performance with materialization significantly better than without?

    Even if according to the absolute considerations a speed-up would be beneficial, the performance would still have to be compared and the potential performance advantage traded off with the disadvantages of materialization (mostly lowered throughput and increased database size).

    The performance savings will be proportional to the selectivity factor n/m. If m is not orders of magnitude smaller, but for example only 10% of the base size, materializing will thus not yield significant savings. Instead, other means to increase the performance would be necessary.

    The additionally required storage is proportional to m/n-th of the base table. A large share of columns will typically be replicated in this scenario in order to not restrict the usefulness of the materialized view. For example, the materialized view of open customer items in SAP ERP Financials (BSID) replicated half of the columns of the accounting document line items table BSEG.

 

In summary, the need for materialized views as described above vanishes with in-memory columnar databases. Materialization is simply not needed to provide fast access similar to an index. Figure 2 (repeated from above) highlights why eliminating materialized views is preferable now: the impact on response times compared to accessing a materialized view is less significant, as in-memory technology reduces overall response times. This is done in a non-disruptive way by instead providing a non-materialized compatibility view that represents the same query as the former materialized view, but is calculated on the fly. Applications seamlessly access this virtual view without requiring any modifications. We already explained the topic in a corresponding chapter of our last blog post and will dive deeper in a future blog post.

 


Figure 2: Replacing Materialized Views with on-the-fly calculation

 

The break-even point at which a materialized view becomes beneficial for performance reasons is reached much later in terms of the selectivity of the underlying query. For view queries with low selectivity, a materialized view constitutes almost pure overhead because the performance without materialization is nearly the same and, moreover, acceptable in absolute terms. The benefit of materialization gradually increases with the selectivity. However, the benefit in terms of performance – depicted in Figure 3 below as the distance between the lines of the in-memory scenario – has to be balanced against the cost.


Figure 3: Query performance depending on selectivity

 

Not relying on a materialized view improves flexibility, increases the transactional throughput, lowers complexity, and reduces storage costs. Additionally, the performance impact of materialization as experienced by users diminishes with in-memory technology. The effect on the break-even point beyond which the benefit of a materialized view outweighs its costs is depicted in the following Figure 4. With the move from traditional, disk-based database systems to in-memory systems, even the most selective queries do not sufficiently benefit from materialization to outweigh the costs.


Figure 4: Shift of break-even point of materialization thanks to in-memory database

 

The above reasoning also holds true when looking at the complexity of queries instead of (or in addition to) selectivity: even for the most complex queries, the performance benefit of materialization no longer outweighs the costs.

 

The case of SAP Simple Finance demonstrates these points in more detail, as SAP Simple Finance removes materialized views in an entirely non-disruptive manner. It demonstrates that the above calculations on the feasibility of removing materialized views indeed apply in practice. In an example SAP system, BSID (open Accounts Receivable line items) contains roughly every 300th item from the base table BSEG. Even for this already moderate selectivity, removing the materialized view has been feasible. Each query on BSID now transparently accesses the corresponding compatibility view, so that the entire result is calculated on the fly.
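
As a purely illustrative sketch (heavily simplified, and not the actual SAP-delivered view definition), such a compatibility view derives the open A/R items from the line-item base table on the fly:

    -- Hypothetical, simplified stand-in for the real compatibility view:
    -- customer (A/R) line items without a clearing document are still open
    CREATE VIEW bsid_compatibility AS
    SELECT mandt, bukrs, kunnr, belnr, gjahr, buzei, dmbtr
    FROM   bseg
    WHERE  koart = 'D'
      AND  augbl = '';

Existing programs keep issuing their SELECT statements against the familiar structure and never notice that the result is now computed rather than read from a redundant table.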

 

The second building block of the removal of materialized redundancy in SAP Simple Finance is the replacement of materialized aggregates, which we will discuss in the next blog post.

Ten days before SAP TechEd, Steve Lucas called and asked if we could replicate the Wikipedia page views demo that Oracle produced during their annual OpenWorld in early October. For those who haven't seen it, the dataset is a download of the 250bn rows of page view statistics for Wikimedia projects, which have been stored in hourly files since 2007. People always ask how we got the same dataset - it's publicly available at the link above.

 

There were two real challenges: first, my math showed that we needed roughly 6 TB of DRAM just to store the data; and second, we had to download, provision, and load 30 TB of flat files in just 10 days, and replicate the app.

 

To solve the first problem, the folks at SGI offered to build an SGI UV300H scale-up appliance with 32 sockets, 480 cores and 12 TB of DRAM. This configuration would normally come with 24 TB of DRAM, but we didn't need that much for this demo, so we downsized slightly. Once I knew we had this appliance secured, any concerns about performance evaporated, because I knew this appliance could do 1.4 trillion scans/sec, which is around 5x what Oracle demonstrated on their SuperCluster.

 

For the second problem, the folks at Verizon FiOS came out the next day and upgraded our equipment so we could get internet fast enough to pull the data down onto USB 2 hard disks, which we promptly shipped to SGI's office in Illinois. Thanks, Verizon!

 

This would probably be a great time for you to go ahead and watch the video, so go right ahead to see what we built in 3 days!



Response time at the end-user

 

As Steve rightly points out, there are some super-smart people at Oracle, but the first thing that got me about their demo was that the response time on the web interface seemed to be quite slow: 4-5 seconds. Despite this they claim sub-second response times, so I assume they are measuring response time at the database and not at the end user.

 

For the HANA example, we look at performance in Google Chrome Developer Tools because that's what users experience: the time from button click to graph. And because HANA is an integrated platform, we see performance that - to my eye - crushes Oracle's $4m SuperCluster with a system at a fraction of the cost and complexity.

 

In my testing, we regularly saw 300-400 ms response times, but we sought to mimic how customers use systems in the real world, so we ran the SGI system in their lab and connected from the laptop in the keynote theatre over the internet - that's over 1750 miles away. That distance alone costs roughly 50 ms of network round-trip time, so raw physics has an impact on our demo performance!
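
A rough sanity check of that number, assuming signal propagation in fiber at about two thirds of the speed of light and ignoring routing and switching overhead:

    \[ 2 \times 1750\,\mathrm{mi} \approx 5630\,\mathrm{km}, \qquad \frac{5630\,\mathrm{km}}{200{,}000\,\mathrm{km/s}} \approx 28\,\mathrm{ms} \]

With real-world routing and switching on top, the observed round trip lands in the ballpark of 50 ms before the database does any work at all.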

 

Simplicity and Agility

 

HANA has a number of features that make a demo like this possible in a short period of time, and those features are just as useful to developers in the real world.

 

First, almost no model optimization is required. The model design was completed in a few minutes. This is very significant - some databases are very sensitive to model design, but it was just necessary to follow simple best practices on HANA.


Second, HANA self-optimizes in several critical ways. For a start it automatically defines a table sort order and sorts the columns according to this. It will also define (and re-define) the best compression algorithm for the exact data in each column. When the database is quiet, you will often see little background jobs that optimize tables - and table sizes will decrease.

 

Third, HANA information views allow you to graphically define models, predictive algorithms and other sophisticated data access mechanisms. These allow you to control how end users access the information.

 

If you contrast this with Oracle 12c In-Memory, Oracle is a real pain in the butt. You have to define compression and in-memory settings for every table and column, and you have to ensure the row store is sorted, because the column store can't sort itself (column store caches are built on start-up as a copy of the row store). It is a maintenance headache.

 

HANA as an integrated platform

 

The most significant benefit that HANA brings for these scenarios is that it collapses all the layers of the app into one in-memory appliance. The database, modeling, integration and web layers all sit within the HANA software stack and are one piece of software that comes pre-configured out the box. That's one of the reasons why we can build a demo like this in just a few days, but it's also the reason why it is so screamingly fast.

 

This is a pretty big dataset, so we see 400-500 ms response times, but for smaller datasets we often get 10-30 ms response times for web services at the browser, and that provides what I would call an amazing user experience.

 

HANA's web server includes a variant of the OpenUI5 SDK and we used this to build the apps. It provides a consumer-grade user experience and cuts the build time of complex apps.

 

Final Words

 

Building a demo like this in 10 days was a logistical feat by any standards, but I don't think we could have done it on a database other than HANA. The agility, simplicity and performance of HANA made this possible at all. The integrated platform aspects of HANA meant that it was possible not only to show HANA providing a differentiating user experience, but also to extend the demo with predictive algorithms in the short time available.

 

Since we're passionate about openness, we've made it possible for you to reproduce the demo on your own HANA Cloud instance: see Build Your Own Wikipedia Keynote Part 1 - Build and Load Data. In addition, we'll be opening the kimono on the technical details of this demo in the coming weeks.

It was around 5 PM when I saw the dusk-painted ceiling of the Grand Canal Shoppes as the escalator reached the second floor of the Venetian casino. After a day of sessions, demos, and a Star Trek-inspired keynote, I had forgotten that I was in Vegas; as surprising as it sounds, I had been engulfed by SAP TechEd && d-code. Then again, can you blame me?



Day one was kicked off by Bjorn Goerke’s keynote whose “One truth, one platform, and one experience” quote resonated with the audience.


Other announcements that garnered attention were the confirmation that the SAP HANA SPS09 release would be available by Christmas, the availability of the SAP Simpler Choice DB Program, and that SAPUI5 is now OpenUI5, in keeping with SAP’s stated aim of becoming an open company that embraces open source.



Following the keynote, the show floor was inundated with guests. It consisted of multiple areas, including the Platform and Technology Showcase, SAP CodeJam and Code Review Area, Hacker’s Lounge, Expert Networking Lounge, and Product Roadmap Q&A. We also had SAP partners exhibiting in the exhibitor area. Here are highlights of days 1 and 2.

 


You can find more pictures of the event on SAP HANA and SAP Technology Facebook. Thanks for being part of the first two days of SAP TechEd && d-code, we still have two more to go so please stay tuned!

The best stories are told by customers themselves.


The following eBook, compiled of stories by Bloomberg and Forbes, provides insight into the strong momentum SAP HANA has in the market, illustrated via SAP customer stories.

 

Many organizations are running and reaping the benefits of SAP HANA, including: Adobe, Alliander, ARI, CareFusion, Commonwealth Bank, City of Boston, City of Cape Town, ConAgra, eBay, EMC, Florida Crystals, Globus, HP, HSE24, Johnsonville, Kaeser Compressors, Mercedes-AMG, Norwegian Cruise Line, Nomura Research Institute, National Football League (NFL), Maple Leaf Foods, Southern California Edison, and T-Mobile.

 

Each of the 23 case studies in this eBook provides a complete overview of the SAP customer; the customer’s top objectives; the solution; and the key business and technology benefits of each SAP customer engagement.

 

The customer case studies feature key innovations such as:

 

  • SAP Business Suite powered by SAP HANA
  • SAP Business Warehouse powered by SAP HANA
  • Big Data
  • SAP HANA Applications

 

Click here to view the eBook today.


In addition, please check out the SAP HANA Use Case Map.  This self-service interactive PDF will help you explore real-world customer use cases applicable to your own business needs. 


Download the PDF here.


Try Simple.  Buy Simple.  Run Simple.

SAP has new offers to get you quickly realizing the benefits of SAP in-memory and data management solutions.   Today, at TechEd && d-code Las Vegas, we announced the availability of the SAP Simpler Choice DB Program.  This program is designed to make it easy for you to adopt SAP in-memory and data management solutions through a range of compelling tools and offers.    


Here's how:


Try Simple: We’ll get you started for free


 

SAP data management solutions change the cost equation through simplification. They help save costs on hardware and software, as well as reduce the labor required for administration and development. Now, with the Try Simple program, SAP provides the resources to 1) help you assess your current IT landscape complexity, 2) discover what it’s costing you, and 3) ascertain where you can save time and resources, enabling you to drive new innovations.

Offers:

  • SAP Industry Value Engineering Services will engage with you in a benchmarking survey to help estimate how SAP databases can significantly reduce the TCO associated with managing data and dramatically simplify IT landscapes
  • Landscape Assessment Services for SAP Cloud (HANA Enterprise Cloud) will help you evaluate and assess the benefits of cloud application deployments
  • SAP Database Trial offers for cloud and on-premise deployments:
  • SAP ERP powered by HANA Trial
  • SAP CRM powered by HANA Trial
  • SAP BW powered by HANA Trial
  • SAP HANA on AWS Test Drive
  • SAP ASE Developer Edition on AWS
  • SAP ASE Developer Edition Download
  • SAP HANA Cloud Platform Trial

Buy Simple: We’ll protect your investment


SAP has simplified licensing terms to allow you to mix and match SAP data management products for deployment in any SAP application scenario – providing greater protection for your SAP database investments as your needs evolve.

 

  • Migration services are provided and compelling offers delivered to lower the risk and cost of a database migration
  • Flexible deployment options are delivered, whether on premise or in the cloud
  • Simpler licensing terms and complete protection for SAP database investments are provided, which also evolve as your business requirements advance

 

Run Simple: We’ll help you migrate


SAP lowers the risk of migrating to SAP databases — on premise or in the cloud — with SAP services and other compelling offerings.

  • Lower the cost and risk of migration via services credit for database migrations
  • Reduced maintenance costs during the period of migration, so you can fully test the new environment

 

Ready to get started?  Want to learn more?  Please contact your AE or complete the form to have an SAP representative contact you.


Introducing SAP HANA SPS 09: The Platform for All Applications


SAP HANA SPS 09 provides numerous exciting functionalities developed by SAP, as well as additional capabilities provided by SAP co-innovation partners.  This new release renews our commitment to accelerating cloud adoption, providing the most powerful platform for both transactional and analytical applications, creating instant value from big data assets, and enabling co-innovation with our customers and partners.


When it comes to cloud enablement, SAP HANA SPS 09 lets you run multiple independent SAP HANA databases in one single SAP HANA system (SID) and manage them as a single unit. We call this functionality multi-tenant database containers, and it is specifically designed to simplify database administration while maintaining strong separation and isolation of data, users, and system resources among individual tenant databases. This innovation allows you to dramatically lower the TCO of your installations, whether on premise or in a cloud environment. Additionally, co-innovations with partners such as IBM, Hitachi and Unisys offer a broad array of secure and professionally managed SAP HANA Enterprise Cloud (HEC) services to accelerate cloud deployments and reduce migration risk.


SAP HANA SPS 09 greatly improves the application development experience with a number of new enhancements to both the tools and the core platform. On the tools front, the SAP HANA Web-based Development Workbench has a new SQLScript editor, SQLScript debugger and Calculation View editor, while SAP HANA Studio has extended its code completion capabilities and offers end-to-end debugging. Additionally, the Visual Application Function Modeler (AFM) allows developers to build reusable application functions based on complex algorithms. The AFM provides prebuilt integration with the predictive analysis library (PAL), business function library (BFL) and R, allowing developers to easily tailor the existing algorithms to specific analytical application needs and then run them in SQLScript.


On the platform front, SAP HANA SPS 09 turbocharges advanced analytics with many new innovations. It exposes to developers a native Graph Engine to efficiently persist and analyze property graphs without duplicating data. This simplifies the analysis of complex relationships and the uncovering of new insights. Additionally, SAP HANA SPS 09 includes Nokia/HERE maps and spatial content to facilitate the development of applications that leverage location-based services. Text mining capabilities have also been added to the platform to help identify relationships among documents. As an example, documents can be stored, indexed and ranked based on a reference document or key terms.


Big data and IoT open new challenges and opportunities for businesses.  SAP HANA SPS 09 comes packed with new functionality to help your company operate in this new reality and create instant value from the variety of your big data assets. The new smart data streaming capability captures, filters, analyzes and takes action on millions of events (streaming data) per second, in real time.  To optimize available resources, smart data streaming can be configured to pass high-value data into SAP HANA for instant analysis and direct the remaining data to Hadoop for historical and trending analysis.  Of course, HANA can analyze data in both HANA and Hadoop at the same time through Smart Data Access, but HANA can also now call a Hadoop MapReduce job directly and bring the result set back from Hadoop/HDFS for additional analysis.  Another powerful feature is dynamic tiering, which can move warm data from memory to disk, still in columnar format, optimizing in-memory usage for extremely large data volumes.  This will enable customers to achieve great performance on hot data and great price-performance on hot and cold data.  Moreover, smart data integration and smart data quality in SAP HANA SPS 09 enable the provisioning, filtering, transformation, cleansing and enrichment of data from multiple sources into SAP HANA, eliminating the need for separate ETL or replication stages. Pre-built adapters are available for common data sources such as IBM DB2, Oracle, Microsoft SQL Server, OData, Hadoop and Twitter, and an open SDK is also available to build new adapters. In addition, special adapters are provided to consume SAP Business Suite data from databases such as IBM DB2, Oracle and Microsoft SQL Server. Smart data integration is architected for cloud deployment, requiring no firewall exceptions to provision data to HANA in the cloud.


Openness is an exciting theme in SAP HANA SPS 09. You can now choose from 400+ SAP HANA configurations supplied by 15 different SAP partners; this is 4 times the number of server configuration options we had in August 2013. Co-innovation with partners such as IBM and Hitachi has allowed us to leverage system virtualization at new levels with support for LPARs for optimal system resource utilization. Scale-up configurations have also expanded with new certified appliances from Cisco, HP and SGI. Additionally, relaxed hardware requirements, support for lower-cost Intel E5 processors, and new guidelines for utilizing existing networking components - part of SAP HANA Tailored Datacenter Integration - provide a new choice to start small and grow your configurations as needed, reducing adoption barriers and improving TCO for SAP HANA.


To learn more about SAP HANA SPS 09, we are delivering resources and hosting a number of Live Expert Sessions to provide a technical view into the new SAP HANA SPS 09 innovations.

 

In the old days before the advent of Big Data, the Cloud and the Internet of Things, managing cash flow was one of the primary concerns of any  enterprise. Today, as we accelerate towards the networked economy, managing the flow of information is becoming just as critical to the success of business.

 

What’s different now?


The big difference between data now and before is that in the past, only humans created and collected data as they went about their lives and their work. Now, with the rise of mobile devices, social media and machines that create and collect data, not only are we experiencing an explosion of data, but humans are no longer the center of the data system; they are just another node in an increasingly autonomous data universe.


Consider the future for supply chains, for example, where vending machines communicate with each other and make decisions on replenishment and maintenance without human interaction.


Or consider how resourceful retailers use "hijack alert" apps to promote discounts: using a messaging app with GPS tracking technology, the retailer recognizes when customers enter a competitor’s store and sends them a notice about its own promotion.

 

Or think about the benefits to hospital clinics when maintenance for medical devices is predicted in advance and any repairs scheduled to minimize inconvenience to patients.

 

These are typical examples of how data, devices, social media and the Internet of Things are rapidly transforming the way we live, work and do  business, raising a number of questions.

 

Are machines making the right decisions? Is the data on our devices secure? Are the new business models invading our privacy as consumers and citizens? How can we manage vast volumes of data coming from different sources? How do we make data useful for business purposes? How can we achieve insight quickly, in real-time?

 

Taking a holistic approach

 

Providing structure and governance is the first step to getting a grip on data.  Information governance is a discipline that includes people, processes, policies, and metrics for the oversight of enterprise information to improve business value. It has always been crucial for any organization but is now more important than ever.

 

The information landscape is becoming increasingly complex with a proliferation of systems and technology to try and make sense of it all. The danger lies in taking a fragmented approach to governing the many types of information from so many different sources. Managing data in silos is not only expensive; it is not sustainable in the long term because the business lacks the transparency to fully understand what is happening, making it more and more difficult to achieve strategic goals.

 

The solution is to simplify the IT landscape and to take a holistic approach to information governance. SAP has the platform and framework to help enterprises do both.

 

Helping enterprises Run Simple

 

At SAP, we are already engineering our solutions for the changing world around us, building everything on the principle of Run Simple.

 

The SAP Data Management portfolio, for example, which runs on the SAP HANA Platform, provides a complete, end-to-end approach, merging transactional and analytical workloads so they are processed on the same data - structured and unstructured - in real time. This avoids bottlenecks and data duplication, saving costs, ensuring compliance and providing transparency.

 

SAP can support your enterprise with high-performance, next-generation information governance services, smarter and seamless access to all types of data, plus democratised insight and information transparency via advanced analytics. And of course, all this can be deployed on premise, in the cloud, or both, backed up with increased security, identity management, and compliance with emerging privacy regulations.

 

Our advice to enterprise decision makers dealing with modern trends like big data, social media, the cloud and Internet of Things is to take action now because managing the flow of information is just as critical to the success of your business as managing cash flow!

 

In case you missed Steve Lucas's keynote at SAP TechEd && d-code in Las Vegas yesterday, I announced the release of a new SAP thought leadership paper on information governance. This paper covers the substantial business advantages, the emerging challenges, SAP’s recommended framework and holistic platform perspective, as well as real-world success stories, including how we ourselves achieved €31 million in total benefits over 2 years.


SAP's latest paper on Information Governance can be downloaded here:

The SAP HANA journey towards openness continues today with the announcement of new cost-optimized, entry-level servers for SAP HANA targeting price-sensitive customers at the low end of the enterprise market, and new large scale-up, high-performance computing configurations for SAP HANA at the high end of the enterprise market.

 

Every customer use case is different. For this reason, SAP introduced the Tailored Datacenter Integration (TDI) deployment model for SAP HANA, providing customers with additional flexibility and enabling significant cost savings when integrating SAP HANA into their data centers. SAP HANA TDI has delivered on its promise: today, there are more than 400 certified SAP HANA configurations available, giving customers the ultimate choice and flexibility to pick the best option for their use case and budget.

 

Today’s announcement of TDI Phase 3: SAP HANA on Intel Xeon E5 marks an important milestone on SAP HANA's continuing journey towards openness. The entry-level, two-socket, single-node SAP HANA E5 configurations, ranging in size from 128 GB to 1.5 TB, bring the power of SAP HANA real-time business to commodity hardware at the extremely attractive price of roughly $10K per server. SAP has also made sure that the hardware procurement process for these boxes is quick and easy, allowing customers to size and order systems from their preferred hardware vendor in three easy steps. General availability is planned for November 10, 2014.

 

The rapidly growing SAP HANA partner ecosystem is another important pillar of the openness strategy. SAP collaborates very closely with its partners to quickly incorporate the latest technology innovations into SAP HANA. The introduction of Intel Xeon Ivy Bridge processors on February 18th, followed by announcements at SAPPHIRE this summer of support for SAP HANA on Red Hat, SAP HANA on VMware, and the start of the SAP HANA on IBM Power Test and Evaluation program, resulted in a flurry of activity and more than 300 newly certified configurations for SAP HANA.

 

Today, most hardware partners have certified their SAP HANA system offerings for both SUSE Linux and Red Hat Linux, many offer SAP HANA on VMware configurations in appliance and TDI models, and all of them constantly look for new and innovative ways to improve their system offerings and lower TCO. Cisco's certification of an 8-socket HANA appliance, HP's introduction of a large 16-socket, 12 TB HANA appliance, and the addition of several new storage vendors offering the latest hybrid and flash storage technologies are only a few examples of the new innovations being introduced almost daily by SAP HANA partners.

 

The latest addition of SGI to the list of SAP-certified hardware appliance partners opens up new opportunities for large, single-node SAP HANA deployments requiring extreme performance and scalability.

 

Customers that prefer to run SAP HANA virtualized will also have more choices with SP09: Hitachi LPAR logical partitioning provides server virtualization at the firmware level, dividing server hardware resources into multiple partitions and resulting in increased utilization and reduced licensing costs.

 

So have it your way! Whether you are a small business looking to gain new business value from real-time analytics, or a large global enterprise looking to simplify your landscape and maximize IT performance, SAP HANA has the answer for you. Fasten your seat belt and start your SAP HANA journey now: SAP HANA will take your business to a new high at the speed of thought!

Are you looking to understand the SAP HANA innovation adoption process? New customer adoption journey maps will guide you through five simple steps to get to value quickly.

 

  1. EXPLORE business cases in your industry and how the solution can help you meet your business needs
  2. IDENTIFY the value of business cases with a tailored design-thinking workshop and get a personalized roadmap
  3. TRY solutions with free trial offers and starter editions and make your own decision
  4. DEPLOY with a choice of deployment options: on premise, in the cloud, or hybrid
  5. EXPERIENCE the value that SAP HANA and UX innovations can bring to your business

 

11 new customer journey maps for SAP HANA and UX innovations are now available:

 

  1. SAP Business Suite powered by SAP HANA
  2. SAP Business Warehouse powered by SAP HANA
  3. SAP Simple Finance powered by SAP HANA
  4. SAP Customer Engagement Intelligence powered by SAP HANA
  5. SAP Fraud Management powered by SAP HANA
  6. SAP Demand Signal Management powered by SAP HANA
  7. SAP Sales and Operations Planning powered by SAP HANA
  8. SAP HANA Enterprise Cloud
  9. SAP HANA Cloud Platform
  10. SAP Fiori UX
  11. SAP Screen Personas

 

SAP HANA now has over 4,000 customers globally, over 1,000 use cases listed on saphana.com, and hundreds of live customers that are a testament to the great progress and innovation that can be achieved with SAP HANA. You can start on a path of innovation, acceleration, and radical simplification now – follow the five simple steps on the map and start driving value today.

In the first part, I discussed the system landscape for the suite on HANA, looking at large implementations. A crucial feature was the split of data into actual and historical data, and the fact that unused tables can be purged from main memory after a certain period of time.

 

The smaller the data footprint of the actual data becomes, the faster scan and filter operations in HANA will run. This might no longer have any real impact on response times for the user, because they are already fast, but it reduces the overall CPU consumption on the database server: we can simply run more transactions on the same system. The same is true for the move to sERP, where all redundant tables and the aggregate tables are replaced by SQL views for transactional data entry. Since data entry transactions become much slimmer, CPU time in the database is significantly reduced. This matters for taking in sales orders from other systems, direct order entry, warehouse movements, automatic replenishment of products, manufacturing purchase orders, processing of incoming and outgoing payments, and many more – all of which can be time-critical processes in a company. Yes, displaying an account balance, for example, now consumes a bit more CPU, but such transactions are rare in comparison to the gains in data entry. And remember, HANA no longer needs a large number of database indices; it scans the attribute vectors (columns) of the tables instead.
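The following sketch illustrates the idea of replacing a maintained aggregate table with an SQL view (all object names are hypothetical): data entry performs one slim insert into the line-item table, while account totals are computed on the fly from the column store only when somebody asks for them.

```python
# Illustrative only: GL_LINE_ITEMS and GL_ACCOUNT_TOTALS_V are hypothetical names.
# The materialized totals table disappears; a view computes balances on demand,
# so data entry performs a single slim insert instead of also updating totals.
CREATE_TOTALS_VIEW = """
CREATE VIEW GL_ACCOUNT_TOTALS_V AS
    SELECT company_code,
           gl_account,
           fiscal_year,
           SUM(amount) AS balance
    FROM   GL_LINE_ITEMS
    GROUP  BY company_code, gl_account, fiscal_year
"""

INSERT_LINE_ITEM = """
INSERT INTO GL_LINE_ITEMS (company_code, gl_account, fiscal_year, amount)
VALUES (?, ?, ?, ?)   -- one insert, no second write to an aggregate table
"""
```

Reading a balance now scans the amount column instead of fetching a pre-computed row, which is exactly the "a bit more CPU on display" traded for much cheaper data entry described above.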

 

Many reporting functions migrated away from the transactional system over the last decade. They will now come back, because they can. The system load clearly shifts in the direction of more and more sequential, read-only processing; the vast majority of database activity will become read-only and will access only the actual data. The other main hardware cost factor is main memory. We want to keep the data mainly in memory, but a lot of data is unused or at least rarely used. As you may know by now, unused columns take up no space at all (in contrast to conventional row-oriented databases). With the purging algorithm we can reduce the data footprint in memory even further: tables, or parts of tables, that have not been accessed for a certain period of time are dropped from memory. As explained in the first part of this blog, historical data cannot change anymore and therefore does not have to be considered with regard to cache coherence.
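As a conceptual illustration of that purging idea (not SAP HANA's actual unload mechanism), the toy cache below loads a column into memory on first access and drops it again once it has been idle longer than a threshold.

```python
import time

class ColumnStoreCache:
    """Toy model of the purging idea: columns are loaded into memory on first
    access and dropped again if they have not been touched for max_idle_s.
    (Conceptual sketch only, not SAP HANA's actual implementation.)"""

    def __init__(self, load_column_from_disk, max_idle_s=3600):
        self.load_column_from_disk = load_column_from_disk
        self.max_idle_s = max_idle_s
        self._columns = {}  # (table, column) -> (data, last_access_time)

    def read(self, table, column):
        key = (table, column)
        if key not in self._columns:                 # unused columns cost no memory
            data = self.load_column_from_disk(table, column)
        else:
            data, _ = self._columns[key]
        self._columns[key] = (data, time.time())     # remember the last access
        return data

    def purge(self):
        """Drop every column that has been idle longer than the threshold."""
        now = time.time()
        for key in [k for k, (_, t) in self._columns.items()
                    if now - t > self.max_idle_s]:
            del self._columns[key]
```

Because historical partitions never change, evicting and later reloading them needs no coherence bookkeeping beyond what is sketched here.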

 

When the data footprint plus workspace becomes much smaller than the typical blades used in the cloud (6, 3, or 1 terabytes), multiple systems can run on the same blade using virtualization. The tradeoff between capacity and performance shifts towards capacity (the main cost driver) and, for even smaller systems, eventually towards multi-tenancy, since performance is more than sufficient for the users. Test and development systems will always be virtualized and can vary in size on demand (at system start).
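A quick back-of-the-envelope calculation shows why this shifts the tradeoff towards capacity; all figures below are illustrative assumptions, not SAP sizing guidance.

```python
# Back-of-the-envelope sizing (all numbers are illustrative assumptions):
# how many virtualized systems fit on one blade once the data footprint plus
# workspace for intermediate results is accounted for.
def systems_per_blade(blade_tb, actual_data_tb, workspace_factor=2.0):
    footprint_tb = actual_data_tb * workspace_factor   # hot data + working space
    return int(blade_tb // footprint_tb)

for blade_tb in (6, 3, 1):                             # typical blade sizes from the text
    n = systems_per_blade(blade_tb, actual_data_tb=0.25)
    print(f"{blade_tb} TB blade: {n} systems of ~250 GB hot data each")
# -> 12, 6, and 2 systems respectively under these assumptions
```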

 

I hope that as soon as there is enough experience in the cloud with sERP (or sFIN), SAP will make these features available to non-cloud implementations of the suite on HANA as well. Non-HANA-based ERP systems operate differently: transactional performance is mainly achieved through a large number of database indices, and the database tries to find the data in its caches. There, the split into actual and historical data would only help for truly sequential processes reading all data of a given table.

Learn, Share and Grow with SAP HANA: SAP HANA Operation Expert Summit in two locations.


  • November 20, 2014, in Newtown Square, PA
  • December 4, 2014, in Palo Alto, CA


As an IT expert working with the SAP HANA platform, you are invited to be part of our inner circle at an exclusive event: the SAP HANA Operation Expert Summit.


 

Don't expect your standard summit with speakers and coffee breaks. This is an interactive occasion that welcomes your full participation.


Panel discussions and breakout sessions offer unique opportunities to share your experience and ideas with us. We want to hear what you think of SAP HANA operations.


We want to know:

  • What are your pain points or challenges?
  • What advice, tips, or tricks do you have for other users?
  • What features would you like to see in the future?


In addition, speed-networking sessions with SAP experts from the SAP HANA development organization will offer you advice on how to best operate SAP HANA — from planning and building, all the way to running, giving you knowledge and insights you can start using immediately.
Space is limited!


Register here for Newtown Square. Get the full AGENDA here.

Register here for Palo Alto. Get the full AGENDA here.


Do you know another SAP HANA operation specialist in your company? Don't hesitate to forward this message. Let's keep making progress!


We look forward to welcoming you in Newtown Square, PA, on November 20, 2014, and in Palo Alto, CA, on December 4, 2014.

SAP HANA Marketplace (http://marketplace.saphana.com) is a key part of SAP’s cloud vision (The Cloud company Powered by HANA) and execution. It is the online commerce ecosystem for all things related to the SAP HANA platform. Here, customers can discover, try, and buy solutions based on SAP HANA across 25 industries and 12 lines of business – and partners gain a low-touch, low-cost channel to connect with SAP customers and commercialize their innovations. 

 

Also, the SAP HANA Cloud Platform (HCP), the in-memory Platform-as-a-Service offering from SAP, can ONLY be purchased at the SAP HANA Marketplace. HCP enables customers and developers to build, extend, and run applications on SAP HANA in the cloud, and is available in many flexible subscription models for apps, database, and infrastructure, with memory capacities from 32 GB to 1 TB+. With a few clicks and within minutes, you get instant access to the full power of SAP HANA.

 

The details of our presence can be found in this blog: http://scn.sap.com/community/cloud-platform/blog/2014/09/23/come-experience-the-sap-hana-marketplace-at-techeddcode-2014

With recent announcements on SAP HANA making headlines, we thought: what better way to get your SAP HANA questions answered than during a tweetchat! We held a #SAPChat last week that generated over 200 tweets within an hour. Our participants Steve Lucas, Andy Sitison, and John Appleby were actively tweeting away and responding to questions that came through our stream. If you were not able to participate or follow along with the stream, no worries – I’ve highlighted a few key topics below:


1. SAP TechEd && d-code

SAP TechEd && d-code Las Vegas is just around the corner. Get all the information you need here, from searching for sessions to attend live or online: http://www.sapdcode.com/2014/usa/home.htm, to building your personalized agenda here: http://sessioncatalog.sapevents.com/go/agendabuilder.home/?l=84.


Q: @tpowlas: @D_Sieber @SAPMentors @nstevenlucas sure, any sneak previews to share for #SAPtd to share? #SAPHANA #SAPChat

  • @applebyj: @tpowlas @D_Sieber @SAPMentors @nstevenlucas I expect we will hear a lot about the upcoming #SAPHANA SP09 release #sapchat
  • @applebyj: @tpowlas @D_Sieber @SAPMentors @nstevenlucas I hear improvements in performance, scale-up support, core SQL, data tiering #sapchat


Q: @ASitison: So TechED is just around the corner, what should the masses keep an Eye on? #SAPChat

  • @tweetsinha: @ASitison > customer momentum in 1000s, openness, truth, platform, analytics on HANA, all social streams on HANA, IOT on HANA... #sapchat

 

2. SAP HANA Customers

Customers are important to us and we value their comments and concerns. Many of our customers have shared their stories and experiences with us and you can view them through our SAP HANA Customer Reference eBooks: http://www.saphana.com/community/learn/customer-reference-ebooks or hear them share their stories through these videos: http://www.saphana.com/community/learn/customer-stories/


Q: @Rafikul_Hussain: How many live customer for BW & ERP on HANA in production environment #SAPHANA #SAPChat

  • @tweetsinha : @Rafikul_Hussain > suite is 160 live, all are high availability


Q: @Rafikul_Hussain: How many customer using ERP on HANA in High availability #SAPHANA #SAPChat

  • @applebyj: @Rafikul_Hussain I'm working on right now with over 100TB of HANA. HA/DR, multiple datacenters. It's awesome so far. #sapchat
  • @applebyj: @Rafikul_Hussain But to be honest... I would think of the 350 live SoH customers. You don't run SoH without HA/DR :-) #sapchat
  • @applebyj: @Rafikul_Hussain Sorry correction for SoH 160 live. #sapchat
  • @nstevenlucas: RT @applebyj: @Rafikul_Hussain ALL HANA customers have HA/DR scenarios - you dont run suite of BW w/o that  #sapchat


Q: @satdesai: @D_Sieber @kensaul @nstevenlucas @ASitison @HenrikWagner @applebyj How many customers are in production on SAP HANA #saphana #sapchat

  • @applebyj: @satdesai @D_Sieber @kensaul @nstevenlucas @ASitison @HenrikWagner Over 1400 total prod customers. #sapchat


Q: @Beyanet_tr: #SAPChat #SAPHANA any customer success story about BW on HANA and HANA Live Rapid Deployment on the same appliance?

  • @applebyj: @Beyanet_tr There are some doing this. More common to separate it TBH for TCO reasons. #sapchat


Q: @tweetsinha: #sapchat @nstevenlucas > as a customer why should I not flip the switch to 12c vs. #HANA?

  • @nstevenlucas: @tweetsinha: #sapchat @nstevenlucas > as cust. why should I NOT flip the switch to 12c vs. #HANA? SL: that propagates WRONG architecture
  • @nstevenlucas: RT @tweetsinha: #sapchat why #HANA vs. Orcl 12c? HANA is about less data modeling @applebyj
  • @nstevenlucas @tweetsinha @nstevenlucas My answer would be - it only works for simple scenarios. Real-world scenarios break. #sapchat with better performance...Orcl 12c. is opposite.

 

3. The future of SAP HANA

There has been tremendous progress around SAP HANA throughout the years, and Steve Lucas explains why SAP HANA is the future: http://www.saphana.com/community/blogs/blog/2014/09/28/sap-hana-is-the-future 


Q: @naveenketha: Whats #SAP future plans to increase #HANA  adoption ? #SAPChat

  • @applebyj: @naveenketha I think a focus on helping customers build the business case, plus more and more use cases. @nstevenlucas


Q: @applebyj: What's the most important #saphana trend for 2015? @nstevenlucas #sapchat

  • @nstevenlucas: RT @applebyj: What's the most important #saphana trend for 2015? @nstevenlucas #sapchat SL:SoH is PROVEN. need cust. to move OFF oracle


Q: @tweetsinha: #sapchat @nstevenlucas > why do you think HCP is a big deal?

  • @nstevenlucas: RT @tweetsinha: #sapchat @nstevenlucas > why do you think HCP is a big deal? SL: reduces ext & cust of SAP on prem costs by upwards of 100x

 

I hope this tweet chat has shed light on some of the burning questions you’ve had, and I want to thank everyone who participated, both in asking and in answering questions.

 

This tweet chat may be over, but it certainly won’t be our last! Are there more questions you have around SAP HANA? Feel free to leave a comment below with your questions and we will be sure to get you an answer or set up another tweet chat.
