Microsoft has announced a new in-memory OLTP product, code-named Hekaton, that will be available in the 2014-2015 timeframe. In addition, they have introduced some new columnar capabilities in SQL Server 2012. Industry analysts, and Microsoft themselves, are positioning these products against HANA.
There are lots of technical reasons to believe that HANA is a far superior product today than what Microsoft has announced will be available a few years out. The SQL Server products require a batch process to build the index that drives the new columnar features; the OLTP product does only OLTP, so a redundant copy of the data is required; there are odd SQL limitations that require hand-tuning to get it all to work; and so on. But in this blog I'd like to focus on a bigger, more strategic, more important distinction: the Microsoft products, today and upcoming, are so old school, so 1990s, maybe even 1980s.
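To make the batch-build point concrete, here is a sketch of what the SQL Server 2012 columnar feature looks like in practice (the table and column names are hypothetical, chosen only for illustration):

```sql
-- Illustrative sketch: SQL Server 2012 nonclustered columnstore index.
-- Table and column names here are made up for the example.
CREATE NONCLUSTERED COLUMNSTORE INDEX ix_sales_columnstore
ON dbo.Sales (OrderDate, CustomerID, Amount);

-- In SQL Server 2012, once this index exists the underlying table
-- becomes read-only. Loading new data means disabling or dropping
-- the index, loading, and then rebuilding it as a batch job:
ALTER INDEX ix_sales_columnstore ON dbo.Sales DISABLE;
-- ... bulk load new rows ...
ALTER INDEX ix_sales_columnstore ON dbo.Sales REBUILD;
```

That rebuild is the batch window referred to above, and it is why the columnar feature cannot serve a continuously updated OLTP workload.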
For the last 30 years the DBA has been the guru making the business hum by tuning query syntax, applying indexes, building cubes, and moving data from OLTP databases to operational data stores to data warehouse databases to data marts and back. They were the consummate engineers, working under serious constraints imposed by the technology. Every time a new business function came on board, the DBA was called in to re-architect the ecosystem to somehow make it all work. Every query had to be approved so that nothing broke the fragile system... and ad hoc queries were often forbidden or run only in exceptional cases. If you wanted to perform some mathematical analysis on the data you needed to export it to yet another "analytics" mart or SAS data farm. These disparate systems came to be called "stovepipes", and stovepipes propagated until the landscape looked like Mary Poppins' London.
For the BI/data warehouse part of the ecosystem, the shared-nothing architecture created an environment where you could consolidate some of the stovepipes into a single house with lots of fireplaces (OK, I'll give up on the stovepipe metaphor here). Shared-nothing gave us the basis for "big data" databases. Shared-nothing lets us grow a cluster as the business needs grow, whether that means more data, more users, or more sophisticated queries.
The starting point for HANA is the recognition that current and upcoming hardware technologies are capable of solving all of this in a single database instance, if only the database were rewritten to fully utilize the hardware. HANA supports OLTP, OLAP, and analytics workloads simultaneously. The starting point for HANA is the idea that there should be no tuning required: every query should run fast without an index and without pre-aggregation. The starting point for HANA is that replicating data all over the enterprise should be the exception, not the rule. The start is to remove the constraints.
You should see the numbers coming from our Petabyte Cloud project (here). We emulated 5600 concurrent users running a mixed workload against a 1000TB database, and the complex queries returned in under 2 seconds. SQL Server users are excited when complex queries complete in under 6 seconds on a tiny system after running an index build that takes an hour (here) -- and this was after tweaking the SQL syntax with a deep SQL tuning expert in the loop.
The Microsoft announcements are all old school. Six-second queries don't work when your users are querying from mobile devices. Single-node database technology does not work in the age of big data. Query tuning will never keep up with the needs of the end-user community. Microsoft also announced a new version of their shared-nothing SQL Server, along with a Hadoop interface. So their view of the world requires an in-memory OLTP database, with the data moving to a shared-nothing data warehouse (they were about 25 years late with shared-nothing), and then again to a data mart where a columnar index will be built to enable BI and analytics. It looks like 1995 in 2015.
Microsoft points out (see here) that one advantage they have is that SQL Server is "the data platform that customers are already using". Exactly! This 1995 architecture was designed for single-core 486 systems with 256MB of RAM. If you want to keep up, it is time to move on. SAP has.