How much space do I need in a BW on HANA environment compared to a BW on a traditional RDBMS environment?
How do I perform proper sizing for BW on HANA?
In general, a customer can expect an average compression factor of 4-8, although this varies from case to case. Many aspects influence the compression factor, especially the existing BW architecture.
The compression achieved varies considerably between customers because, for example, index tables, aggregate tables, and DSO change-log tables are eliminated, and every customer uses these to a different degree.
Red Bull, as an early adopter, experienced an 80% reduction in database size after migrating from BW on RDBMS to BW on HANA.
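As a rough back-of-the-envelope illustration of the compression factors above (this is only a sketch; the authoritative sizing comes from the ABAP report in SAP Note 1736976, and the function name here is hypothetical):

```python
def estimate_compressed_size_gb(source_db_gb, compression_factor=4):
    """Rough estimate of the compressed data footprint after migration.

    compression_factor: assumed average (the text cites a typical range
    of 4-8); actual results vary by customer and BW architecture.
    """
    return source_db_gb / compression_factor

# Example: a 2048 GB source database at the conservative end of the range
print(estimate_compressed_size_gb(2048, 4))  # 512.0
print(estimate_compressed_size_gb(2048, 8))  # 256.0
```

Note that this estimates only the compressed data size, not total memory requirements (working memory, runtime objects, etc.), which is exactly why the sizing report from Note 1736976 should be used instead.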
SAP Note 1736976 describes in detail how to size a BW on HANA system. Note 1637145 describes how to use DB scripts in case you cannot use the recommended ABAP report described in Note 1736976.
Please subscribe to the OSS notes to receive regular updates.
SAP Quicksizer: http://service.sap.com/quicksizer
For more detailed information, two webinars (each with recording and presentation) are available: an overview of all BW on HANA sizing tools, and a deep dive that discusses in detail the main sizing tool for BW on HANA described in Note 1736976.
It is very beneficial to make sure the BW system is cleaned up before the migration and that the corresponding housekeeping tasks are executed regularly. More information about clean-up / housekeeping tasks can be found here: http://www.saphana.com/docs/DOC-2770
A new blog describes the entire sizing process and its considerations very well.
There is also a very detailed document created by SAP AGS that covers many aspects of HANA sizing.
Is there a size limit for a BW on HANA migration?
No. SAP imposed a size limit only during the ramp-up phase. Since BW on HANA became generally available, there has been no size limit for a migration. If a customer requires a larger HANA instance than is available in the Product Availability Matrix (PAM) on the Service Marketplace, they can reach out to SAP for a customer-specific certification.
What is the impact on the database size when converting a non-Unicode compliant DB to a Unicode compliant DB as prerequisite for migrating BW to BW on HANA?
The sizing scripts attached to Note 1637145 assume a Unicode-enabled source database. If the scripts are executed on a non-Unicode database, add a small uplift (since BW operates mainly on numeric data, an uplift of no more than 10% should be sufficient). Please see the presentation attached to SAP Note 1637145.
The ABAP sizing report provided by Note 1736976 already determines whether the source database is Unicode or non-Unicode; no further adjustments are required in that case.
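The non-Unicode uplift described above is a simple percentage adjustment. A minimal sketch, assuming a hypothetical helper name and the "no more than 10%" uplift from the text (this applies only to results from the DB scripts of Note 1637145, not to the ABAP report, which handles Unicode detection itself):

```python
def adjust_for_non_unicode(script_result_gb, uplift=0.10):
    """Apply a small uplift to a sizing-script result obtained on a
    non-Unicode source database.

    uplift: assumed fraction; the text suggests no more than 10%,
    since BW operates mainly on numeric data.
    """
    return script_result_gb * (1 + uplift)

# Example: a 500 GB script result on a non-Unicode database
print(adjust_for_non_unicode(500))  # ~550 GB
```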
What are the biggest SAP HANA hardware configurations available?
All SAP-certified HANA hardware configurations are listed in the HANA Product Availability Matrix (PAM). In addition, SAP will certify further configurations as required by customer scenarios. Please check with your hardware vendors and your SAP Account Executive if you are interested in a configuration not listed in the HANA PAM.
The pace of HANA hardware development is impressive: in 2010 the biggest HANA hardware configuration available was 2 TB of memory with 512 GB per node. At SAPPHIRE 2012, hardware partners presented a 16-node HANA system with 16 TB of memory (1 TB per node), and SAP showcased a custom HANA system running 4,000 cores with 100 TB of memory. As of September 2013, 56-node systems are available with both 512 GB and 1 TB nodes.
What is the best way to do an initial sizing for a completely new BW on HANA system if the customer doesn't have a BW system yet?
The best tool to use here is the Quicksizer. The easiest way is to take the corresponding tables in the source systems and enter that information in the DSO section of the Quicksizer's BW on HANA questionnaire. As a general best practice, multiply the estimated rows per DSO by a factor of 2x-3x to account for the additional persistent data layers that are part of the LSA++ design. InfoCubes no longer need to be entered in this case, as they are covered by the high-level sizing estimate described above.
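The 2x-3x multiplier for LSA++ persistence layers amounts to simple arithmetic on the source-table row counts. A hedged sketch (function name and default factor are illustrative; the actual input goes into the Quicksizer's DSO section):

```python
def estimate_dso_rows(source_table_rows, lsa_factor=2.5):
    """Estimate the row count to enter per DSO in the Quicksizer.

    lsa_factor: assumed multiplier within the 2x-3x range suggested
    in the text, covering the additional persistent data layers of
    an LSA++ design.
    """
    return int(source_table_rows * lsa_factor)

# Example: a source table with 10 million rows
print(estimate_dso_rows(10_000_000))  # 25000000
```

Picking the lower or upper end of the 2x-3x range depends on how many persistence layers the planned LSA++ architecture will actually use.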