Wednesday 13 April 2016

Introducing SAP HANA Vora 1.2

SAP HANA Vora 1.2 was released recently, and with this new version we have added several new features to the product. Some of the key ones I want to highlight in this blog are:

  • Support for the MapR Hadoop distribution
  • A new “OLAP” modeler to build hierarchical data models on Vora data
  • A discovery service based on open-source Consul, to register Vora services automatically
  • A new catalog that replaces ZooKeeper as the metadata store
  • Native persistence for the metadata catalog using a distributed shared log
  • A Thriftserver for client access through JDBC-Spark connectivity

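To give a feel for how Vora tables surface in Spark SQL, here is a minimal sketch of registering and querying a Vora table through the Spark data source API. The `com.sap.spark.vora` package name follows earlier Vora releases; the option names (`tableName`, `paths`) and the sample file path are illustrative and may differ in your installation:

```sql
-- Register a Vora table in the Spark SQL context.
-- Option names are illustrative and may vary between Vora versions.
CREATE TABLE sales (
  customer_id INTEGER,
  amount      DOUBLE
)
USING com.sap.spark.vora
OPTIONS (
  tableName "sales",                 -- name of the table inside Vora
  paths     "/user/vora/sales.csv"   -- source file(s) in HDFS
);

-- Once registered, the table can be queried like any Spark SQL table:
SELECT customer_id, SUM(amount) AS total
FROM sales
GROUP BY customer_id;
```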
The new installer in Vora 1.2 extends the simplified installer so that Hadoop management tools such as the MapR Control System can be used to deploy Vora on all Hadoop/Spark nodes. This adds to the support for the Cloudera Manager and Ambari admin tools provided in version 1.0.

The Vora Modeler provides a rich UI for interacting with data stored in Hadoop/HDFS (including Parquet and ORC files) and S3, using either the SQL editor or the Data Browser. Once the Vora tables are in place, you can create “OLAP” models to build dimensional data structures on top of this data.
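The hierarchical models behind the OLAP modeler can also be expressed directly in SQL. The sketch below follows the general shape of the Vora hierarchy extensions; the table and column names (`org`, `pred`, `succ`, `ord`) are hypothetical, and the exact syntax may vary by version:

```sql
-- Build a level-aware view over a parent/child table (illustrative syntax;
-- check the Vora documentation for the exact hierarchy grammar).
CREATE VIEW org_hierarchy AS
SELECT name, LEVEL(node) AS lvl
FROM HIERARCHY (
  USING org AS child
  JOIN PARENT parent ON child.pred = parent.succ
  ORDER SIBLINGS BY ord ASC
  START WHERE pred IS NULL
  SET node
) AS h;
```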

At the core of Vora, we aim to enable distributed computing at scale when working with data in both SAP HANA and Hadoop/Spark environments. By pushing processing down to where the data resides and reducing data movement between the two platforms, we deliver fast query processing for extremely large data volumes. We have also introduced new features such as distributed partitions and co-located joins to achieve these performance optimizations.
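As an illustration of the co-located join idea: if two tables are partitioned by the same function on their join key, matching partitions can be placed on the same node and the join evaluated without shuffling data across the cluster. The DDL below is an assumption based on that partition-function concept, not the exact Vora 1.2 syntax:

```sql
-- Illustrative sketch: hash-partition both tables on the join key so the
-- join can run co-located. Consult the Vora docs for the exact DDL.
CREATE PARTITION FUNCTION pf (c INTEGER) AS HASH PARTITIONS 4;

CREATE TABLE orders    (customer_id INTEGER, total DOUBLE)
  USING com.sap.spark.vora OPTIONS (...) PARTITION BY pf(customer_id);
CREATE TABLE customers (customer_id INTEGER, name  VARCHAR(100))
  USING com.sap.spark.vora OPTIONS (...) PARTITION BY pf(customer_id);

-- With identical partitioning on the join key, this join avoids a shuffle:
SELECT c.name, SUM(o.total)
FROM orders o JOIN customers c ON o.customer_id = c.customer_id
GROUP BY c.name;
```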

HANA Vora went GA in early March, and we are already seeing several customer use cases that enable Big Data analytics and IoT scenarios. If you are at ASUG/SAPPHIRE in May 2016, stop by to hear real-life customers discuss their implementations and the insights they have gained from these technologies.

The Vora Developer Edition has been updated to version 1.2; you can access it here.

