
What’s New in SAP HANA Cloud in January 2023

 

We will cover the following topics:

  • SAP HANA Cloud tooling overall
  • SAP HANA database as one part of SAP HANA Cloud
  • Native storage extensions and innovations
  • Calculation view modelling in SAP HANA Cloud
  • SAP HANA Cloud, data lake

 

The renovated SAP HANA Cloud Central

SAP HANA Cloud Central, the tool to manage multiple SAP HANA Cloud instances, now integrates the functionality of the SAP HANA cockpit. This means that SAP HANA Cloud Central will eventually be the only tool you need to manage your SAP HANA Cloud instances.

You can still open the other two tools, the SAP HANA cockpit and the SAP HANA database explorer, from SAP HANA Cloud Central in the same way as before, so if you prefer the old behaviour, it remains available.

You can apply your existing knowledge of the SAP HANA cockpit to administer individual SAP HANA Cloud instances, because its functionality has been moved into SAP HANA Cloud Central.

New features have been introduced in SAP HANA Cloud Central, including a command palette that allows you to search the list of tasks that operators can perform on specific instances.

The compute metrics are now displayed in the header bar, and you can see the availability zones and backup size.

You can change the timeframe of the cards in SAP HANA Cloud Central to adjust the KPIs shown on each card. This allows you to do performance analysis right from the central location.

There is now a usage monitor, replacing the performance monitor, which allows you to visualize health metrics over a specific period of time.

The alerts application has been moved to SAP HANA Cloud Central and is now displayed at a landscape level. You can also navigate to the alerts from other applications in the tool.

The QRC4 version of SAP HANA Cloud Central shows a global view of your alerts, and you can change the time range, for example to the last seven days, to see the individual alerts for an instance.

Instead of opening the cockpit or database explorer from the access menu, you can now click on the instance you want to manage and see the individual monitoring and administration tasks appear in a panel.

The SAP HANA cockpit presents its information on a number of cards. When you click on a specific card, you get to an underlying application, which behaves much as it did before.

The management interface for SAP HANA Cloud has been changed. There are now a number of tabs that replace some of the cards, and at the top you can still perform tasks such as changing your authentication or copying your instance ID and configuration.

If you open the command palette, you can see a very extensive list of commands that you can run against specific instances.

At the very top of the instance page, you can see a navigation bar that allows you to quickly navigate to another application or view related applications.

In the new version of SAP HANA Cloud Central, some functionality has moved to the panel on the right-hand side, but if you still want to use the old behaviour of opening the tools, you can do so via the Actions menu.

 
Innovations in SAP HANA database explorer

SAP HANA provides the ability to store and query JSON data, and you can now generate the CREATE, SELECT, and INSERT statements for a collection. The new View JSON context-menu option also provides a convenient way to browse the JSON contents of a collection.
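
To illustrate, here is a minimal sketch, using the Python hdbcli driver, of the kind of statements that can be generated for a collection. The connection parameters and the collection name CUSTOMERS are placeholders, and the JSON document store must be enabled on the instance.

from hdbcli import dbapi

conn = dbapi.connect(address="<host>", port=443, user="<user>", password="<password>")
cur = conn.cursor()

# Create a collection, add one JSON document, and read it back.
cur.execute('CREATE COLLECTION "CUSTOMERS"')
cur.execute('''INSERT INTO "CUSTOMERS" VALUES('{"name": "ACME", "city": "Walldorf"}')''')
cur.execute('SELECT * FROM "CUSTOMERS"')
for row in cur.fetchall():
    print(row)  # each row holds one JSON document

cur.close()
conn.close()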

The statement library now supports the SAP HANA Cloud, data lake relational engine. It can be accessed via the context menu of an instance, and you can also mass-import statements into the statement library.

Within the SQL console, you can now run data lake relational engine statements in the background or against multiple instances. The results of the execution can be downloaded.

If you’re doing multiple inserts or multiple updates, you may wish to temporarily adjust the auto commit setting.
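
As a minimal sketch of the same idea outside the SQL console, here is how a batch of inserts can be wrapped in a single commit with the Python hdbcli driver; the table name MY_TABLE and the connection parameters are placeholders.

from hdbcli import dbapi

conn = dbapi.connect(address="<host>", port=443, user="<user>", password="<password>")
conn.setautocommit(False)           # switch auto-commit off for the batch
cur = conn.cursor()
try:
    for i in range(1000):
        cur.execute("INSERT INTO MY_TABLE VALUES (?)", (i,))
    conn.commit()                   # commit once at the end
except Exception:
    conn.rollback()                 # undo the whole batch on error
    raise
finally:
    cur.close()
    conn.setautocommit(True)        # restore the default behaviour
    conn.close()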

The trace configuration is available through the context menu of a data lake relational engine instance, and allows you to configure and turn on and off auditing and query plans.

To sum up, the SAP HANA database explorer includes a number of new features and, together with the renovated SAP HANA Cloud Central, this new tooling opens up great new possibilities for our users.

 
Native Storage Extension

Let’s take a closer look at the native storage extension, which is an integrated part of the SAP HANA database and provides additional storage capacity on disk for the SAP HANA database.

Before we get to the new feature of dynamic aging, let me give you a quick introduction to a very common pattern for using the native storage extension in the SAP HANA Cloud database: range partitioning.

SAP HANA Cloud provides dynamic range partitioning and dynamic aging. Dynamic range partitioning allows you to dynamically add partitions to range-partitioned tables, and dynamic aging allows you to automatically manage at which point in time older partitions are moved to NSE.

SAP HANA Cloud will create and add a new range partition for a table whenever you insert a new record whose value exceeds the existing range partitions.

A background job in SAP HANA adds the new partition, and another background job moves existing partitions to NSE if the difference between the new record and an existing record is bigger than the configured threshold of 100.
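
As a sketch of this pattern (placeholder table and column names, submitted via the Python hdbcli driver): a range-partitioned column table with a dynamic OTHERS partition, and an existing partition whose load unit is switched to PAGE LOADABLE so that it is managed by NSE. The exact statements for configuring dynamic aging itself are not shown here; please check the SAP HANA Cloud documentation for the precise syntax.

from hdbcli import dbapi

conn = dbapi.connect(address="<host>", port=443, user="<user>", password="<password>")
cur = conn.cursor()

# Range-partitioned table: once inserted values exceed the existing ranges,
# new partitions of width 100 are added automatically by the background job.
cur.execute("""
    CREATE COLUMN TABLE SALES (ID INT, AMOUNT DECIMAL(10,2))
    PARTITION BY RANGE (ID)
    ((PARTITION 0 <= VALUES < 100, PARTITION OTHERS DYNAMIC INTERVAL 100))
""")

# Move an existing partition to NSE by changing its load unit to PAGE LOADABLE.
cur.execute("ALTER TABLE SALES ALTER PARTITION 1 PAGE LOADABLE")

cur.close()
conn.close()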

SAP HANA supports dynamic aging for integer and date columns, and you can enable, disable, and modify it. Dynamic aging is driven by a background job that runs every 20 minutes.

In short, you can enable dynamic aging, modify the aging settings, trigger dynamic aging manually, and disable it again.

We have introduced a new column, IO_READ_SIZE, in the buffer cache statistics monitoring views. It gives you an indication of how much data has been read from disk to satisfy queries in your system that run on NSE data.
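
A minimal sketch of checking this new column from SQL (again via the Python hdbcli driver); the view below is the NSE buffer cache statistics monitoring view, and the column is assumed to appear as IO_READ_SIZE.

from hdbcli import dbapi

conn = dbapi.connect(address="<host>", port=443, user="<user>", password="<password>")
cur = conn.cursor()

# M_BUFFER_CACHE_STATISTICS reports on the NSE buffer cache; IO_READ_SIZE is
# the new column indicating how much data was read from disk into the cache.
cur.execute("SELECT HOST, PORT, IO_READ_SIZE FROM M_BUFFER_CACHE_STATISTICS")
for host, port, io_read_size in cur.fetchall():
    print(host, port, io_read_size)

cur.close()
conn.close()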

I’ll create a table T and insert a value to check what happens with the partitioning scheme. You will see that nothing happens at first, because the background jobs haven’t run yet: dynamic aging runs every 20 minutes, and the interval checker for dynamic range partitioning runs every 15 minutes.

There is no aging yet, but a new interval has to be created, because the difference between 15 and 180 exceeds the distance property of 100. So I manually trigger the background job that checks the interval, and now the load unit of the older partition is paged.

The I/O statistic in the buffer cache is increasing, and you can see it increase in intervals. This will be part of the NSE monitoring in the next release of SAP HANA Cloud Central.

 
 
Calculation View Modelling

The first feature I would like to highlight in this session is filter mapping. Another is the handling of snapshots: you can now decide whether a snapshot should be created during deployment, and also what should happen to it across deployments.

SAP also introduced undo and redo buttons, so you can go back to earlier states of your model without having to repeat steps manually.

In the GitHub project for QRC4 you will find examples of the new features. You can try them out by cloning the repository and opening the examples in SAP Business Application Studio.

Within each folder you will find an info file that makes the example easier to follow. If you run a query with filter mapping, the filter is applied to both tables of the join at the same time, which reduces the size of the intermediate results.

You can decide whether you want to see the join and mapping definitions, and you can create a filter mapping for joins.
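
Conceptually, filter mapping is like pushing the same predicate down to both join inputs instead of applying it only once after the join. The following plain-SQL analogy uses hypothetical tables ORDERS and DELIVERIES (both carrying a REGION column); it illustrates the idea and is not SQL generated by a calculation view.

# Without filter mapping: the filter runs only once, after the join,
# so both join inputs are read in full.
without_mapping = """
SELECT o.ORDER_ID, d.DELIVERY_DATE
FROM ORDERS o JOIN DELIVERIES d ON o.ORDER_ID = d.ORDER_ID
WHERE o.REGION = 'EMEA'
"""

# With filter mapping: the same filter is mapped to both join partners,
# so each input is reduced before the join and the intermediate result shrinks.
with_mapping = """
SELECT o.ORDER_ID, d.DELIVERY_DATE
FROM (SELECT * FROM ORDERS     WHERE REGION = 'EMEA') o
JOIN (SELECT * FROM DELIVERIES WHERE REGION = 'EMEA') d
  ON o.ORDER_ID = d.ORDER_ID
"""

print(without_mapping)
print(with_mapping)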

The next feature is about retaining snapshots. Here we have one query defined, and we can change it and specify that we want to create a snapshot when deploying and that we also want to keep the snapshot across redeployments.

The SAP HANA database within SAP HANA Cloud contains more innovations and enhancements than we are introducing today, but we will share a reference later on where you can find more details and insights.

SAP HANA Cloud, data lake: Character-length semantics for VARCHAR fields & Support for schemas that are compatible with SAP HANA

The VARCHAR data type in the SAP HANA database uses character-length semantics. In contrast, the VARCHAR data type in the SAP HANA Cloud, data lake relational engine was introduced many decades ago and uses byte-length semantics.

In the QRC4 release, SAP is introducing character-length semantics for the VARCHAR data type in the relational engine, so you no longer have to worry about tripling the nominal length of your VARCHAR fields to get the same effective length in the relational engine as in SAP HANA.
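
A quick Python illustration of why byte-length semantics forced that tripling: many characters occupy up to three bytes in UTF-8, so a column sized in bytes has to be declared roughly three times longer than one sized in characters.

text = "日本語"                        # 3 characters
print(len(text))                      # 3 -> fits VARCHAR(3) with character-length semantics
print(len(text.encode("utf-8")))      # 9 -> needs VARCHAR(9) with byte-length semantics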

The second feature is that SAP has introduced a CREATE SCHEMA statement for the relational engine, so you can now create multiple schemas owned by a single database user.

To create a schema in the relational engine, you need the create database schema system privilege. This is much more restrictive than the privileges required to create a user.
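
A minimal sketch of what this enables, with placeholder schema names; how you submit the statements (for example through the database explorer SQL console) is up to you.

# Two schemas owned by the same database user; before, additional schemas in
# the relational engine were tied to additional users.
statements = [
    "CREATE SCHEMA SALES_ARCHIVE",
    "CREATE SCHEMA HR_ARCHIVE",
]
for stmt in statements:
    print(stmt)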

The relational container is a tenant concept in the relational engine, roughly equivalent to HDI containers, and the idea of an isolated schema that nobody else has permissions on, including the admin users, does carry over to relational containers.
