For Sales Enquiry : 91084 68825, 63603 33901

We understand the business criticality of our customers' operations and data. That is why we have carefully designed our ERP servers for better performance, integrity, and security of your data.

Memory     vCPUs     Transfer       NVMe SSD
32 GB      8 Core    6 TB (FREE)    100 GB
32 GB      8 Core    6 TB (FREE)    200 GB
64 GB      16 Core   7 TB (FREE)    200 GB
64 GB      16 Core   7 TB (FREE)    400 GB
128 GB     32 Core   8 TB (FREE)    400 GB
128 GB     32 Core   8 TB (FREE)    800 GB
160 GB     32 Core   9 TB (FREE)    500 GB
160 GB     32 Core   9 TB (FREE)    1000 GB
256 GB     32 Core   9 TB (FREE)    1000 GB
512 GB     32 Core   9 TB (FREE)    1000 GB
1000 GB    32 Core   9 TB (FREE)    1000 GB




Agility Without Sacrificing Control
High Performance NVMe SSD Disk

Your application's speed and conversion ratio are defined by the speed of your disk I/O. That's why we use the industry's best SSDs.

Cloud Management Platform

Each cloud setup comes with a Cloud Management Platform (CMP), which lets you easily manage your cloud infrastructure.

Auto Scale Up/Down

Spin up VMs based on server load, response time, traffic, RAM, or CPU usage.

Backup/Restore/Disaster Recovery

We can set up your DR at an affordable price, with optional backup & restore.

Dedicated Bandwidth

Each private cloud comes with dedicated bandwidth of at least 50 Mbps, scalable up to 1000 Mbps; bandwidth is charged separately.


Contact Us – SAP HANA

Contact Us : 98866 52578, 63745 17734
Affordable Price @ 55% Offer

We understand our partners' needs and provide good margins for our partners, with discounts.

Reliability and Security

Your ERP data is written to two disks simultaneously for high availability and reliability, and we also maintain two backups at any given point in time.

Indian Datacenter

We host your application in an Indian data center, in line with the central government's data localisation policy.

24/7 Support

You can contact us through our ticketing system 24/7, and we also provide phone support and remote assistance.

High Performance, Else Free

We assure you of the best performance from your servers; if it doesn't meet your expectations, we provide the service for free.

DDoS Protection & Auto Scale

We protect your application against DDoS attacks, and you can easily upgrade or downgrade your server without downtime.

SAP HANA (High-performance ANalytic Appliance) requires more memory (RAM) because it analyzes enormous volumes of data in RAM to deliver faster results. Our SAP HANA servers are built with extremely fast NVMe drives and high-speed RAM.

The recently released SAP S/4HANA is compatible only with the Linux operating system, which has been optimized for performance.

Since SAP HANA processes compressed data in RAM rather than storing and retrieving it from a hard drive like other databases, memory is extremely important.

Your SAP HANA Server “Sizing”

Sizing refers to calculating an SAP system's hardware requirements, such as its physical memory, CPU power, and I/O capacity. Proper sizing ensures that customers purchase hardware in accordance with their business needs, lowering costs and total cost of ownership (TCO).

SAP HANA's three primary KPIs for sizing are:
  • Main memory (RAM) space
  • CPU processing performance
  • Disk size
Main Memory Sizing for SAP HANA:

There are two types of RAM requirements for SAP HANA: static and dynamic.

Static RAM Requirement: The quantity of primary memory utilised to store the table data is referred to as the static RAM requirement. HANA’s memory sizing is determined by the amount of data that is to be stored in memory.

Dynamic RAM Requirement: When new data is loaded or queries are run, additional main memory is needed for objects that are created dynamically. Since SAP advises allocating the same amount of memory for static and dynamic objects, the static RAM is multiplied by two to determine the total RAM.

1. Determine the amount of Uncompressed Data to be loaded in HANA – Data Footprint:

Identify the data that has to be transferred to the SAP HANA database (either through replication or extraction). This must be done at the table level because clients often only choose a portion of the data from their ERP or CRM database.

Since the sizing process is based on uncompressed source data size, it must also be taken into account if the source database employs compression. Database tools can be used to gather the data needed for this stage. A script that supports this operation for several database systems, including DB2 LUW and Oracle, is contained in SAP Note 1514966.

The Source Data Footprint refers to the total size of all the tables in the source database that are currently used to store the necessary data (without DB indexes).

2. HANA’s compression factor

In a side-by-side HANA scenario, the anticipated compression was 1:5. With improved compression techniques, the compression ratio rose to 1:7.

Customers have reported compression factors exceeding 1:50, and even higher figures have been attained. The compression ratio SAP HANA achieves varies depending on how the data is distributed.

3. Calculation of Static RAM Size

The amount of RAM needed to hold the data in the SAP HANA database is referred to as the static RAM size.

Assuming a compression factor of 7:

RAM (Static) = Data Footprint / 7


4. Calculation of Dynamic RAM Size

When new data is loaded or queries are run, dynamic RAM is used to provide the additional main memory needed for objects that are formed dynamically. SAP advises maintaining the same amount of static and dynamic RAM.

RAM (Dynamic) = RAM (Static)


5. Total RAM Size Calculation

RAM (total) = RAM (dynamic) + RAM (static)
= Data Footprint x 2/7


The overall RAM allocation should be rounded up to the next T-shirt size. For instance, if the total amount of RAM is 400 GB, an M T-shirt size should be chosen.
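As a quick sketch, the RAM-sizing rules above can be expressed in a few lines of Python. The 1:7 compression factor and the example footprint value are illustrative assumptions; an actual project should measure compression as described in SAP Note 1514966.

```python
# Rough SAP HANA RAM sizing, following the rules above.
# The 1:7 compression factor is an assumed default for illustration.

def hana_ram_sizing(data_footprint_gb, compression_factor=7):
    """Return (static, dynamic, total) RAM in GB for a given
    uncompressed source data footprint."""
    ram_static = data_footprint_gb / compression_factor  # RAM (Static) = Footprint / 7
    ram_dynamic = ram_static                             # RAM (Dynamic) = RAM (Static)
    ram_total = ram_static + ram_dynamic                 # = Footprint x 2/7
    return ram_static, ram_dynamic, ram_total

# Example: a 1400 GB uncompressed source footprint
static, dynamic, total = hana_ram_sizing(1400)
print(static, dynamic, total)  # 200.0 200.0 400.0 -> round up to the M T-shirt size
```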

Disk Sizing:

Even though SAP HANA is an in-memory database, it still needs disk storage space, for example to preserve database contents if the system shuts down unexpectedly or as a result of a power outage.

Disk sizing can be divided into two categories:

  • Persistence Layer (also called Data Volume)
  • Disk Log (also called Log Volume)

Persistence Layer (Data Volume): Data changes in the database are periodically copied to disk in the Data Volume (Persistence Layer) to ensure a complete copy of the business data on disk.

The capacity of this storage is determined by the total amount of RAM:

Disk (Persistence) = 4 x RAM (Total)


Disk Log (Log Volume): The Log Volume saves log files to guarantee that changes are durable and the database can be restored to its latest committed state after a restart. The Log Volume must be at least as large as the main RAM of the SAP HANA server.

DISK (Log) = 1 x RAM (Total)
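Continuing the illustrative sketch, the two disk-sizing rules translate directly into code (this is a simple estimate, not official sizing tooling):

```python
# Disk sizing from total RAM, per the two rules above:
#   Disk (Persistence) = 4 x RAM (Total)
#   Disk (Log)         = 1 x RAM (Total)

def hana_disk_sizing(ram_total_gb):
    """Return (persistence_gb, log_gb) for a given total RAM size."""
    disk_persistence = 4 * ram_total_gb
    disk_log = 1 * ram_total_gb
    return disk_persistence, disk_log

persistence, log_volume = hana_disk_sizing(400)  # 400 GB total RAM
print(persistence, log_volume)  # 1600 400
```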


Note: You do not need to carry out this disk sizing yourself, since authorised hardware configurations already take these guidelines into account. We mention it here for your reference.

CPU Sizing:

When a large number of users are expected to operate on a small amount of data, CPU sizing must be conducted in addition to memory sizing. Select the T-shirt size that meets both the CPU and RAM requirements.

CPU sizing is user-based: the SAP HANA system must support 300 SAPS for each concurrently active user. Depending on the server model, the servers used for the IBM Systems Solution for SAP HANA support 60 to 65 concurrently active users per CPU.

CPU : 300 SAPS / Active User
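Under the 300-SAPS-per-user rule, required CPU capacity can be estimated as follows (the user count of 200 is an assumed example; per-CPU user capacity varies by server model):

```python
# CPU sizing: the system must deliver 300 SAPS per concurrently
# active user, per the rule above.

SAPS_PER_ACTIVE_USER = 300

def required_saps(active_users):
    """Total SAPS the SAP HANA system must support."""
    return SAPS_PER_ACTIVE_USER * active_users

print(required_saps(200))  # 60000 SAPS for 200 concurrently active users
```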


Multiple servers with the scale out option

Because SAP HANA is scalable, you can connect several physical or cloud servers into a single logical database instance and obtain linear performance improvements as you add more servers to the SAP HANA cluster. A YaST wizard can be used to configure SAP HANA or SAP S/4HANA database server clusters in accordance with best practices, including SAP HANA system replication.

Scale-out refers to combining numerous independent nodes/computers into a single system. Several vendors have now received SAP HANA certification for multi-node scale-out. You simply add another node or server to the system to gain additional performance along with the extra memory.

High Availability

Complete high availability is built into SAP HANA. It provides recovery strategies for faults and software issues, as well as disasters that force the shutdown of an entire data center. High availability refers to the collection of methods, engineering practices, and design principles that together help achieve business continuity.

High availability is achieved through fault tolerance and the capacity to quickly resume operations after a system outage with minimal business loss.

SAP HANA supports cold standby hosts, meaning a standby host is kept available in case a failover becomes necessary during production. In a distributed system, some servers are classified as worker hosts while others act as standby hosts. Importantly, you can assign several standby hosts to each group. Alternatively, you can combine several servers to create a separate standby host for each group.

Database processing does not take place on a standby host. All database processes are running on the standby host, but they are idle and do not accept SQL connections.

Disaster Recovery

To achieve optimal efficiency, the SAP HANA database keeps the majority of its data in memory, but it also uses persistent storage as a backup in case something goes wrong.

During normal database operation, data is automatically saved from memory to disk at predetermined savepoints. All data modifications are also recorded in the log, which is written from memory to SSD immediately after each committed database transaction. After a power outage, the database can be restarted in the same manner as a disk-based database: it restores its last consistent state by replaying the log since the previous savepoint.

Savepoints and log writing protect your data against power outages, but they do not help if the persistent storage itself is damaged. Backups are necessary to guard against data loss due to disk failures. Backups store the contents of the data and log areas in other locations. They are made while the database is running, so users can carry on as usual, and they have only a very small effect on system performance.

If the SAP HANA system detects a failover situation, the work of the services on the failed server is transferred to the services running on the standby host. According to the system's failover strategy, the failed volume and all associated tables are reassigned and loaded into memory. Because the servers' entire persistence resides on shared storage, this reassignment can be done without moving any data: every server has access to the same disks, where data and logs are kept.

The system waits a short while to see if the service can be restarted before performing a failover. The status is “Waiting” while this is happening. This process could take a minute. The entire failover detection and loading procedure could take several minutes.
