We provide SAP-certified HPE servers: HPE ProLiant DL380 Gen10 Plus and HPE ProLiant DL385 Gen10 Plus, both certified for SAP applications. Refer: https://techlibrary.hpe.com/us/en/enterprise/servers/supportmatrix/hplinuxcert-sap.aspx
| RAM | CPU | Disk |
| --- | --- | --- |
| 32 GB | 8 Core | 6 TB |
| 64 GB | 16 Core | 7 TB |
| 128 GB | 32 Core | 8 TB |
| 160 GB | 32 Core | 9 TB |
| 256 GB | 32 Core | 9 TB |
| 512 GB | 32 Core | 9 TB |
| 1000 GB | 32 Core | 9 TB |
Your application’s speed and conversion ratio depend on the speed of your disk I/O. That’s why we use the industry’s best SSDs.
Each cloud setup includes a Cloud Management Platform (CMP) through which you can easily manage your cloud infrastructure.
Spin up VMs based on server load, response time, traffic, RAM, or CPU usage.
We can set up your disaster recovery (DR) at an affordable price, with optional backup and restore.
Each private cloud comes with dedicated bandwidth of at least 50 Mbps, scalable up to 1000 Mbps (charged separately).
We understand our partners’ needs and offer them good margins and discounts.
Your ERP data is written to two disks simultaneously for high availability and reliability, and we maintain two backups at any given point in time.
We host your application in an Indian data center, in line with the central government’s data-localisation policy.
You can contact us through our ticketing system 24/7, and we also provide phone support and remote assistance.
We assure you the best performance from your servers; if they don’t meet your expectations, we provide the service free of charge.
We protect your application against DDoS attacks, and you can easily upgrade or downgrade your server without downtime.
SAP HANA (High-performance ANalytic Appliance) requires ample memory (RAM) to analyze enormous volumes of data in memory and deliver faster results. Our SAP HANA servers are built with extremely fast NVMe drives and high-speed RAM.
The recently released SAP S/4HANA runs only on the Linux operating system, which has been optimized for performance.
Since SAP HANA processes compressed data in RAM rather than storing and retrieving it from a hard drive as other databases do, memory is extremely important.
Sizing is the process of calculating an SAP system’s hardware requirements, such as physical memory, CPU power, and I/O capacity. Proper sizing ensures that customers purchase hardware that matches their business needs, reducing costs and TCO. SAP HANA’s three primary sizing KPIs are:
- Main memory (RAM) space
- CPU processing performance
- Disk size
There are two types of RAM requirements for SAP HANA: static and dynamic.
Static RAM Requirement: The quantity of primary memory utilised to store the table data is referred to as the static RAM requirement. HANA’s memory sizing is determined by the amount of data that is to be stored in memory.
Dynamic RAM Requirement: When new data is loaded or queries are run, additional main memory is needed for objects that are created dynamically. Since SAP advises allocating the same amount of memory for static and dynamic objects, the static RAM is multiplied by two to determine the total RAM.
Identify the data that has to be transferred to the SAP HANA database (either through replication or extraction). This must be done at the table level because clients often only choose a portion of the data from their ERP or CRM database.
Since the sizing process is based on uncompressed source data size, it must also be taken into account if the source database employs compression. Database tools can be used to gather the data needed for this stage. A script that supports this operation for several database systems, including DB2 LUW and Oracle, is contained in SAP Note 1514966.
The Source Data Footprint refers to the total size of all the tables in the source database that are currently used to store the necessary data (without DB indexes).
The anticipated compression in a side-by-side HANA scenario was 1:5; with improved compression techniques, the ratio rose to 1:7.
Customers have reported compression factors exceeding 1:50, and even higher figures exist. The compression ratio SAP HANA actually achieves depends on how the data is distributed.
The amount of RAM needed to hold the data in the SAP HANA database is referred to as the static RAM size.
Assuming a compression factor of 7:
RAM (Static) = Data Footprint / 7
When new data is loaded or queries are run, dynamic RAM is used to provide the additional main memory needed for objects that are formed dynamically. SAP advises maintaining the same amount of static and dynamic RAM.
RAM (Dynamic) = RAM (Static)
RAM (total) = RAM (dynamic) + RAM (static)
= Data Footprint x 2/7
The total RAM figure should be rounded up to the next T-shirt size. For instance, if the total RAM is 400 GB, T-shirt size M should be chosen.
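The RAM-sizing rules above can be sketched in a few lines of Python. The formulas follow the document; the T-shirt GB thresholds are illustrative assumptions chosen so that the 400 GB example maps to size M, not official SAP tiers.

```python
def ram_sizing(source_footprint_gb, compression=7):
    """Estimate static, dynamic, and total RAM (GB) from the
    uncompressed source data footprint, per the rules above."""
    static = source_footprint_gb / compression   # RAM (Static) = Footprint / 7
    dynamic = static                             # RAM (Dynamic) = RAM (Static)
    return static, dynamic, static + dynamic    # Total = Footprint x 2 / 7

def t_shirt_size(total_ram_gb):
    """Round total RAM up to the next T-shirt size.
    The GB upper bounds here are illustrative assumptions."""
    tiers = [(128, "S"), (512, "M"), (1024, "L"), (2048, "XL")]
    for upper_bound_gb, size in tiers:
        if total_ram_gb <= upper_bound_gb:
            return size
    return "scale-out required"

static, dynamic, total = ram_sizing(1400)  # 1400 GB uncompressed source data
print(static, dynamic, total)              # 200.0 200.0 400.0
print(t_shirt_size(total))                 # M, matching the 400 GB example
```

For a 1400 GB source footprint, static RAM is 200 GB, dynamic RAM another 200 GB, and the 400 GB total rounds up to size M.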
Even though SAP HANA is an in-memory database, it still needs disk storage space, for example to preserve database information if the system shuts down unintentionally or as a result of a power outage.
Disk sizing can be divided into two categories:
- Persistence Layer (also called Data Volume)
- Disk Log (also called Log Volume)
Persistence Layer (Data Volume): Data changes in the database are periodically copied to disk in the Data Volume (Persistence Layer) to ensure a complete copy of the business data exists on disk.
Based on the overall amount of RAM, the capacity for this storage is determined:
Disk (Persistence) = 4 x RAM (Total)
Disk Log (Log Volume): The Log Volume stores log files to guarantee that changes are durable and the database can be restored to the last committed state after a restart. The Log Volume must be at least as large as the main RAM of the SAP HANA server.
DISK (Log) = 1 x RAM (Total)
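The two disk-sizing rules above can be sketched as a small helper; values are in GB, and the 4x / 1x multipliers come directly from the formulas in the text.

```python
def disk_sizing(total_ram_gb):
    """Derive disk sizes (GB) from total RAM per the rules above."""
    persistence = 4 * total_ram_gb  # Disk (Persistence) = 4 x RAM (Total)
    log = 1 * total_ram_gb          # Disk (Log) = 1 x RAM (Total)
    return persistence, log

# For the 400 GB total-RAM example used earlier:
print(disk_sizing(400))  # (1600, 400): 1.6 TB persistence, 400 GB log
```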
Note: You do not need to carry out this disk sizing yourself, since the authorised hardware configurations already take these guidelines into account. We still mention it here for your reference.
When a large number of users are expected to operate on a small amount of data, CPU sizing must be conducted in addition to memory sizing. Select the T-shirt size that meets both the CPU and RAM requirements.
CPU sizing is user-based: the SAP HANA system must support 300 SAPS for each concurrently active user. Depending on the server model, the servers used for the IBM Systems Solution for SAP HANA support 60 to 65 concurrently active users per CPU.
CPU: 300 SAPS / active user
Because SAP HANA is scalable, you can combine several physical or cloud servers into a single logical database instance and obtain linear performance improvements as you add more servers to the SAP HANA cluster. A YaST wizard can be used to configure SAP HANA or SAP S/4HANA database-server clusters according to best practices, including SAP HANA system replication.
Scale-out refers to the integration of numerous independent nodes/computers into a single system. Several suppliers have already received SAP HANA certification for multi-node scale-out. You simply add another node or server to the system to get a near-linear increase in performance along with the extra memory.
Complete high availability is built into SAP HANA. It provides recovery strategies for faults and software issues, as well as for disasters that force the shutdown of an entire data center. High availability is the collective term for the methods, engineering practices, and design principles that help achieve business continuity.
High availability is achieved through fault tolerance and the ability to quickly resume operations after a system outage with minimal business loss.
SAP HANA supports cold standby hosts, meaning a standby host is kept available in case a failover is needed in production. In a distributed system, some servers are classified as worker hosts while others act as standby hosts. Importantly, you can assign several standby hosts to each group; alternatively, you can combine several servers to create a separate standby host for each group.
Database processing does not take place on a standby host. All database processes run on the standby host, but they are idle and do not accept SQL connections.
To achieve optimal efficiency, the SAP HANA database keeps the majority of its data in memory, but it also uses persistent storage as a backup in case something goes wrong.
Data is automatically saved from memory to disk at regular savepoints during normal database operation. All data changes are also recorded in the log, which is written from memory to SSD immediately after each committed database transaction. After a power outage, the database can be restarted in the same manner as a disk-based database: it restores its last consistent state by replaying the log since the last savepoint.
Savepoints and log writing protect your data against power outages, but they cannot help if the persistent storage itself is damaged. Backups are necessary to guard against data loss due to disk failures. Backups store the contents of the data and log areas in separate locations, and they are made while the database is running so that users can carry on as usual. Backups have only a very small effect on system performance.
If the SAP HANA system detects a failover situation, the work of the services on the failed server is transferred to the services running on the standby host. According to the system’s failover strategy, the failed volume and all associated tables are reassigned and loaded into memory. Because the servers’ entire persistence is kept on shared disks, this reassignment requires no data movement: through shared storage, every server has access to the same disks where data and logs are kept.
Before performing a failover, the system waits briefly to see whether the service can be restarted; during this period the status is “Waiting”. This can take about a minute, and the entire failover detection and loading procedure can take several minutes.