The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration—at every scale—to power the world’s highest-performing elastic data centers for AI, data analytics, and high-performance computing (HPC) applications. As the engine of the NVIDIA data center platform, A100 provides up to 20X higher performance over the prior NVIDIA Volta™ generation.

NVIDIA A100 Tensor Core technology supports a wide range of math precisions, providing a single accelerator for every workload. The latest-generation A100 80GB doubles GPU memory and debuts the world's fastest memory bandwidth at 2 terabytes per second (TB/s).

**A100 Cloud GPU Plans (Linux)**

Plan | GPU Memory | vCPUs | Dedicated RAM | SSD Storage | Monthly Billing
---|---|---|---|---|---
A100-LIX-0 | 1 x 10 GB | 4 vCPUs | 30 GB | 250 GB | ₹15,000
A100-LIX-01 | 1 x 40 GB | 16 vCPUs | 115 GB | 1500 GB | ₹71,250
A100-LIX-02 | 2 x 40 GB | 32 vCPUs | 230 GB | 3000 GB | ₹1,42,500
A100-LIX-03 | 4 x 40 GB | 64 vCPUs | 460 GB | 6000 GB | ₹2,85,000
A100-LIX-04 | 8 x 40 GB | 128 vCPUs | 920 GB | 6000 GB | ₹6,65,000
A100-LIX-05 | 1 x 80 GB | 16 vCPUs | 115 GB | 1500 GB | ₹95,000
A100-LIX-06 | 2 x 80 GB | 32 vCPUs | 230 GB | 3000 GB | ₹1,90,000
A100-LIX-07 | 4 x 80 GB | 64 vCPUs | 460 GB | 6000 GB | ₹3,80,000

**A100 Cloud GPU Plans (Windows)**

Plan | GPU Memory | vCPUs | Dedicated RAM | SSD Storage | Monthly Billing
---|---|---|---|---|---
A100-WIN-01 | 1 x 40 GB | 16 vCPUs | 115 GB | 1500 GB | ₹75,671
A100-WIN-02 | 2 x 40 GB | 32 vCPUs | 230 GB | 3000 GB | ₹1,50,085
A100-WIN-03 | 4 x 40 GB | 64 vCPUs | 460 GB | 6000 GB | ₹2,98,913
A100-WIN-04 | 8 x 40 GB | 128 vCPUs | 920 GB | 6000 GB | ₹6,91,569
A100-WIN-05 | 1 x 80 GB | 16 vCPUs | 115 GB | 1500 GB | ₹99,421
A100-WIN-06 | 2 x 80 GB | 32 vCPUs | 230 GB | 3000 GB | ₹1,97,585
A100-WIN-07 | 4 x 80 GB | 64 vCPUs | 460 GB | 6000 GB | ₹3,93,913

**A40 Cloud GPU Plans (Linux)**

Plan | GPU Memory | vCPUs | Dedicated RAM | SSD Storage | Monthly Billing
---|---|---|---|---|---
A40-LIX-01 | 1 x 48 GB | 16 vCPUs | 100 GB | 750 GB | ₹51,775
A40-LIX-02 | 2 x 48 GB | 32 vCPUs | 200 GB | 1500 GB | ₹1,03,550
A40-LIX-03 | 4 x 48 GB | 64 vCPUs | 400 GB | 3000 GB | ₹2,07,100

**A40 Cloud GPU Plans (Windows)**

Plan | GPU Memory | vCPUs | Dedicated RAM | SSD Storage | Monthly Billing
---|---|---|---|---|---
A40-WIN-01 | 1 x 48 GB | 16 vCPUs | 100 GB | 750 GB | ₹56,907
A40-WIN-02 | 2 x 48 GB | 32 vCPUs | 200 GB | 1500 GB | ₹1,12,208
A40-WIN-03 | 4 x 48 GB | 64 vCPUs | 400 GB | 3000 GB | ₹2,22,811

**A30 Cloud GPU Plans (Linux)**

Plan | GPU Memory | vCPUs | Dedicated RAM | SSD Storage | Monthly Billing
---|---|---|---|---|---
A30-LIX-01 | 1 x 24 GB | 16 vCPUs | 90 GB | 640 GB | ₹38,000
A30-LIX-02 | 2 x 24 GB | 32 vCPUs | 180 GB | 1280 GB | ₹76,000
A30-LIX-03 | 4 x 24 GB | 64 vCPUs | 360 GB | 2560 GB | ₹1,52,000

**A30 Cloud GPU Plans (Windows)**

Plan | GPU Memory | vCPUs | Dedicated RAM | SSD Storage | Monthly Billing
---|---|---|---|---|---
A30-WIN-01 | 1 x 24 GB | 16 vCPUs | 90 GB | 640 GB | ₹42,777
A30-WIN-02 | 2 x 24 GB | 32 vCPUs | 180 GB | 1280 GB | ₹83,940
A30-WIN-03 | 4 x 24 GB | 64 vCPUs | 360 GB | 2560 GB | ₹1,66,268

**V100 Cloud GPU Plans (Linux)**

Plan | GPU Memory | vCPUs | Dedicated RAM | SSD Storage | Monthly Billing
---|---|---|---|---|---
V100-LIX-01 | 1 x 32 GB | 8 vCPUs | 120 GB | 900 GB | ₹40,000
V100-LIX-02 | 2 x 32 GB | 16 vCPUs | 240 GB | 1800 GB | ₹60,000
V100-LIX-03 | 4 x 32 GB | 32 vCPUs | 480 GB | 3600 GB | ₹1,44,000

**V100 Cloud GPU Plans (Windows)**

Plan | GPU Memory | vCPUs | Dedicated RAM | SSD Storage | Monthly Billing
---|---|---|---|---|---
V100-WIN-01 | 1 x 32 GB | 8 vCPUs | 120 GB | 900 GB | ₹50,349
V100-WIN-02 | 2 x 32 GB | 16 vCPUs | 240 GB | 1800 GB | ₹75,671
V100-WIN-03 | 4 x 32 GB | 32 vCPUs | 480 GB | 3600 GB | ₹1,78,585

**RTX 8000 Cloud GPU Plans (Linux)**

Plan | GPU Memory | vCPUs | Dedicated RAM | SSD Storage | Monthly Billing
---|---|---|---|---|---
RTX-8000-LIX-01 | 1 x 48 GB (GDDR6) | 16 vCPUs | 115 GB | 900 GB | ₹38,000
RTX-8000-LIX-02 | 2 x 48 GB (GDDR6) | 32 vCPUs | 230 GB | 1800 GB | ₹76,000
RTX-8000-LIX-03 | 4 x 48 GB (GDDR6) | 64 vCPUs | 460 GB | 3600 GB | ₹1,52,000

**RTX 8000 Cloud GPU Plans (Windows)**

Plan | GPU Memory | vCPUs | Dedicated RAM | SSD Storage | Monthly Billing
---|---|---|---|---|---
RTX-8000-WIN-01 | 1 x 48 GB (GDDR6) | 16 vCPUs | 115 GB | 900 GB | ₹42,421
RTX-8000-WIN-02 | 2 x 48 GB (GDDR6) | 32 vCPUs | 230 GB | 1800 GB | ₹83,585
RTX-8000-WIN-03 | 4 x 48 GB (GDDR6) | 64 vCPUs | 460 GB | 3600 GB | ₹1,66,268

**T4 Cloud GPU Plans (Linux)**

Plan | GPU Memory | vCPUs | Dedicated RAM | SSD Storage | Monthly Billing
---|---|---|---|---|---
T4-LIX-01 | 1 x 16 GB | 12 vCPUs | 50 GB | 900 GB | ₹16,625
T4-LIX-02 | 2 x 16 GB | 24 vCPUs | 100 GB | 1800 GB | ₹34,200

**T4 Cloud GPU Plans (Windows)**

Plan | GPU Memory | vCPUs | Dedicated RAM | SSD Storage | Monthly Billing
---|---|---|---|---|---
T4-WIN-01 | 1 x 16 GB | 12 vCPUs | 50 GB | 900 GB | ₹20,255
T4-WIN-02 | 2 x 16 GB | 24 vCPUs | 100 GB | 1800 GB | ₹40,203

Applications & Use Cases

A100 adds a powerful new third-generation Tensor Core that boosts throughput over V100 while adding comprehensive support for DL and HPC data types, together with a new Sparsity feature that delivers a further doubling of throughput.
New TensorFloat-32 (TF32) Tensor Core operations in A100 provide an easy path to accelerate FP32 input/output data in DL frameworks and HPC, running 10x faster than V100 FP32 FMA operations or 20x faster with sparsity. For FP16/FP32 mixed-precision DL, the A100 Tensor Core delivers 2.5x the performance of V100, increasing to 5x with sparsity.
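As a concrete example, PyTorch exposes TF32 through two documented backend flags, so existing FP32 model code can pick up the Tensor Cores without modification. A minimal sketch (the matrix sizes are arbitrary placeholders):

```python
import torch

# Allow TF32 on Ampere-class GPUs (A100/A30/A40) for matrix multiplies
# and cuDNN convolutions. Inputs and outputs stay FP32; the Tensor Cores
# round the mantissa internally for the multiply.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")
c = a @ b  # runs on TF32 Tensor Cores on an A100, with no model changes
```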
New Bfloat16 (BF16)/FP32 mixed-precision Tensor Core operations run at the same rate as FP16/FP32 mixed-precision. Tensor Core acceleration of INT8, INT4, and binary round out support for DL inferencing, with A100 sparse INT8 running 20x faster than V100 INT8. For HPC, the A100 Tensor Core includes new IEEE-compliant FP64 processing that delivers 2.5x the FP64 performance of V100.
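BF16 mixed precision is similarly a small change in frameworks that support it. A minimal PyTorch sketch using autocast (the model and sizes here are placeholders); because BF16 keeps FP32's exponent range, no loss scaling is needed, unlike FP16:

```python
import torch

model = torch.nn.Linear(1024, 1024).cuda()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.randn(64, 1024, device="cuda")

# Inside autocast, matmuls run on BF16 Tensor Cores (same rate as FP16
# on A100) while parameters remain in FP32.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    loss = model(x).square().mean()
loss.backward()
opt.step()
```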
The NVIDIA A100 GPU is architected to not only accelerate large complex workloads, but also efficiently accelerate many smaller workloads. A100 enables building data centers that can accommodate unpredictable workload demand, while providing fine-grained workload provisioning, higher GPU utilization, and improved TCO.
The NVIDIA A100 GPU delivers exceptional speedups over V100 for AI training and inference workloads, and shows similarly substantial performance improvements across a wide range of HPC applications.
Breakthroughs in GPU Architecture
The NVIDIA A100 GPU is a breakthrough in technical design, fueled by five key innovations that together deliver 6x higher training performance and 7x higher inference performance than NVIDIA's previous-generation Volta architecture:
- NVIDIA Ampere Architecture: At the core of the chip is the NVIDIA Ampere GPU architecture, which packs more than 54 billion transistors, making it the world's largest 7-nanometer processor.
- Third-generation Tensor Cores with TF32: The A100 introduces TF32 for AI, which delivers up to 20x the performance of FP32 precision without any code changes.
- Multi-Instance GPU (MIG): MIG allows multiple networks to operate simultaneously on a single A100 GPU for optimal utilization of compute resources. A single A100 can be partitioned into as many as seven isolated GPU instances, delivering right-sized compute for jobs of different sizes while maximizing utilization and return on investment (see the provisioning sketch after this list).
- Third-generation NVIDIA NVLink: Third-generation NVLink doubles the high-speed connectivity between GPUs, enabling efficient performance scaling within a server.
- Structural Sparsity: Structural sparsity is a new technique that harnesses the inherently sparse nature of AI math to double performance.
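
As an illustration of MIG provisioning, the sketch below drives the standard nvidia-smi MIG commands from Python to split an A100 into seven 1g.5gb instances (GPU instance profile 19 on the 40 GB A100). It assumes root privileges, a MIG-capable driver, and an idle GPU:

```python
import subprocess

def run(cmd: str) -> str:
    """Run a shell command and return its output (raises on failure)."""
    return subprocess.run(cmd.split(), check=True,
                          capture_output=True, text=True).stdout

# Enable MIG mode on GPU 0 (requires root; a GPU reset may be needed).
run("nvidia-smi -i 0 -mig 1")

# Create seven 1g.5gb GPU instances (profile 19 on the 40 GB A100)
# and a default compute instance inside each (-C).
run("nvidia-smi mig -i 0 -cgi 19,19,19,19,19,19,19 -C")

# Each MIG device now appears with its own UUID and can be handed to a
# separate container or job, e.g. via CUDA_VISIBLE_DEVICES=<MIG-UUID>.
print(run("nvidia-smi -L"))
```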