How Does Datrium Scale?

Datrium's Split Provisioning keeps your active data in flash and inactive data in durable storage, so you only pay for the compute and storage you need.

  • All Flash Compute Nodes: for data performance
  • Datrium Data Nodes: for scalable data availability

Datrium feeds input/output (IO) processing from durable storage using two components: All Flash compute nodes and Datrium data nodes. Datrium sources data from the optimal location for your app, all with Blanket Encryption.

  • Mixed workloads on a single cluster
  • High performance scales linearly
  • Zero storage management
  • Split provisioning
  • 5-10X better app performance
  • 10X faster recovery time

What Is Blanket Encryption?

Blanket Encryption is a software-based end-to-end solution that protects data in use at the host, in flight across the network, and at rest on persistent storage.
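As a minimal sketch of the end-to-end idea (illustrative only, assuming the Python 'cryptography' package; this is not Datrium's implementation): data encrypted once at the host is the same ciphertext that crosses the network and lands on persistent storage.

```python
# Illustrative "encrypt once at the host" pipeline, not Datrium's
# implementation. Requires the 'cryptography' package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, keys come from a key manager
cipher = Fernet(key)

def write_block(plaintext: bytes) -> bytes:
    """Encrypt at the host; the ciphertext is what travels the
    network (in flight) and what is stored (at rest)."""
    return cipher.encrypt(plaintext)

def read_block(ciphertext: bytes) -> bytes:
    """Decrypt only at a host authorized to use the data."""
    return cipher.decrypt(ciphertext)

blob = write_block(b"guest VM data")
assert read_block(blob) == b"guest VM data"
```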

Add compute nodes to scale application processing or data nodes to scale capacity, independent of each other.


HCI Doesn’t Scale


Hyper-converged infrastructure (HCI) tries to deal with scaling by moving storage functionality into the server (the HCI node). While this addresses the array controller bottleneck, capacity cannot scale independently of performance.

To add capacity, you need to add another HCI node and all the server software licenses associated with it. If a server is down, storage is down, reducing data redundancy. And if a data rebuild is triggered, it clogs the network.

Mirroring data increases network traffic to each of the mirror nodes, and mirrored writes create noisy-neighbor effects from east-west network traffic between the nodes. The result? Sub-par network performance at scale.
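A back-of-the-envelope comparison makes the traffic claim concrete. The replication factors below (3-way mirroring versus an 8+2 erasure code) are assumptions for illustration, not vendor figures:

```python
# Rough east-west write-traffic comparison: N-way mirroring vs. a
# k+m erasure code. Factors chosen for illustration only.
def mirror_traffic(write_gb: float, copies: int = 3) -> float:
    """Each write lands locally, then is shipped whole to every
    other mirror node; only the extra copies cross the network."""
    return write_gb * (copies - 1)

def erasure_traffic(write_gb: float, k: int = 8, m: int = 2) -> float:
    """With host-side k+m erasure coding to separate data nodes,
    (k+m)/k of the written data crosses the network in total."""
    return write_gb * (k + m) / k

print(mirror_traffic(100))   # 200.0 GB shipped to mirror nodes
print(erasure_traffic(100))  # 125.0 GB shipped, 25% overhead
```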

Datrium Won’t Compromise Performance

Scalability is built into the Datrium + Equus architecture. Compute nodes, with variable amounts of CPU/RAM/flash and different hypervisors, sit at the top of the architecture. They host the guest VMs and the DVX software. All CPU-intensive operations (fingerprinting, encryption, compression, and erasure coding) are performed on these nodes, where modern, cost-effective CPU horsepower is abundant.
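A minimal sketch of such a host-side write pipeline, assuming Python's hashlib and zlib plus the 'cryptography' package; the single XOR parity fragment stands in for a real k+m erasure code, and none of this is Datrium's actual DVX code:

```python
# Illustrative host-side write pipeline: fingerprint, compress,
# encrypt, then erasure-code. XOR parity is a stand-in for a real
# erasure code; this is a concept sketch, not DVX.
import hashlib
import zlib
from functools import reduce

from cryptography.fernet import Fernet

cipher = Fernet(Fernet.generate_key())  # stand-in for a managed key

def process_write(block: bytes, k: int = 4) -> dict:
    fingerprint = hashlib.sha256(block).hexdigest()  # dedupe key
    payload = cipher.encrypt(zlib.compress(block))   # compress, then encrypt
    size = -(-len(payload) // k)                     # ceil(len/k)
    frags = [payload[i * size:(i + 1) * size].ljust(size, b"\0")
             for i in range(k)]                      # k equal data fragments
    parity = bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*frags))
    return {"fingerprint": fingerprint, "fragments": frags, "parity": parity}

out = process_write(b"guest VM write" * 100)
print(out["fingerprint"][:16], [len(f) for f in out["fragments"]])
```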

With SSD/NVMe cache, all VM reads are satisfied locally. When VMs move, DVX reads data from the remote compute node that previously hosted the VM until the local cache has been warmed up. Replication is server-powered, with data traveling between compute nodes (flash to flash) at the source and destination.
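The read path can be pictured like this (a simplified sketch with plain dictionaries standing in for flash caches and durable storage; the names are illustrative assumptions, not Datrium's implementation):

```python
# Simplified DVX-style read path, illustrative only.
def read_block(block_id, local_cache, remote_cache, data_node):
    if block_id in local_cache:           # steady state: local flash hit
        return local_cache[block_id]
    if remote_cache is not None and block_id in remote_cache:
        data = remote_cache[block_id]     # VM just moved: flash-to-flash read
    else:
        data = data_node[block_id]        # cold miss: durable storage
    local_cache[block_id] = data          # warm the local cache
    return data

data_node = {"b1": b"cold", "b2": b"warm"}
previous_host = {"b2": b"warm"}           # cache on the VM's previous host
local = {}

read_block("b2", local, previous_host, data_node)  # served flash-to-flash
assert "b2" in local                               # next read is local
```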

Data nodes contain dual active/standby controllers with redundant NVRAM, network interfaces, and a pool of 12 disk drives per node. Beginning with one data node, more capacity and write performance can be added incrementally. This is the second dimension of scaling, where split provisioning adds enormous value.
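To illustrate the two independent scaling dimensions with rough arithmetic (the per-node figures below are invented placeholders, not Datrium specs):

```python
# Back-of-the-envelope split-provisioning sizing. Node capacities
# are invented placeholders, not Datrium product specs.
import math

VCPUS_PER_COMPUTE_NODE = 48   # assumed compute-node size
TB_PER_DATA_NODE = 100        # assumed usable capacity per data node

def nodes_needed(vcpu_demand: int, capacity_tb: int) -> tuple:
    compute = math.ceil(vcpu_demand / VCPUS_PER_COMPUTE_NODE)
    data = math.ceil(capacity_tb / TB_PER_DATA_NODE)
    return compute, data

# Doubling capacity demand adds data nodes without touching compute:
print(nodes_needed(vcpu_demand=300, capacity_tb=250))  # (7, 3)
print(nodes_needed(vcpu_demand=300, capacity_tb=500))  # (7, 5)
```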


Scale-Out Backup: Simple, Granular, and Agile Data Management


Datrium Scale-Out Backup eliminates a variety of challenges that prevent customers from taking greater advantage of copy data management to derive business value. By combining fine granularity with dynamic policy-based management, Scale-Out Backup empowers customers to create and use point-in-time copies of data in the most efficient, fast, and effective manner, as the sketch after the list below illustrates.

  1. Snapshot a specific set of virtual disks versus all virtual disks, which saves space.
  2. Restore a specific virtual disk from a snapshot to a live VM, which improves RTO and reduces admin time.
  3. Clone a specific virtual disk from a snapshot and attach it to a live VM for dev/test or analytics/reporting use cases, improving time spent on software development versus infrastructure.
  4. Mass clone an OS virtual disk for fast & efficient provisioning of VMs.
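A hypothetical sketch of what this per-virtual-disk granularity could look like in code; the SnapshotStore class and its method names are invented for illustration and are not Datrium's actual API:

```python
# Hypothetical per-virtual-disk copy-data API, illustrative only.
import copy
import time

class SnapshotStore:
    def __init__(self):
        self.snaps = {}

    def snapshot(self, vm, disks):
        """1. Snapshot only the named virtual disks, not the whole VM."""
        snap_id = f"{vm['name']}-{int(time.time())}"
        self.snaps[snap_id] = {d: copy.deepcopy(vm["disks"][d]) for d in disks}
        return snap_id

    def restore_disk(self, snap_id, disk, vm):
        """2. Restore a single virtual disk into a live VM."""
        vm["disks"][disk] = copy.deepcopy(self.snaps[snap_id][disk])

    def clone_disk(self, snap_id, disk, vm, as_name):
        """3./4. Attach a clone of one disk for dev/test or provisioning."""
        vm["disks"][as_name] = copy.deepcopy(self.snaps[snap_id][disk])

store = SnapshotStore()
vm = {"name": "db01", "disks": {"os": b"...", "data": b"..."}}
snap = store.snapshot(vm, disks=["data"])        # space-efficient snapshot
store.clone_disk(snap, "data", vm, "data-dev")   # clone for analytics
```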
