Reference Deployment Architectures

These are not fixed packages; they are starting points that we tune to mission, performance horizon, and governance requirements.
| Class | Example Node Count | Compute Fabric | Data & Storage Fabric | Data Control & Mobility | Operational / Commercial Model |
| --- | --- | --- | --- | --- | --- |
| APEX SENTINEL (0.3–3 MW) | 4–32 GPU Nodes | Hitachi iQ M-Series modular clusters | VSP One Block + HCSF (File) + HCP (Object) | Hammerspace optional (only if multi-location sync is required) | EverFlex Pay-Per-Use or CAPEX; Blaq Cat Managed or Autonomous |
| APEX VANGUARD (4–10 MW) | 16–64 GPU Nodes | NVIDIA HGX / DGX BasePOD (county-scale cores) | VSP One High-Availability Block + Object, 100% uptime guarantee | Hammerspace Global Namespace enforcing County Boundary Data Sovereignty | EverFlex Hybrid + County Investment / Regeneration Model |
| APEX LEGION (12–50 MW) | 64–256 GPU Nodes | DGX BasePOD + NVLink SuperPOD fabrics | VSP One + HCP + Active/Active metro replication | Hammerspace + GPUDirect Storage pipelines | EverFlex + RC+ Regional Private Cloud Financing |
| APEX FORTRESS (52–100 MW) | 256–1,024 GPU Nodes | National-Scale HGX / Custom Fabric Architectures | VSP 5600 / 5600H, 8–9-nines availability resilience envelope | Hammerspace Ultra-Wide Data Mesh (Nationwide Data Residency Enforcement) | Multi-Department or National Strategic Compute Frameworks |
| APEX DOMINION (100 MW+) | 1,024+ GPU Nodes (Strategic AI Power Regions) | Custom AI/HPC Design Authority | Multi-region replicated VSP One sovereign data plane | Hammerspace Global Data Sphere with jurisdictional isolation and cryptographic enclaves | Joint Command Governance / Strategic Alliance Treaties |
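Because the classes are starting points rather than fixed SKUs, the table can also be treated as data in a first-pass sizing exercise. The Python sketch below is purely illustrative: the `ReferenceClass` type, the `REFERENCE_CLASSES` catalogue, and the `starting_point()` selector are hypothetical names rather than product APIs, and the only facts in it are the power and node-count envelopes copied from the table above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ReferenceClass:
    """One row of the reference deployment architecture table (illustrative)."""
    name: str
    power_mw_min: float
    power_mw_max: Optional[float]   # None = open-ended envelope (100 MW+)
    gpu_nodes_min: int
    gpu_nodes_max: Optional[int]    # None = open-ended envelope (1,024+)

# Envelopes taken from the table above; these are published starting points,
# not hard sizing rules.
REFERENCE_CLASSES = [
    ReferenceClass("APEX SENTINEL", 0.3, 3, 4, 32),
    ReferenceClass("APEX VANGUARD", 4, 10, 16, 64),
    ReferenceClass("APEX LEGION", 12, 50, 64, 256),
    ReferenceClass("APEX FORTRESS", 52, 100, 256, 1024),
    ReferenceClass("APEX DOMINION", 100, None, 1024, None),
]

def starting_point(power_mw: float) -> ReferenceClass:
    """Return the smallest reference class whose power envelope covers the
    requested critical IT load; falls through to the open-ended top tier."""
    for cls in REFERENCE_CLASSES:
        if cls.power_mw_max is not None and power_mw <= cls.power_mw_max:
            return cls
    return REFERENCE_CLASSES[-1]

if __name__ == "__main__":
    # Example: a 7 MW requirement lands in the APEX VANGUARD envelope.
    print(starting_point(7).name)
```

Selecting by power envelope alone is a deliberate simplification; in practice the node count, data-sovereignty scope, and commercial model from the other columns would feed the same tuning conversation.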