How to measure VPS performance: CPU, NVMe, network
A minimal test suite for comparing providers.
Performance testing starts with the CPU. Use sysbench or Geekbench, and run the tests at different times of day to see stability rather than a single peak number. Look at the dips and the variance: large swings often indicate noisy neighbors.
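A minimal sketch of one such run, assuming sysbench is installed and that its "events per second" line is the figure you want to track (the output format can vary slightly between sysbench versions):

    import re
    import subprocess

    def sysbench_cpu_events_per_sec(threads: int = 1, seconds: int = 30) -> float:
        """Run one sysbench CPU test and return the reported events/sec."""
        out = subprocess.run(
            ["sysbench", "cpu", f"--threads={threads}", f"--time={seconds}", "run"],
            capture_output=True, text=True, check=True,
        ).stdout
        # sysbench prints a line like "events per second:  1234.56"
        match = re.search(r"events per second:\s*([\d.]+)", out)
        if not match:
            raise RuntimeError("could not find events/sec in sysbench output")
        return float(match.group(1))

    if __name__ == "__main__":
        print(sysbench_cpu_events_per_sec())

Calling this from a cron job or a simple loop spread over the day gives you a series of values to inspect for dips instead of one flattering number.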
Test storage with fio. Measure not only sequential writes but also the random operations that matter for databases. NVMe usually provides a strong boost, but verify the real IOPS and latency rather than trusting the plan description. For RAM, STREAM or mbw helps estimate memory bandwidth.
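As an illustration, a 4k random-read fio job driven from Python, assuming fio is installed; the JSON field names follow fio's --output-format=json report, but check them against your fio version:

    import json
    import subprocess

    def fio_randread_4k(target_file: str = "/tmp/fio.test", runtime_s: int = 60) -> dict:
        """Run a 4k random-read fio job and return IOPS and mean latency (µs)."""
        out = subprocess.run(
            ["fio", "--name=randread", "--filename=" + target_file,
             "--rw=randread", "--bs=4k", "--iodepth=32", "--ioengine=libaio",
             "--direct=1", "--size=4G", f"--runtime={runtime_s}", "--time_based",
             "--output-format=json"],
            capture_output=True, text=True, check=True,
        ).stdout
        job = json.loads(out)["jobs"][0]["read"]
        return {
            "iops": job["iops"],
            "lat_mean_us": job["lat_ns"]["mean"] / 1000,  # fio reports latency in ns
        }

    if __name__ == "__main__":
        print(fio_randread_4k())

A matching randwrite job with the same block size and queue depth gives the other half of the picture for database-style workloads.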
Test network with iperf in both directions and from multiple regions. Check throughput limits and how the network behaves under load. Measure packet loss and jitter, especially for media and voice services.
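A sketch using iperf3 (the JSON-capable version of iperf) against a server you run yourself; server.example.com is a placeholder, and the JSON field names come from iperf3's report and may shift between versions:

    import json
    import subprocess

    SERVER = "server.example.com"  # hypothetical: your own "iperf3 -s" endpoint

    def iperf3_tcp_gbits(reverse: bool = False) -> float:
        """TCP throughput in Gbit/s; reverse=True measures server-to-client."""
        cmd = ["iperf3", "-c", SERVER, "-t", "30", "-J"]
        if reverse:
            cmd.append("-R")
        data = json.loads(subprocess.run(cmd, capture_output=True, text=True, check=True).stdout)
        return data["end"]["sum_received"]["bits_per_second"] / 1e9

    def iperf3_udp_jitter_loss() -> dict:
        """UDP run to read jitter (ms) and packet loss (%) from the report."""
        cmd = ["iperf3", "-c", SERVER, "-u", "-b", "100M", "-t", "30", "-J"]
        data = json.loads(subprocess.run(cmd, capture_output=True, text=True, check=True).stdout)
        summary = data["end"]["sum"]
        return {"jitter_ms": summary["jitter_ms"], "lost_percent": summary["lost_percent"]}

    if __name__ == "__main__":
        print("upload Gbit/s:", iperf3_tcp_gbits())
        print("download Gbit/s:", iperf3_tcp_gbits(reverse=True))
        print("udp:", iperf3_udp_jitter_loss())

Running the same script from servers in several regions shows how much the numbers depend on the path rather than on the provider itself.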
Collect results in a table and compare them with price. Tests must be repeatable with the same settings and duration. Record kernel version, virtualization type and plan parameters. This approach helps you choose based on facts, not promises.
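One way to keep the comparison honest is to append every run to the same CSV together with the plan price and the environment details just mentioned. A minimal sketch; the column names are only a suggestion:

    import csv
    import platform
    import subprocess
    from datetime import datetime, timezone
    from pathlib import Path

    RESULTS = Path("vps-results.csv")
    FIELDS = ["timestamp", "provider", "plan", "price_usd_month", "kernel",
              "virtualization", "cpu_events_s", "rand_read_iops", "net_gbits"]

    def append_result(row: dict) -> None:
        """Append one benchmark row, creating the CSV with a header if needed."""
        row = dict(row,
                   timestamp=datetime.now(timezone.utc).isoformat(),
                   kernel=platform.release(),
                   # systemd-detect-virt reports kvm, xen, lxc, etc. on most distros
                   virtualization=subprocess.run(["systemd-detect-virt"],
                                                 capture_output=True, text=True).stdout.strip())
        new_file = not RESULTS.exists()
        with RESULTS.open("a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if new_file:
                writer.writeheader()
            # the caller supplies provider, plan, price and the measured values
            writer.writerow(row)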
Benchmarks must be repeated. Run tests at different times of day and compare median values, not single spikes. Account for cache warm-up, for example by discarding the first runs. This gives a realistic picture instead of random luck.
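For example, a small helper that drops the first warm-up runs and reports the median and spread across the rest (the warm-up count of two is arbitrary and the sample numbers are hypothetical):

    import statistics

    def summarize(runs: list[float], warmup: int = 2) -> dict:
        """Median and spread of repeated runs, ignoring the first warm-up runs."""
        steady = runs[warmup:] if len(runs) > warmup else runs
        return {
            "median": statistics.median(steady),
            "stdev": statistics.pstdev(steady),
            "min": min(steady),
            "max": max(steady),
        }

    # e.g. events/sec from sysbench runs spread across the day
    print(summarize([980.0, 1010.0, 1045.0, 1032.0, 870.0, 1041.0]))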
Use both synthetic and real workload tests. Beyond fio and iperf, load your actual application and measure response time, queue depth and error rates. Compare the results with cost to estimate price per unit of performance.
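Price per performance is then simple arithmetic; the plan names and numbers below are placeholders, not measurements:

    def price_per_performance(price_usd_month: float, median_rps: float) -> float:
        """Monthly cost per request/second of sustained application throughput."""
        return price_usd_month / median_rps

    # hypothetical plans: (price per month, median requests/sec under your real workload)
    plans = {"provider-a": (12.0, 950.0), "provider-b": (18.0, 1700.0)}
    for name, (price, rps) in plans.items():
        print(name, round(price_per_performance(price, rps), 4), "USD per req/s")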
Watch CPU steal time, network jitter and clock stability. These signals show how much noisy neighbors affect performance.
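Steal time can be sampled directly from /proc/stat on Linux: the eighth value after the cpu label is the cumulative steal counter. A rough sketch:

    import time

    def read_cpu_times() -> tuple[int, int]:
        """Return (steal, total) jiffies from the aggregate cpu line in /proc/stat."""
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        return fields[7], sum(fields)  # field 8 is steal

    def steal_percent(interval_s: float = 5.0) -> float:
        """Steal time as a percentage of total CPU time over a sampling interval."""
        steal1, total1 = read_cpu_times()
        time.sleep(interval_s)
        steal2, total2 = read_cpu_times()
        return 100.0 * (steal2 - steal1) / max(total2 - total1, 1)

    if __name__ == "__main__":
        print(f"steal: {steal_percent():.2f}%")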
Document your benchmark configuration: OS version, kernel settings, block size and disk parameters. Without that, comparisons between providers are unreliable. Repeatability matters more than absolute numbers.
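A small manifest written next to each result file makes that configuration explicit; the exact fields are up to you, this is just one possible layout:

    import json
    import platform
    import subprocess
    from pathlib import Path

    def write_manifest(path: str = "benchmark-manifest.json") -> None:
        """Record the environment and the fixed test parameters used for this run."""
        manifest = {
            "os": platform.platform(),
            "kernel": platform.release(),
            # a couple of kernel settings that influence results (extend as needed)
            "sysctl": {
                key: subprocess.run(["sysctl", "-n", key],
                                    capture_output=True, text=True).stdout.strip()
                for key in ("vm.swappiness", "net.core.rmem_max")
            },
            "fio": {"bs": "4k", "iodepth": 32, "size": "4G", "runtime_s": 60},
        }
        Path(path).write_text(json.dumps(manifest, indent=2))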
Use the same test duration and dataset size each time. Short tests can hide throttling and thermal limits, while longer runs expose sustained performance. Record resource usage during tests to spot bottlenecks. Consistency makes comparisons meaningful.
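Fixing the parameters in one place and capturing resource usage while a test runs could look like this; vmstat is assumed to be available, and the benchmark command is a placeholder for whatever you actually run:

    import subprocess

    RUNTIME_S = 60        # same duration for every provider
    DATASET_SIZE = "4G"   # same dataset size for every provider

    def run_with_vmstat(benchmark_cmd: list[str], log_path: str = "vmstat.log") -> None:
        """Run a benchmark while logging vmstat once per second to spot bottlenecks."""
        with open(log_path, "w") as log:
            monitor = subprocess.Popen(["vmstat", "1"], stdout=log)
            try:
                subprocess.run(benchmark_cmd, check=True)
            finally:
                monitor.terminate()
                monitor.wait()

    # placeholder command; substitute the fio or sysbench invocation you actually use
    run_with_vmstat(["sysbench", "cpu", f"--time={RUNTIME_S}", "run"])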
Keep raw logs of test results and share them with the team. Trend analysis is more valuable than a single snapshot. Store benchmark scripts in a repo so the team can rerun the same method.