Beta: This is a preview and may contain discrepancies; data and visuals are still stabilizing. Please check back soon for the 1.0 release.

Quantum computing benchmarks

Explore the latest metriq-gym results. Switch between an interactive Graph for trends and a Table for the raw numbers.

FAQ

What if a result looks wrong?
Please create an issue in the metriq-data GitHub repository.
Can I add benchmark results?
Yes, everyone is encouraged to contribute. Please use the upload command from the metriq-gym CLI.
How was the baseline device chosen?
The device ibm_torino from IBM Quantum Cloud serves as a consistent reference point for the derived “score” scale: a score of 100 means “on par with the baseline.” It was chosen because it is a widely run, stable platform with good coverage across benchmarks. This choice does not bias the data: raw benchmark metrics are unchanged, and the baseline is only an anchor for normalization and visual reference (it can be changed in the dataset configuration). Picking a different baseline would simply rescale the scores; the relative comparisons between platforms would remain the same.
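To make the anchoring idea concrete, here is a minimal sketch of baseline-anchored normalization. The function name, the raw metric values, and the device names other than ibm_torino are illustrative assumptions, not the actual Metriq implementation or real data.

```python
def normalize(raw_metrics: dict[str, float], baseline: str) -> dict[str, float]:
    """Rescale raw metric values so the baseline device scores exactly 100.

    Hypothetical illustration: higher raw values are assumed to be better,
    and scores are simple ratios against the baseline's raw value.
    """
    anchor = raw_metrics[baseline]
    return {device: 100.0 * value / anchor for device, value in raw_metrics.items()}

# Illustrative raw metric values (not real benchmark data).
raw = {"ibm_torino": 0.84, "device_a": 0.92, "device_b": 0.63}

scores = normalize(raw, baseline="ibm_torino")
# The baseline lands at exactly 100; other devices score relative to it.
# Swapping the baseline rescales every score by the same factor, so the
# ratio between any two devices' scores is unchanged.
```

This is why the baseline choice only sets the scale: under any anchor, the score of device A divided by the score of device B equals the ratio of their raw metrics.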
What are Metriq's "classical" inspirations?
Metriq draws inspiration from established “classical” benchmarking efforts like Geekbench for CPU/GPU, and MLCommons and Epoch AI for AI/ML models, adapted to the needs and constraints of quantum hardware and workflows.

Citation & license

If you use Metriq data in a publication, please cite our article (link coming soon).

Data license

Except where otherwise noted, benchmark data is released under the Creative Commons Attribution 4.0 International license (CC BY 4.0). You are free to share and adapt the data, provided you give appropriate credit and indicate if changes were made.