Empire AI uses Service Units (SUs) as an internal accounting model for compute and storage. The goal is not only billing but also governance: a shared SU framework lets the consortium compare different hardware tiers on a common scale and prevents well-funded groups from consuming a disproportionate share of the resource.
## Base rates
| Resource | SU rate |
|---|---|
| 1 hour of Alpha H100 GPU time | 1.0 SU |
| 1 hour of Beta B200 / Blackwell GPU time | 2.0 SU |
| 1 hour of Grace CPU node time | 0.5 SU |
| Persistent storage | 8.333 SU per TB per month |
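Storage charges accrue continuously rather than per job. A minimal sketch of the arithmetic, using the rate from the table above (the function name is illustrative):

```python
STORAGE_SU_PER_TB_MONTH = 8.333  # persistent storage rate from the base-rate table

def storage_su(terabytes: float, months: float) -> float:
    """SUs accrued by persistent storage: TB x months x monthly rate."""
    return terabytes * STORAGE_SU_PER_TB_MONTH * months

# 10 TB retained for 12 months:
print(storage_su(10, 12))  # ~999.96 SU
```

Note that at this rate, 1 TB held for a full year costs roughly 100 SU, which makes long retention of large datasets easy to reason about.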
## Cost formula

Empire AI calculates job cost using a simple model: SUs charged = hours used × base SU rate × queue multiplier.

## Queue multipliers
| Queue model | Multiplier |
|---|---|
| Priority queue | 2.0x |
| Shared resource queue | 0.5x |
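The cost model above can be sketched in a few lines. The rates and multipliers come from the two tables; the `"standard"` 1.0x entry is an assumption for jobs with no multiplier listed, and the function name is illustrative:

```python
# SU base rates per hour (from the base-rate table)
BASE_RATES = {
    "alpha_h100": 1.0,  # Alpha H100 GPU hour
    "beta_b200": 2.0,   # Beta B200 / Blackwell GPU hour
    "grace_cpu": 0.5,   # Grace CPU node hour
}

# Queue multipliers (from the queue table)
QUEUE_MULTIPLIERS = {
    "priority": 2.0,
    "shared": 0.5,
    "standard": 1.0,  # assumed default: no multiplier listed
}

def job_su_cost(resource: str, hours: float, queue: str = "standard") -> float:
    """SUs charged for a job: hours x base SU rate x queue multiplier."""
    return hours * BASE_RATES[resource] * QUEUE_MULTIPLIERS[queue]

# 100 hours of B200 time on the priority queue:
print(job_su_cost("beta_b200", 100, "priority"))  # 100 * 2.0 * 2.0 = 400.0 SU
```

The same 100 hours on the shared queue would cost 100 × 2.0 × 0.5 = 100 SU, a 4x difference driven entirely by queue choice.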
## Institutional underwriting
The SU ledger is shared, but how those SUs are funded can vary by institution. Cornell is one example:

- Cornell subsidizes SU usage through the Provost’s office until November 2026.
- After that date, 1 SU carries an internal chargeback rate of $0.50.
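For Cornell researchers budgeting beyond November 2026, converting SUs to dollars is a single multiplication. A sketch using the chargeback rate above (the function name is illustrative):

```python
CHARGEBACK_PER_SU = 0.50  # USD per SU, Cornell internal rate after November 2026

def chargeback_dollars(su: float) -> float:
    """Internal dollar cost of a given SU total under the post-subsidy rate."""
    return su * CHARGEBACK_PER_SU

# A 400 SU job (e.g. 100 priority-queue B200 hours):
print(f"${chargeback_dollars(400):.2f}")  # $200.00
```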
## Why this matters to researchers

If you are planning work on Empire AI, the SU framework shapes three practical decisions:

- which hardware tier you request
- how much queue priority you actually need
- how long you retain large datasets
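The first two decisions interact multiplicatively. A small illustration of how the same amount of GPU work costs very different SU totals depending on tier and queue (rates and multipliers from the tables above; scenario labels are made up):

```python
# Same 50 GPU-hours of work under different tier/queue choices:
# hours * base SU rate * queue multiplier
scenarios = {
    "H100, shared queue":   50 * 1.0 * 0.5,  # 25.0 SU
    "H100, priority queue": 50 * 1.0 * 2.0,  # 100.0 SU
    "B200, priority queue": 50 * 2.0 * 2.0,  # 200.0 SU
}
for name, su in scenarios.items():
    print(f"{name}: {su} SU")
```

The cheapest and most expensive choices here differ by 8x for identical wall-clock usage, which is why the queue multiplier deserves as much scrutiny as the hardware request.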