Prerequisites:
- You have a project that needs consortium-scale AI or HPC resources.
- A PI is prepared to sponsor the request.
- You know whether the workload is GPU-heavy, CPU-heavy, or storage-intensive.
Submit a Work Order Request
Start with the institutional Work Order Request (WOR) process. The PI should describe the project, explain its public-interest or research value, justify the hardware needs, and state the expected compute and storage footprint.
Complete institutional review
Empire AI access is routed through consortium governance, not self-service signup. Your home institution typically reviews the request first, then forwards approved work into the shared Empire AI process.
Create or activate your CCR ColdFront access
Once the request is approved, account creation and allocation management flow through the University at Buffalo Center for Computational Research (CCR) ColdFront portal. This is where approved users receive the allocation context needed to begin using the environment.
Move data into the environment
Use high-volume data transfer tools for large datasets. Globus is the preferred bulk-transfer path, including the Empire AI Alpha endpoint. Smaller transfers can use SFTP through tools such as FileZilla or CyberDuck.
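The two transfer paths can be sketched from the command line as follows. This is a hedged example: the endpoint UUIDs, hostname, username, and paths are all placeholders you must replace with values from your allocation, and it assumes the Globus CLI (`globus-cli`) is installed and you have already run `globus login`.

```shell
# Locate the Empire AI Alpha endpoint (search term shown; UUIDs below are placeholders)
globus endpoint search "Empire AI Alpha"

# Bulk transfer of a dataset directory between two endpoints
globus transfer --recursive \
  "SRC_ENDPOINT_UUID:/projects/mydata" \
  "DST_ENDPOINT_UUID:/scratch/myuser/mydata" \
  --label "dataset upload"

# Smaller ad hoc upload over SFTP (hostname and paths are placeholders)
sftp myuser@login.example.edu <<'EOF'
put -r ./small_dataset /scratch/myuser/
EOF
```

Globus transfers are asynchronous and retried automatically, which is why they are preferred for large or repeated moves; SFTP is fine for one-off uploads small enough to babysit.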
Choose the right Slurm partition
Submit GPU-centric AI jobs to the suny partition. Submit CPU-bound data processing jobs to the cpu partition. Keep requests tightly scoped to the actual workload so that queue usage and SU consumption stay reasonable.
Run and monitor jobs
Use standard Slurm tooling to submit and inspect work. If you need a refresher on script shape, queue monitoring, or job cancellation, see Job submission, which covers the local HPCC Slurm workflow in detail.
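For reference, the standard Slurm workflow looks like this. The script name and job ID are placeholders; the commands themselves are stock Slurm tooling.

```shell
sbatch train.slurm        # submit the batch script; prints "Submitted batch job <id>"
squeue -u "$USER"         # list your queued and running jobs
scontrol show job 123456  # detailed state of one job (ID is a placeholder)
scancel 123456            # cancel the job
sacct -j 123456 --format=JobID,State,Elapsed,MaxRSS   # accounting after it finishes
```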
Minimal job skeleton
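A minimal sketch of a GPU batch script for the suny partition. The account name, module name, and entry-point script are hypothetical; substitute the values from your ColdFront allocation.

```shell
#!/bin/bash
#SBATCH --partition=suny        # GPU partition for AI workloads
#SBATCH --account=myproject     # hypothetical allocation name
#SBATCH --gres=gpu:1            # request one GPU
#SBATCH --cpus-per-task=4
#SBATCH --mem=32G
#SBATCH --time=02:00:00        # keep the request tightly scoped
#SBATCH --job-name=train-model
#SBATCH --output=%x-%j.out

module load cuda                # hypothetical module name
srun python train.py            # hypothetical entry point
```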
For CPU-only workloads, switch to the cpu partition and remove the GPU request.
Data transfer notes
- Use Globus for large datasets and repeated transfers.
- Use SFTP for smaller ad hoc uploads.
- Plan storage consumption up front, because persistent storage accrues SU cost.
- If a dataset exceeds 100 TB, expect an additional administrative review path.
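Because persistent storage accrues SU cost, it helps to estimate consumption before sizing a request. A minimal sketch, assuming a purely hypothetical rate of SUs per TB-month; check the actual rate published for your allocation.

```python
def storage_su_cost(tb_stored: float, months: float, su_per_tb_month: float) -> float:
    """Estimate SU consumption for persistent storage.

    su_per_tb_month is a hypothetical rate; substitute the
    rate that applies to your allocation.
    """
    return tb_stored * months * su_per_tb_month

# Example: 50 TB held for 6 months at a hypothetical 10 SU/TB-month
print(storage_su_cost(50, 6, 10))  # → 3000
```

Running the same estimate for a dataset over 100 TB is also a reminder that such requests trigger the additional administrative review path noted above.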