HPCC storage is split across a small, durable home filesystem and a large, ephemeral scratch filesystem. Use them for different things.
Where things go
| Path | Purpose | Quota (default) | Backed up? | Persistence |
|---|---|---|---|---|
| `/global/u/<username>` | Home. Source code, scripts, notes, results worth keeping. | 50 GB / 10,000 files | Yes (tape) | Long-term |
| `/scratch/<username>` | Scratch. Working area for running jobs. | Large, shared | No | Temporary |
| `/cunyZone/home/<project>` | Project space for group work. | Allocated per project | Depends on project | Allocated per project |
Checking your usage
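The exact usage command varies by site; as a generic sketch, assuming the paths from the table above and standard Linux `quota`, `du`, and `df` tools (the HPCC may provide its own wrapper):

```bash
# Generic checks only; the HPCC may document a site-specific usage command.
quota -s                        # per-user quota report on home, if user quotas are enabled
du -sh /global/u/$USER          # how much of the 50 GB home quota you are using
find /global/u/$USER | wc -l    # rough count against the 10,000-file limit
df -h /scratch                  # overall free space on the shared scratch filesystem
```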
Transferring files in and out
Three mechanisms are supported:

- Globus: Preferred for large transfers. Auto-tuning, parallel streams, fault recovery.
- SFTP / SCP: Quick and familiar. Best for small-to-medium files.
- iRODS: For projects that are already on an iRODS grid.
Globus (recommended for large data)
Typical throughput is 100–400 Mbps per transfer.

- Create a free Globus account at globus.org.
- Add the CUNY HPCC endpoint `cunyhpc#cea` as the source or destination.
- Use the other endpoint (your laptop via Globus Connect Personal, or XSEDE/ACCESS, etc.) as the matching end of the transfer.
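The same transfer can also be driven from the command line with the Globus CLI. This is a sketch, not an HPCC-documented procedure; the endpoint IDs are placeholders.

```bash
# Sketch using the Globus CLI (pip install globus-cli); endpoint IDs are placeholders.
globus login                               # authenticate via your browser
globus endpoint search "cunyhpc"           # look up the HPCC endpoint ID
SRC="<your-laptop-endpoint-id>"            # e.g. a Globus Connect Personal endpoint
DST="<cuny-hpcc-endpoint-id>"
globus transfer "$SRC:/data/run1" "$DST:/scratch/<username>/run1" \
  --recursive --label "run1 upload"
```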
SFTP / SCP
Transfer directly to `cea.csi.cuny.edu`, the HPCC data transfer node.
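A minimal example, assuming standard OpenSSH clients; the file name is illustrative.

```bash
# Copy a local file into your scratch space via the data transfer node
scp results.tar.gz <username>@cea.csi.cuny.edu:/scratch/<username>/

# Or open an interactive SFTP session
sftp <username>@cea.csi.cuny.edu
```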
Each HPCC server is also reachable under its own name (not just through `cea`), so you can drop files onto a specific server's scratch.
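For example (the host name below is a placeholder for an actual server name):

```bash
# <servername> stands in for a specific HPCC server
scp bigdata.dat <username>@<servername>:/scratch/<username>/
```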
iRODS
iRODS is supported for projects already using an iRODS grid. Contact the HPC Helpline to be bootstrapped onto the correct zone.

Backup and retention
- Home is backed up to tape. Restoration is possible; contact the helpline if you need it.
- Scratch is not backed up. Treat it as working space that may vanish. Every job script should end by copying anything worth keeping back to home, or to an archive tier such as project space (see the sketch after this list).
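A minimal sketch of such a job-script epilogue, assuming Slurm and the paths from the table above; the output directory names are illustrative.

```bash
#!/bin/bash
#SBATCH --job-name=example
# ... resource requests and module loads ...

# Work in a per-job directory on scratch
WORKDIR=/scratch/$USER/$SLURM_JOB_ID
mkdir -p "$WORKDIR"
cd "$WORKDIR"

# ... run the actual computation here ...

# Epilogue: copy anything worth keeping back to home before scratch is cleaned
# (output/ and logs/ are placeholder directory names)
KEEP=/global/u/$USER/results/$SLURM_JOB_ID
mkdir -p "$KEEP"
cp -r output/ logs/ "$KEEP"/
```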
Data handling
If you work with regulated data (HIPAA, FERPA, IRB-protected human subjects data, export-controlled, etc.), don't place it on the cluster without first confirming with HPCC staff and your IRB/compliance officer. Not every partition is configured for sensitive workloads.

Next steps
- Job submission: Templates that already `cd $SLURM_SUBMIT_DIR` under /scratch.
- Policies: Rules around account sharing, login-node activity, and acceptable use.