

This walkthrough takes you from a fresh HPCC account to a running job in about ten minutes. You’ll log in, load a module, submit a minimal SLURM job, and read its output.
Prerequisites: An approved HPCC account. If you don’t have one, start with Accounts & access.

Step 1: Log in

From your laptop, SSH to the Chizen gateway, then on to the cluster you want to use (Arrow is the default for most users):
ssh <your_username>@chizen.csi.cuny.edu
ssh <your_username>@arrow
If you’re off-campus and connections fail, confirm you’re reaching an externally accessible gateway (Chizen or Karle only). All other systems require you to go through one of those first.
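If you log in often, a ProxyJump entry in your SSH client configuration can collapse the two hops into one command. A minimal sketch, assuming the host aliases below and the internal hostname `arrow` (adjust to match your account):

```
# ~/.ssh/config — hypothetical aliases; replace <your_username> with your login
Host chizen
    HostName chizen.csi.cuny.edu
    User <your_username>

Host arrow
    HostName arrow
    User <your_username>
    ProxyJump chizen    # tunnel through the externally reachable gateway
```

With this in place, `ssh arrow` from your laptop performs both hops in one step.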

Step 2: Check your storage

Home and scratch are separate. Home is small and durable; scratch is larger and temporary. Check both:
df -h ~                    # your home directory (/global/u/<username>)
df -h /scratch/$USER       # your scratch workspace
The default home quota is 50 GB / 10,000 files. Jobs must run out of scratch; don’t launch from your home directory. See Storage & quotas for full details.
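To see how close you are to the 50 GB / 10,000-file home quota, you can total usage and count files with standard tools (a sketch; the quota numbers come from this page, not from the commands themselves):

```shell
# Total size of everything under your home directory (quota: 50 GB)
du -sh "$HOME"

# Number of files under your home directory (quota: 10,000)
find "$HOME" -type f | wc -l
```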

Step 3: Find a module

The HPCC uses the Lmod environment module system on Arrow. Browse what’s available:
module avail            # list all available modules
module spider python    # search for Python variants and their dependencies
module list             # show what's currently loaded
Load the module you want:
module load Python/3.10.4       # exact version string may differ; check module avail
See Software & modules for the full workflow.
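After loading a module, it is worth confirming you got the interpreter you expected. A sketch that runs only on the cluster (the exact module name is an assumption; check `module avail` first):

```
module load Python/3.10.4   # exact version string may differ on your cluster
which python3               # should point into the module's install tree
python3 --version           # confirm the version matches what you loaded
```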

Step 4: Write your first SLURM script

Move to your scratch directory and create hello.sh:
cd /scratch/$USER
mkdir -p hello-slurm && cd hello-slurm
#!/bin/bash
#SBATCH --job-name=hello
#SBATCH --output=hello.out
#SBATCH --error=hello.err
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=1G
#SBATCH --time=00:01:00

cd "$SLURM_SUBMIT_DIR"    # SLURM sets this to the directory you ran sbatch from
echo "Hello from $(hostname) - job $SLURM_JOB_ID"
sleep 5
echo "Done."
The minimal template above omits --qos and --partition, which real HPCC jobs typically require (for example --qos=qoschem --partition=partchem). Ask your PI or the HPC Helpline which QOS / partition applies to your project before submitting production work.
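As a sketch, a production header might add those two directives like this (the qoschem/partchem values are just the example from this page, not defaults; substitute the ones assigned to your project):

```
#SBATCH --job-name=myjob
#SBATCH --qos=qoschem          # replace with your project's QOS
#SBATCH --partition=partchem   # replace with your project's partition
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=01:00:00
```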

Step 5: Submit and monitor

sbatch hello.sh          # submit; prints the job ID
squeue -u $USER          # see your jobs in the queue
scancel <jobid>          # cancel if needed
Once the job completes, the hello.out file in your working directory will contain the output.
cat hello.out
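Once the job has finished and left the queue, SLURM’s accounting tool can show its final state and runtime. This only works on the cluster; `<jobid>` is the ID that sbatch printed:

```
sacct -j <jobid> --format=JobID,JobName,State,Elapsed,ExitCode
```

A State of COMPLETED with ExitCode 0:0 means the job ran cleanly; FAILED or TIMEOUT points you to hello.err and your --time limit.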

What next

More SLURM examples

OpenMP, MPI, hybrid, GPU, and array templates.

Software & modules

Load compilers, MPI libraries, Python, Julia, and more.

Storage & transfers

Move data in and out with SFTP, Globus, or iRODS.

Policies

Rules every HPCC user needs to follow.