Clusters Ada and Lovelace - Instituto de Física UFRGS

The clusters are located at Instituto de Física da UFRGS, in Porto Alegre.

Management Committee

Users Committee

Infrastructure

Management Software

Slurm Workload Manager (https://slurm.schedmd.com/)

Number of jobs per user controlled on demand.

Number of users on 1/24/2023: 150

Account request: mail to fisica-ti@ufrgs.br

Hardware in ada nodes

CPU: 16 nodes x86_64
RAM: varies between 8 GB and 16 GB
GPU: 3 nodes with NVIDIA CUDA
Storage: Asustor 12 TB
Inter-node connection: Gigabit

Hardware in lovelace nodes

CPU: Ryzen (32 and 2×24 cores) + AMD 16 cores
RAM: 64 GB each
GPU: three nodes with NVIDIA CUDA
Storage: Dell 12 TB
Inter-node connection: Gigabit

Installed Software

OS: Debian 8 (in ada)
OS: Debian 12 (in lovelace)
Basic packages installed:
gcc
gfortran
python: torch, numba
julia
conda
compucell3d
espresso
gromacs
lammps
mesa
openmpi
povray
quantum-espresso
vasp
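
After logging in, you can quickly check that a tool you need is actually available before writing a job script; a minimal sketch, assuming the packages are on the default PATH and in the default Python environment:

gcc --version
gfortran --version
python3 -c "import torch, numba"
which mpirun julia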

How to use

Connect to cluster-slurm

The clusters are accessible using the UFRGS virtual private network (VPN) through the server cluster-slurm.if.ufrgs.br. To access from a Unix-like system use:

ssh <user>@lovelace.if.ufrgs.br

Under Windows you may use WinSCP.
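
Recent Windows versions also ship an OpenSSH client, so the same command should work from PowerShell; a sketch, assuming the UFRGS VPN is already connected:

ssh <user>@cluster-slurm.if.ufrgs.br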

If you are not registered, request registration by sending an email to fisica-ti@ufrgs.br.

Using software in the cluster

To execute a program in a cluster job, the program must:

1. Be already installed

OR

2. Be copied to the user's home directory

Ex:

scp my_program <user>@cluster-slurm.if.ufrgs.br:~/
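
The same mechanism works in the opposite direction, for example to copy results back to your machine after a job finishes (results.dat is a hypothetical file name):

scp <user>@cluster-slurm.if.ufrgs.br:~/results.dat .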

If you are compiling your program in the cluster, one option is to use gcc.

Ex:

scp -r source-code/ <user>@cluster-slurm.if.ufrgs.br:~/
ssh <user>@cluster-slurm.if.ufrgs.br
cd source-code
gcc main.c funcoes.c

This will generate the file a.out, which is the executable.
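
If you prefer a named executable and basic optimization, gcc accepts the usual flags; a sketch using the same source files:

gcc -O2 -o my_program main.c funcoes.c

The resulting my_program should then be launched through a job, as described below.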

Once accessible by method 1 or 2, the program can be executed in the cluster through a JOB.

OBS: If you run your executable without submitting it as a JOB, it will execute on the server, not on the nodes. This is not recommended, since the server's computational capabilities are limited and you will slow down the server for everyone else.
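
For quick interactive tests on a compute node (instead of the server), Slurm's srun can allocate an interactive shell. This is only a sketch; interactive jobs are not described elsewhere on this page, so confirm with the administrators that they are allowed, and use the partition/QOS names listed below:

srun -n 1 -t 0-00:30 -p short --qos qos_short --pty bash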

Creating and executing a Job

Slurm manages jobs and each job represents a program or task being executed.

To submit a new job, you must create a script file describing the requirements and characteristics of the Job.

A typical example of the content of a submission script is below.

Ex: job.sh

#!/bin/bash 
#SBATCH -n 1 # Number of CPUs to be allocated (despite the #, these SBATCH lines are read by the Slurm manager!)
#SBATCH -N 1 # Number of nodes to be allocated (you don't have to use all directives; comment out with ##)
#SBATCH -t 0-00:05 # Limit on execution time (D-HH:MM)
#SBATCH -p long # Partition to submit to
#SBATCH --qos qos_long # QOS 
  
# Your program execution commands
./a.out

In option --qos, use the partition name with "qos_" prefix:

partition: short -> qos: qos_short -> limit 2 weeks

partition: long -> qos: qos_long -> limit of 3 months
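
For example, a job meant for the short partition would pair the two options like this (a sketch following the naming convention above):

#SBATCH -p short
#SBATCH --qos qos_short
#SBATCH -t 7-00:00 # must stay within the 2-week limit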

If you run on a GPU, request the generic resource gpu (cluster ada):

#!/bin/bash 
#SBATCH -n 1 
#SBATCH -N 1
#SBATCH -t 0-00:05 
#SBATCH -p long 
#SBATCH --qos qos_long # QOS 
#SBATCH --gres=gpu:1
  
# Your program execution commands:
./a.out

To ask for a specific GPU:

#SBATCH --constraint="gtx970"
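
Inside a GPU job it can be useful to confirm which device was actually allocated. nvidia-smi is the standard NVIDIA tool and is assumed to be present on the GPU nodes (this page does not list it explicitly):

# add to the job script before your program
nvidia-smi
echo $CUDA_VISIBLE_DEVICES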

To submit the job, execute:

sbatch job.sh
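
sbatch prints the ID of the new job, and by default Slurm writes the job's standard output to slurm-[job_id].out in the submission directory (this can be changed with #SBATCH -o). A typical follow-up, assuming the defaults:

squeue -u $USER # check that the job is queued or running
cat slurm-[job_id].out # inspect the output once the job starts producing it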

Useful commands

  • To list jobs:
 squeue
  • To list all jobs running in the cluster now:
 sudo squeue
  • To delete a running job:
 scancel [job_id]
  • To list available partitions:
 sinfo
  • To list GPUs in the nodes:
 sinfo -o "%N %f"
  • To list the characteristics of all nodes:
 sinfo -Nel
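
Two more standard Slurm commands that complement the list above (not specific to this cluster):

  • To list only your own jobs:
 squeue -u $USER
  • To show the full details of a job:
 scontrol show job [job_id]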