= Cluster Lovelace - Instituto de Física UFRGS =

The cluster is located at Instituto de Física da UFRGS, in Porto Alegre.

== Management Committee ==
<pre>
 
The cluster is managed by professors representing the fields of Astronomy, Theoretical Physics, and Experimental Physics, in addition to an IT department employee from the Physics Institute.
 
Astronomy: Rogério Riffel
 
Theoretical Physics: Leonardo Brunnet
 
Experimental Physics: Pedro Grande
 
IT employee: Gustavo Feller
 
</pre>
 
== Users Committee ==
 
<pre>
 
Users have two channels for communication/discussion:
 
1) The fis-linux-if@grupos.ufrgs.br mailing list
 
2) Direct messages to the IT department via the email fisica-ti@ufrgs.br.
 
</pre>
 
== Infrastructure ==
 
=== Management Software ===
 
The queue system and the scheduling of tasks are controlled by the [https://slurm.schedmd.com/ Slurm Workload Manager].
 
<pre>
 
Number of jobs per user: controlled on demand.
 
Number of users on 1/24/2023: 150
 
Account requests: email fisica-ti@ufrgs.br
</pre>
 
=== Hardware in the Lovelace nodes ===
 
<pre>
CPU: AMD Ryzen (32 and 2×24 cores) + AMD 16 cores
RAM: 64 GB each
GPU: three nodes with CUDA-capable NVIDIA GPUs
Storage: Dell storage, 12 TB
Inter-node connection: Gigabit Ethernet
</pre>
 
=== Installed Software ===
 
<pre>
OS: Debian 12
Basic packages installed:
gcc
gfortran
python: torch, numba
julia
conda
compucel3d
espresso
gromacs
lammps
mesa
openmpi
povray
quantum-espresso
vasp
</pre>
 
== Rules for scheduling, access control, and usage of the research infrastructure ==
 
=== Online scheduling ===
 
The cluster is accessible through the UFRGS virtual private network ([https://www1.ufrgs.br/CatalogoServicos/servicos/servico?servico=3178 vpn]) via the server lovelace.if.ufrgs.br.

To access it from a Unix-like system, use:
<pre>
ssh <user>@lovelace.if.ufrgs.br
</pre>
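If you log in often, an entry in the <code>~/.ssh/config</code> file on your own machine saves typing; a minimal sketch, where the alias <code>lovelace</code> is just an example name:
<pre>
# ~/.ssh/config on your local machine; "lovelace" is an arbitrary alias
Host lovelace
    HostName lovelace.if.ufrgs.br
    User <user>
</pre>
After this, <code>ssh lovelace</code> is enough. Remember that the VPN still has to be active.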
 
Under Windows, you may configure WinSCP to connect to the address lovelace.if.ufrgs.br.
 
If you are not registered, request an account by sending an email to fisica-ti@ufrgs.br.
 
=== Using software in the cluster ===
 
To be executed in a cluster job, a program must either:

1. Be already installed,

OR

2. Be copied to the user's home directory.
 
Ex:
<pre>
scp my_program <user>@cluster-slurm.if.ufrgs.br:~/
</pre>
 
If you are compiling your program on the cluster, one option is to use <code>gcc</code>.
 
Ex:
<pre>
scp -r source-code/ <user>@cluster-slurm.if.ufrgs.br:~/
ssh <user>@cluster-slurm.if.ufrgs.br
cd source-code
gcc main.c funcoes.c
</pre>
This will generate the file <code>a.out</code>, which is the executable.
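If you prefer a named executable instead of <code>a.out</code>, <code>gcc</code>'s standard <code>-o</code> flag sets the output name (<code>myprog</code> below is just an example):
<pre>
gcc -o myprog main.c funcoes.c   # write the executable to myprog instead of a.out
</pre>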
 
Once accessible by method 1 or 2, the program can be executed in the cluster through a <strong>JOB</strong>.
 
Note: if you run your executable directly, without submitting it as a <strong>JOB</strong>, it will run on the server, not on the nodes. This is not recommended, since the server's computational capabilities are limited and you will slow it down for everyone else.
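If you need a short interactive test that still runs on a compute node, Slurm's <code>srun</code> command launches a program through the scheduler; a minimal sketch, assuming the <code>long</code> partition and its QOS described below:
<pre>
srun -p long --qos qos_long -n 1 -t 0-00:05 ./a.out   # runs on a node, not on the login server
</pre>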
 
=== Creating and executing a Job ===
 
Slurm manages jobs; each job represents a program or task being executed.
 
To submit a new job, you must create a script file describing the requirements and characteristics of the Job.
 
A typical example of the content of a submission script is shown below.
 
Ex: <code>job.sh</code>
 
<pre>
#!/bin/bash
#SBATCH -n 1            # Number of CPUs to be allocated (despite the leading #, these SBATCH lines are read by Slurm!)
#SBATCH -N 1            # Number of nodes to be allocated (you don't have to use every option; disable a line with ##)
#SBATCH -t 0-00:05      # Limit on execution time (D-HH:MM)
#SBATCH -p long         # Partition to submit to
#SBATCH --qos qos_long  # QOS

# Your program execution commands
./a.out
</pre>


Each partition has an associated QOS with the same name prefixed by "qos_"; in the --qos option, use the partition name with the "qos_" prefix:

partition: short -> qos: qos_short -> limit: 2 weeks

partition: long -> qos: qos_long -> limit: 3 months
If you run on a GPU, specify the ''generic resource'' gpu in cluster ada:

<pre>
#!/bin/bash
#SBATCH -n 1            # Number of cores
#SBATCH -N 1            # Number of nodes
#SBATCH -t 0-00:05      # Runtime in D-HH:MM
#SBATCH -p long         # Partition to submit to
#SBATCH --qos qos_long  # QOS
#SBATCH --gres=gpu:1    # Request one generic resource of type gpu

# Your program execution commands:
./a.out
</pre>
 
To request a specific GPU, add a constraint line:
<pre>
#SBATCH --constraint="gtx970"
</pre>
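Constraint strings correspond to node ''features''; to see which features each node advertises, the same <code>sinfo</code> call listed in the commands section below works here:
<pre>
sinfo -o "%N %f"   # node names and their feature (constraint) strings
</pre>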


To submit the job, execute:


<pre>
sbatch job.sh
</pre>
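By default, Slurm writes the job's screen output to a file named <code>slurm-<jobid>.out</code> in the submission directory. Two standard options rename the job and its output file (<code>mysim</code> and <code>myjob.out</code> are example names):
<pre>
#SBATCH -J mysim      # job name shown by squeue
#SBATCH -o myjob.out  # write stdout here instead of slurm-<jobid>.out
</pre>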


== Useful commands ==
* To list jobs:
   squeue

* To list all jobs running in the cluster now:
   sudo squeue

* To delete a running job:
   scancel [job_id]

* To list available partitions:
   sinfo

* To list the GPUs present in the nodes:
   sinfo -o "%N %f"

* To list the characteristics of all nodes:
   sinfo -Nel
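A few further standard Slurm queries can also help; <code>sacct</code> assumes job accounting is enabled on the cluster, which we have not confirmed here:
<pre>
squeue -u $USER      # show only your own jobs
sinfo -o "%P %l"     # each partition and its configured time limit
sacct -j [job_id]    # accounting info for a past job, if accounting is enabled
</pre>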
