<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="pt-BR">
	<id>https://wiki.if.ufrgs.br/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Yescalianti</id>
	<title>Instituto de Física - UFRGS - User contributions [pt-br]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.if.ufrgs.br/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Yescalianti"/>
	<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php/Especial:Contribui%C3%A7%C3%B5es/Yescalianti"/>
	<updated>2026-04-04T14:34:24Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.4</generator>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1864</id>
		<title>Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1864"/>
		<updated>2018-01-30T17:52:12Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Cluster Ada - Instituto de Física UFRGS =&lt;br /&gt;
&lt;br /&gt;
The cluster is located at the Instituto de Física of UFRGS, in Porto Alegre.&lt;br /&gt;
&lt;br /&gt;
== Infrastructure ==&lt;br /&gt;
&lt;br /&gt;
=== Management software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Slurm Workload Manager&lt;br /&gt;
&lt;br /&gt;
Site: https://slurm.schedmd.com/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node hardware ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CPU: x86_64&lt;br /&gt;
RAM: 4 GB to 8 GB, depending on the node&lt;br /&gt;
GPU: some nodes have CUDA-capable NVIDIA GPUs&lt;br /&gt;
Storage: network storage with a 50 GB quota per user; the nodes have no local disk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
OS: Debian 8 (Jessie) x86_64&lt;br /&gt;
Installed packages:&lt;br /&gt;
 gcc&lt;br /&gt;
 python2&lt;br /&gt;
 python3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to use ==&lt;br /&gt;
&lt;br /&gt;
=== Connecting to cluster-slurm ===&lt;br /&gt;
&lt;br /&gt;
The cluster is reached through the cluster-slurm server. To access the server via SSH, use:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh usuario@cluster-slurm.if.ufrgs.br&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you do not have an account, or are not affiliated with the Instituto de Física, request an account by emailing fisica-ti@ufrgs.br.&lt;br /&gt;
&lt;br /&gt;
=== Using software on the cluster ===&lt;br /&gt;
&lt;br /&gt;
To run a program as a job on the cluster, the program must:&lt;br /&gt;
&lt;br /&gt;
1. Already be installed&lt;br /&gt;
&lt;br /&gt;
OR&lt;br /&gt;
&lt;br /&gt;
2. Be copied to your home directory&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp meu_executavel usuario@cluster-slurm.if.ufrgs.br:~/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you want to compile the program for use on the cluster, one option is &amp;lt;code&amp;gt;gcc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp -r source-code/ usuario@cluster-slurm.if.ufrgs.br:~/&lt;br /&gt;
ssh usuario@cluster-slurm.if.ufrgs.br&lt;br /&gt;
cd source-code&lt;br /&gt;
gcc main.c funcoes.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This generates a file named &amp;lt;code&amp;gt;a.out&amp;lt;/code&amp;gt;, which is the executable.&lt;br /&gt;
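&lt;br /&gt;
If you prefer a name other than &amp;lt;code&amp;gt;a.out&amp;lt;/code&amp;gt;, pass &amp;lt;code&amp;gt;gcc&amp;lt;/code&amp;gt; the &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt; option (the name &amp;lt;code&amp;gt;meu_programa&amp;lt;/code&amp;gt; below is only an illustration):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gcc -o meu_programa main.c funcoes.c&lt;br /&gt;
./meu_programa&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;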
&lt;br /&gt;
Once available via method 1 or 2, the program can be run on the cluster through a &amp;lt;strong&amp;gt;job&amp;lt;/strong&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note: if you run the program without submitting it as a &amp;lt;strong&amp;gt;job&amp;lt;/strong&amp;gt;, it will not run on the nodes, but only on the cluster-slurm server itself, which has very limited processing capacity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Creating and running a job ===&lt;br /&gt;
&lt;br /&gt;
Slurm manages jobs; each job represents a program or task being executed.&lt;br /&gt;
&lt;br /&gt;
To submit a new job, create a script file describing the job's requirements and execution settings.&lt;br /&gt;
&lt;br /&gt;
The file format is shown below.&lt;br /&gt;
&lt;br /&gt;
Example: &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
&lt;br /&gt;
# Commands to run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the --qos option, use the partition name with the &amp;quot;qos_&amp;quot; prefix:&lt;br /&gt;
&lt;br /&gt;
partition: short -&amp;gt; qos: qos_short -&amp;gt; 2-week limit&lt;br /&gt;
&lt;br /&gt;
partition: long -&amp;gt; qos: qos_long -&amp;gt; 3-month limit&lt;br /&gt;
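&lt;br /&gt;
For example, a job intended for the short partition would use the pair below (a sketch; the remaining options stay as in the script above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH -p short&lt;br /&gt;
#SBATCH --qos qos_short&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;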
  &lt;br /&gt;
&lt;br /&gt;
To run on a GPU, you must specify the queue and explicitly request the ''generic resource'' gpu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
# Commands to run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To request a specific GPU model, add a constraint line:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH --constraint=&amp;quot;gtx970&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To submit the job, run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
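&lt;br /&gt;
On success, &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; prints the job ID, and by default the job's output goes to a file named after that ID in the submission directory (the ID 1234 below is only an illustration):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch job.sh&lt;br /&gt;
Submitted batch job 1234&lt;br /&gt;
cat slurm-1234.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;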
&lt;br /&gt;
== Useful commands ==&lt;br /&gt;
* To list your jobs:&lt;br /&gt;
  squeue&lt;br /&gt;
&lt;br /&gt;
* To cancel a job:&lt;br /&gt;
  scancel [job_id]&lt;br /&gt;
&lt;br /&gt;
* To list the available partitions:&lt;br /&gt;
  sinfo&lt;br /&gt;
&lt;br /&gt;
* To list the GPUs present on the nodes:&lt;br /&gt;
  sinfo -o &amp;quot;%N %f&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* To list a summary of all nodes:&lt;br /&gt;
  sinfo -Nel&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Como_restringir_login_no_Debian&amp;diff=1853</id>
		<title>Como restringir login no Debian</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Como_restringir_login_no_Debian&amp;diff=1853"/>
		<updated>2018-01-15T20:08:07Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In &amp;lt;code&amp;gt;/etc/security/access.conf&amp;lt;/code&amp;gt;, add:&lt;br /&gt;
    + : root : ALL&lt;br /&gt;
    + : (mygroup) : ALL&lt;br /&gt;
    - : ALL : ALL&lt;br /&gt;
&lt;br /&gt;
* Replace &amp;lt;code&amp;gt;mygroup&amp;lt;/code&amp;gt; with the group that should be allowed to log in. This permits login for the &amp;lt;code&amp;gt;root&amp;lt;/code&amp;gt; user and for all users in the &amp;lt;code&amp;gt;mygroup&amp;lt;/code&amp;gt; group, and blocks all other access (see &amp;lt;code&amp;gt;man access.conf&amp;lt;/code&amp;gt; for more configuration options).&lt;br /&gt;
&lt;br /&gt;
PAM must be told to check &amp;lt;code&amp;gt;access.conf&amp;lt;/code&amp;gt; at login. /etc/pam.d/ contains one PAM file for each system service that uses authentication (ssh, telnet, cups, graphical login, etc.). In each service whose login you want to restrict, add the line&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
    account required pam_access.so&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, to restrict both graphical login and SSH to the allowed group, this line must be added to the lightdm (or gdm) file, the login file, and the sshd file.&lt;br /&gt;
&lt;br /&gt;
'''Note:''' the common-* files are included by all the other configuration files. It is not a good idea to apply the restriction there because, for example, the gdm (or lightdm) user must be able to log in for the graphical interface to start. Adding the restriction to common-* would block that user's login, since it is not in the allowed group, and the graphical interface would therefore not start. The right approach is to add the restriction to each individual service.&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1758</id>
		<title>Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1758"/>
		<updated>2017-05-09T22:12:49Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Cluster Ada - Instituto de Física UFRGS =&lt;br /&gt;
&lt;br /&gt;
The cluster is located at the Instituto de Física of UFRGS, in Porto Alegre.&lt;br /&gt;
&lt;br /&gt;
== What is a cluster? ==&lt;br /&gt;
&lt;br /&gt;
A cluster is a group of computers that work together, carrying out tasks and balancing the processing load among themselves.&lt;br /&gt;
&lt;br /&gt;
There are many kinds of clusters, some with low processing power and others with high processing power (large scientific research institutions, such as CERN, make heavy use of the latter).&lt;br /&gt;
&lt;br /&gt;
Certain tasks (simulating molecules or ecosystems, analyzing astronomical data, etc.) demand a great deal of processing power, and most of the time an ordinary PC is far from able to complete them quickly. In such cases a cluster is needed: it offers higher-performance processing, along with the ability to add more nodes and balance the load dynamically.&lt;br /&gt;
&lt;br /&gt;
== Infrastructure ==&lt;br /&gt;
&lt;br /&gt;
=== Management software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Slurm Workload Manager&lt;br /&gt;
&lt;br /&gt;
Site: https://slurm.schedmd.com/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node hardware ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CPU: x86_64&lt;br /&gt;
RAM: 4 GB to 8 GB, depending on the node&lt;br /&gt;
GPU: some nodes have CUDA-capable NVIDIA GPUs&lt;br /&gt;
Storage: network storage with a 50 GB quota per user; the nodes have no local disk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
OS: Debian 8 (Jessie) x86_64&lt;br /&gt;
Installed packages:&lt;br /&gt;
 gcc&lt;br /&gt;
 python2&lt;br /&gt;
 python3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to use ==&lt;br /&gt;
&lt;br /&gt;
=== Connecting to cluster-slurm ===&lt;br /&gt;
&lt;br /&gt;
The cluster is reached through the cluster-slurm server. To access the server via SSH, use:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh usuario@cluster-slurm.if.ufrgs.br&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you do not have an account, or are not affiliated with the Instituto de Física, request an account by emailing fisica-ti@ufrgs.br.&lt;br /&gt;
&lt;br /&gt;
=== Using software on the cluster ===&lt;br /&gt;
&lt;br /&gt;
To run a program as a job on the cluster, the program must:&lt;br /&gt;
&lt;br /&gt;
1. Already be installed&lt;br /&gt;
&lt;br /&gt;
OR&lt;br /&gt;
&lt;br /&gt;
2. Be copied to your home directory&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp meu_executavel usuario@cluster-slurm.if.ufrgs.br:~/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you want to compile the program for use on the cluster, one option is &amp;lt;code&amp;gt;gcc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp -r source-code/ usuario@cluster-slurm.if.ufrgs.br:~/&lt;br /&gt;
ssh usuario@cluster-slurm.if.ufrgs.br&lt;br /&gt;
cd source-code&lt;br /&gt;
gcc main.c funcoes.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This generates a file named &amp;lt;code&amp;gt;a.out&amp;lt;/code&amp;gt;, which is the executable.&lt;br /&gt;
&lt;br /&gt;
Once available via method 1 or 2, the program can be run on the cluster through a &amp;lt;strong&amp;gt;job&amp;lt;/strong&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note: if you run the program without submitting it as a &amp;lt;strong&amp;gt;job&amp;lt;/strong&amp;gt;, it will not run on the nodes, but only on the cluster-slurm server itself, which has very limited processing capacity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Creating and running a job ===&lt;br /&gt;
&lt;br /&gt;
Slurm manages jobs; each job represents a program or task being executed.&lt;br /&gt;
&lt;br /&gt;
To submit a new job, create a script file describing the job's requirements and execution settings.&lt;br /&gt;
&lt;br /&gt;
The file format is shown below.&lt;br /&gt;
&lt;br /&gt;
Example: &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
&lt;br /&gt;
# Commands to run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the --qos option, use the partition name with the &amp;quot;qos_&amp;quot; prefix:&lt;br /&gt;
&lt;br /&gt;
partition: short -&amp;gt; qos: qos_short -&amp;gt; 2-week limit&lt;br /&gt;
&lt;br /&gt;
partition: long -&amp;gt; qos: qos_long -&amp;gt; 3-month limit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To run on a GPU, you must specify the queue and explicitly request the ''generic resource'' gpu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
# Commands to run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To request a specific GPU model, add a constraint line:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH --constraint=&amp;quot;gtx970&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To submit the job, run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Useful commands ==&lt;br /&gt;
* To list your jobs:&lt;br /&gt;
  squeue&lt;br /&gt;
&lt;br /&gt;
* To cancel a job:&lt;br /&gt;
  scancel [job_id]&lt;br /&gt;
&lt;br /&gt;
* To list the available partitions:&lt;br /&gt;
  sinfo&lt;br /&gt;
&lt;br /&gt;
* To list the GPUs present on the nodes:&lt;br /&gt;
  sinfo -o &amp;quot;%N %f&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* To list a summary of all nodes:&lt;br /&gt;
  sinfo -Nel&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1755</id>
		<title>Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1755"/>
		<updated>2017-04-24T15:49:48Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Cluster Ada - Instituto de Física UFRGS =&lt;br /&gt;
&lt;br /&gt;
The cluster is located at the Instituto de Física of UFRGS, in Porto Alegre.&lt;br /&gt;
&lt;br /&gt;
== What is a cluster? ==&lt;br /&gt;
&lt;br /&gt;
A cluster is a group of computers that work together, carrying out tasks and balancing the processing load among themselves.&lt;br /&gt;
&lt;br /&gt;
There are many kinds of clusters, some with low processing power and others with high processing power (large scientific research institutions, such as CERN, make heavy use of the latter).&lt;br /&gt;
&lt;br /&gt;
Certain tasks (simulating molecules or ecosystems, analyzing astronomical data, etc.) demand a great deal of processing power, and most of the time an ordinary PC is far from able to complete them quickly. In such cases a cluster is needed: it offers higher-performance processing, along with the ability to add more nodes and balance the load dynamically.&lt;br /&gt;
&lt;br /&gt;
== Infrastructure ==&lt;br /&gt;
&lt;br /&gt;
=== Management software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Slurm Workload Manager&lt;br /&gt;
&lt;br /&gt;
Site: https://slurm.schedmd.com/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node hardware ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CPU: x86_64&lt;br /&gt;
RAM: 4 GB to 8 GB, depending on the node&lt;br /&gt;
GPU: some nodes have CUDA-capable NVIDIA GPUs&lt;br /&gt;
Storage: network storage with a 50 GB quota per user; the nodes have no local disk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
OS: Debian 8 (Jessie) x86_64&lt;br /&gt;
Installed packages:&lt;br /&gt;
 gcc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to use ==&lt;br /&gt;
&lt;br /&gt;
=== Connecting to cluster-slurm ===&lt;br /&gt;
&lt;br /&gt;
The cluster is reached through the cluster-slurm server. To access the server via SSH, use:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh usuario@cluster-slurm.if.ufrgs.br&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you do not have an account, or are not affiliated with the Instituto de Física, request an account by emailing fisica-ti@ufrgs.br.&lt;br /&gt;
&lt;br /&gt;
=== Using software on the cluster ===&lt;br /&gt;
&lt;br /&gt;
To run a program as a job on the cluster, the program must:&lt;br /&gt;
&lt;br /&gt;
1. Already be installed&lt;br /&gt;
&lt;br /&gt;
OR&lt;br /&gt;
&lt;br /&gt;
2. Be copied to your home directory&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp meu_executavel usuario@cluster-slurm.if.ufrgs.br:~/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you want to compile the program for use on the cluster, one option is &amp;lt;code&amp;gt;gcc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp -r source-code/ usuario@cluster-slurm.if.ufrgs.br:~/&lt;br /&gt;
ssh usuario@cluster-slurm.if.ufrgs.br&lt;br /&gt;
cd source-code&lt;br /&gt;
gcc main.c funcoes.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This generates a file named &amp;lt;code&amp;gt;a.out&amp;lt;/code&amp;gt;, which is the executable.&lt;br /&gt;
&lt;br /&gt;
Once available via method 1 or 2, the program can be run on the cluster through a &amp;lt;strong&amp;gt;job&amp;lt;/strong&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note: if you run the program without submitting it as a &amp;lt;strong&amp;gt;job&amp;lt;/strong&amp;gt;, it will not run on the nodes, but only on the cluster-slurm server itself, which has very limited processing capacity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Creating and running a job ===&lt;br /&gt;
&lt;br /&gt;
Slurm manages jobs; each job represents a program or task being executed.&lt;br /&gt;
&lt;br /&gt;
To submit a new job, create a script file describing the job's requirements and execution settings.&lt;br /&gt;
&lt;br /&gt;
The file format is shown below.&lt;br /&gt;
&lt;br /&gt;
Example: &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
&lt;br /&gt;
# Commands to run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the --qos option, use the partition name with the &amp;quot;qos_&amp;quot; prefix:&lt;br /&gt;
&lt;br /&gt;
partition: short -&amp;gt; qos: qos_short -&amp;gt; 2-week limit&lt;br /&gt;
&lt;br /&gt;
partition: long -&amp;gt; qos: qos_long -&amp;gt; 3-month limit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To run on a GPU, you must specify the queue and explicitly request the ''generic resource'' gpu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
# Commands to run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To request a specific GPU model, add a constraint line:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH --constraint=&amp;quot;gtx970&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To submit the job, run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Useful commands ==&lt;br /&gt;
* To list your jobs:&lt;br /&gt;
  squeue&lt;br /&gt;
&lt;br /&gt;
* To cancel a job:&lt;br /&gt;
  scancel [job_id]&lt;br /&gt;
&lt;br /&gt;
* To list the available partitions:&lt;br /&gt;
  sinfo&lt;br /&gt;
&lt;br /&gt;
* To list the GPUs present on the nodes:&lt;br /&gt;
  sinfo -o &amp;quot;%N %f&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* To list a summary of all nodes:&lt;br /&gt;
  sinfo -Nel&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1754</id>
		<title>Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1754"/>
		<updated>2017-04-24T15:47:51Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Cluster Ada - Instituto de Física UFRGS =&lt;br /&gt;
&lt;br /&gt;
The cluster is located at the Instituto de Física of UFRGS, in Porto Alegre.&lt;br /&gt;
&lt;br /&gt;
== What is a cluster? ==&lt;br /&gt;
&lt;br /&gt;
A cluster is a group of computers that work together, carrying out tasks and balancing the processing load among themselves.&lt;br /&gt;
&lt;br /&gt;
There are many kinds of clusters, some with low processing power and others with high processing power (large scientific research institutions, such as CERN, make heavy use of the latter).&lt;br /&gt;
&lt;br /&gt;
Certain tasks (simulating molecules or ecosystems, analyzing astronomical data, etc.) demand a great deal of processing power, and most of the time an ordinary PC is far from able to complete them quickly. In such cases a cluster is needed: it offers higher-performance processing, along with the ability to add more nodes and balance the load dynamically.&lt;br /&gt;
&lt;br /&gt;
== Infrastructure ==&lt;br /&gt;
&lt;br /&gt;
=== Management software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Slurm Workload Manager&lt;br /&gt;
&lt;br /&gt;
Site: https://slurm.schedmd.com/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node hardware ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CPU: x86_64&lt;br /&gt;
RAM: 4 GB to 8 GB, depending on the node&lt;br /&gt;
GPU: some nodes have CUDA-capable NVIDIA GPUs&lt;br /&gt;
Storage: network storage with a 50 GB quota per user; the nodes have no local disk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
OS: Debian 8 (Jessie) x86_64&lt;br /&gt;
Installed packages:&lt;br /&gt;
 gcc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to use ==&lt;br /&gt;
&lt;br /&gt;
=== Connecting to cluster-slurm ===&lt;br /&gt;
&lt;br /&gt;
The cluster is reached through the cluster-slurm server. To access the server via SSH, use:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh usuario@cluster-slurm.if.ufrgs.br&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you do not have an account, or are not affiliated with the Instituto de Física, request an account by emailing fisica-ti@ufrgs.br.&lt;br /&gt;
&lt;br /&gt;
=== Using software on the cluster ===&lt;br /&gt;
&lt;br /&gt;
To run a program as a job on the cluster, the program must:&lt;br /&gt;
&lt;br /&gt;
1. Already be installed&lt;br /&gt;
&lt;br /&gt;
OR&lt;br /&gt;
&lt;br /&gt;
2. Be copied to your home directory&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp meu_executavel usuario@cluster-slurm.if.ufrgs.br:~/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you want to compile the program for use on the cluster, one option is &amp;lt;code&amp;gt;gcc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp -r source-code/ usuario@cluster-slurm.if.ufrgs.br:~/&lt;br /&gt;
ssh usuario@cluster-slurm.if.ufrgs.br&lt;br /&gt;
cd source-code&lt;br /&gt;
gcc main.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This generates a file named &amp;lt;code&amp;gt;a.out&amp;lt;/code&amp;gt;, which is the executable.&lt;br /&gt;
&lt;br /&gt;
Once available via method 1 or 2, the program can be run on the cluster through a &amp;lt;strong&amp;gt;job&amp;lt;/strong&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note: if you run the program without submitting it as a &amp;lt;strong&amp;gt;job&amp;lt;/strong&amp;gt;, it will not run on the nodes, but only on the cluster-slurm server itself, which has very limited processing capacity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Creating and running a job ===&lt;br /&gt;
&lt;br /&gt;
Slurm manages jobs; each job represents a program or task being executed.&lt;br /&gt;
&lt;br /&gt;
To submit a new job, create a script file describing the job's requirements and execution settings.&lt;br /&gt;
&lt;br /&gt;
The file format is shown below.&lt;br /&gt;
&lt;br /&gt;
Example: &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
&lt;br /&gt;
# Commands to run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the --qos option, use the partition name with the &amp;quot;qos_&amp;quot; prefix:&lt;br /&gt;
&lt;br /&gt;
partition: short -&amp;gt; qos: qos_short -&amp;gt; 2-week limit&lt;br /&gt;
&lt;br /&gt;
partition: long -&amp;gt; qos: qos_long -&amp;gt; 3-month limit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To run on a GPU, you must specify the queue and explicitly request the ''generic resource'' gpu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
# Commands to run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To request a specific GPU model, add a constraint line:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH --constraint=&amp;quot;gtx970&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To submit the job, run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Useful commands ==&lt;br /&gt;
* To list your jobs:&lt;br /&gt;
  squeue&lt;br /&gt;
&lt;br /&gt;
* To cancel a job:&lt;br /&gt;
  scancel [job_id]&lt;br /&gt;
&lt;br /&gt;
* To list the available partitions:&lt;br /&gt;
  sinfo&lt;br /&gt;
&lt;br /&gt;
* To list the GPUs present on the nodes:&lt;br /&gt;
  sinfo -o &amp;quot;%N %f&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* To list a summary of all nodes:&lt;br /&gt;
  sinfo -Nel&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1753</id>
		<title>Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1753"/>
		<updated>2017-04-24T15:47:01Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Cluster Ada - Instituto de Física UFRGS =&lt;br /&gt;
&lt;br /&gt;
The cluster is located at the UFRGS Institute of Physics (Instituto de Física), in Porto Alegre.&lt;br /&gt;
&lt;br /&gt;
== What is a cluster? ==&lt;br /&gt;
&lt;br /&gt;
A cluster is a group of computers that work together, performing tasks and balancing the processing load among themselves.&lt;br /&gt;
&lt;br /&gt;
Clusters range from modest systems with low processing power to very powerful ones; large scientific research institutions (e.g. CERN) make heavy use of the latter.&lt;br /&gt;
&lt;br /&gt;
Certain tasks (molecular or ecosystem simulations, analysis of astronomical data, etc.) demand a great deal of processing power, and an ordinary PC is usually far too slow for them. In such cases a cluster is needed: it offers much higher performance, along with the ability to add more nodes and balance loads dynamically.&lt;br /&gt;
&lt;br /&gt;
== Infrastructure ==&lt;br /&gt;
&lt;br /&gt;
=== Management software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Slurm Workload Manager&lt;br /&gt;
&lt;br /&gt;
Site: https://slurm.schedmd.com/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node hardware ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CPU: x86_64&lt;br /&gt;
RAM: 4 GB to 8 GB, depending on the node&lt;br /&gt;
GPU: some nodes have NVIDIA CUDA-capable GPUs&lt;br /&gt;
Storage: networked storage with a 50 GB quota per user; the nodes have no local disk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
OS: Debian 8 (Jessie) x86_64&lt;br /&gt;
Installed packages:&lt;br /&gt;
 gcc&lt;br /&gt;
 docker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to use ==&lt;br /&gt;
&lt;br /&gt;
=== Connecting to cluster-slurm ===&lt;br /&gt;
&lt;br /&gt;
The cluster is reached through the cluster-slurm server. To access the server via SSH, use:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh usuario@cluster-slurm.if.ufrgs.br&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you do not have an account, or are not affiliated with the Institute of Physics, request one by emailing fisica-ti@ufrgs.br.&lt;br /&gt;
&lt;br /&gt;
=== Using software on the cluster ===&lt;br /&gt;
&lt;br /&gt;
To run a program as a job on the cluster, the program must either:&lt;br /&gt;
&lt;br /&gt;
1. Already be installed&lt;br /&gt;
&lt;br /&gt;
OR&lt;br /&gt;
&lt;br /&gt;
2. Be copied to your home directory&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp meu_executavel usuario@cluster-slurm.if.ufrgs.br:~/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you want to compile the program for use on the cluster, one option is &amp;lt;code&amp;gt;gcc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp -r source-code/ usuario@cluster-slurm.if.ufrgs.br:~/&lt;br /&gt;
ssh usuario@cluster-slurm.if.ufrgs.br&lt;br /&gt;
cd source-code&lt;br /&gt;
gcc main.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This produces a file &amp;lt;code&amp;gt;a.out&amp;lt;/code&amp;gt;, which is the executable.&lt;br /&gt;
&lt;br /&gt;
Once it is available via method 1 or 2, the program can be run on the cluster as a &amp;lt;strong&amp;gt;JOB&amp;lt;/strong&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note: if you run the program without submitting it as a &amp;lt;strong&amp;gt;JOB&amp;lt;/strong&amp;gt;, it will not run on the nodes; it runs only on the cluster-slurm server itself, which has very limited processing capacity.&lt;br /&gt;
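&lt;br /&gt;
For quick interactive tests there is a middle ground: &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; runs a single command on a compute node through Slurm, without writing a script. A sketch, with an assumed partition name (pick one that &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; actually lists):&lt;br /&gt;

```shell
# Run one command on a compute node instead of on the login server.
# "short" is an assumed partition name; check sinfo for the real ones.
srun -p short -n 1 hostname
```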
&lt;br /&gt;
&lt;br /&gt;
=== Creating and running a job ===&lt;br /&gt;
&lt;br /&gt;
Slurm manages jobs; each job represents a program or task being executed.&lt;br /&gt;
&lt;br /&gt;
To submit a new job, create a script file describing the job's requirements and execution settings.&lt;br /&gt;
&lt;br /&gt;
The file format is shown below.&lt;br /&gt;
&lt;br /&gt;
Example: &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
&lt;br /&gt;
# Commands that run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the --qos option, use the partition name with the prefix &amp;quot;qos_&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
partition: short -&amp;gt; qos: qos_short -&amp;gt; 2-week time limit&lt;br /&gt;
&lt;br /&gt;
partition: long -&amp;gt; qos: qos_long -&amp;gt; 3-month time limit&lt;br /&gt;
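&lt;br /&gt;
The same pattern, sketched for the short partition (the time limit must stay within its 2-week cap; the QOS name is just the &amp;quot;qos_&amp;quot; prefix plus the partition name):&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH -n 1            # Number of CPU cores to allocate
#SBATCH -t 1-00:00      # 1 day, within the short partition's 2-week cap
#SBATCH -p short        # Partition (queue) to submit to
#SBATCH --qos qos_short # QOS: "qos_" prefix + partition name

./a.out
```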
  &lt;br /&gt;
&lt;br /&gt;
To run on a GPU, you must specify the queue and explicitly request the ''generic resource'' gpu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
# Commands that run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To request a specific GPU model, add a constraint line:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH --constraint=&amp;quot;gtx970&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To submit the job, run the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
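&lt;br /&gt;
A typical submit-and-check sequence looks like this (JOBID stands for the numeric id that sbatch reports; by default Slurm writes the job's stdout and stderr to slurm-JOBID.out in the directory you submitted from):&lt;br /&gt;

```shell
sbatch job.sh        # replies: Submitted batch job JOBID
squeue -u "$USER"    # confirm the job is pending (PD) or running (R)
cat slurm-JOBID.out  # default file holding the job's stdout and stderr
```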
&lt;br /&gt;
== Useful commands ==&lt;br /&gt;
* To list your jobs:&lt;br /&gt;
  squeue&lt;br /&gt;
&lt;br /&gt;
* To cancel a job:&lt;br /&gt;
  scancel [job_id]&lt;br /&gt;
&lt;br /&gt;
* To list the available partitions:&lt;br /&gt;
  sinfo&lt;br /&gt;
&lt;br /&gt;
* To list the GPUs present on the nodes:&lt;br /&gt;
  sinfo -o &amp;quot;%N %f&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* To list a summary of all nodes:&lt;br /&gt;
  sinfo -Nel&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1752</id>
		<title>Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1752"/>
		<updated>2017-04-24T15:38:10Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Cluster Ada - Instituto de Física UFRGS =&lt;br /&gt;
&lt;br /&gt;
The cluster is located at the UFRGS Institute of Physics (Instituto de Física), in Porto Alegre.&lt;br /&gt;
&lt;br /&gt;
== Infrastructure ==&lt;br /&gt;
&lt;br /&gt;
=== Management software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Slurm Workload Manager&lt;br /&gt;
&lt;br /&gt;
Site: https://slurm.schedmd.com/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node hardware ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CPU: x86_64&lt;br /&gt;
RAM: 4 GB to 8 GB, depending on the node&lt;br /&gt;
GPU: some nodes have NVIDIA CUDA-capable GPUs&lt;br /&gt;
Storage: networked storage with a 50 GB quota per user; the nodes have no local disk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
OS: Debian 8 (Jessie) x86_64&lt;br /&gt;
Installed packages:&lt;br /&gt;
 gcc&lt;br /&gt;
 docker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to use ==&lt;br /&gt;
&lt;br /&gt;
=== Connecting to cluster-slurm ===&lt;br /&gt;
&lt;br /&gt;
The cluster is reached through the cluster-slurm server. To access the server via SSH, use:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh usuario@cluster-slurm.if.ufrgs.br&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you do not have an account, or are not affiliated with the Institute of Physics, request one by emailing fisica-ti@ufrgs.br.&lt;br /&gt;
&lt;br /&gt;
=== Using software on the cluster ===&lt;br /&gt;
&lt;br /&gt;
To run a program as a job on the cluster, the program must either:&lt;br /&gt;
&lt;br /&gt;
1. Already be installed&lt;br /&gt;
&lt;br /&gt;
OR&lt;br /&gt;
&lt;br /&gt;
2. Be copied to your home directory&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp meu_executavel usuario@cluster-slurm.if.ufrgs.br:~/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you want to compile the program for use on the cluster, one option is &amp;lt;code&amp;gt;gcc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp -r source-code/ usuario@cluster-slurm.if.ufrgs.br:~/&lt;br /&gt;
ssh usuario@cluster-slurm.if.ufrgs.br&lt;br /&gt;
cd source-code&lt;br /&gt;
gcc main.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This produces a file &amp;lt;code&amp;gt;a.out&amp;lt;/code&amp;gt;, which is the executable.&lt;br /&gt;
&lt;br /&gt;
Once it is available via method 1 or 2, the program can be run on the cluster as a &amp;lt;strong&amp;gt;JOB&amp;lt;/strong&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note: if you run the program without submitting it as a &amp;lt;strong&amp;gt;JOB&amp;lt;/strong&amp;gt;, it will not run on the nodes; it runs only on the cluster-slurm server itself, which has very limited processing capacity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Creating and running a job ===&lt;br /&gt;
&lt;br /&gt;
Slurm manages jobs; each job represents a program or task being executed.&lt;br /&gt;
&lt;br /&gt;
To submit a new job, create a script file describing the job's requirements and execution settings.&lt;br /&gt;
&lt;br /&gt;
The file format is shown below.&lt;br /&gt;
&lt;br /&gt;
Example: &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
&lt;br /&gt;
# Commands that run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the --qos option, use the partition name with the prefix &amp;quot;qos_&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
partition: short -&amp;gt; qos: qos_short -&amp;gt; 2-week time limit&lt;br /&gt;
&lt;br /&gt;
partition: long -&amp;gt; qos: qos_long -&amp;gt; 3-month time limit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To run on a GPU, you must specify the queue and explicitly request the ''generic resource'' gpu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
# Commands that run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To request a specific GPU model, add a constraint line:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH --constraint=&amp;quot;gtx970&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To submit the job, run the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Useful commands ==&lt;br /&gt;
* To list your jobs:&lt;br /&gt;
  squeue&lt;br /&gt;
&lt;br /&gt;
* To cancel a job:&lt;br /&gt;
  scancel [job_id]&lt;br /&gt;
&lt;br /&gt;
* To list the available partitions:&lt;br /&gt;
  sinfo&lt;br /&gt;
&lt;br /&gt;
* To list the GPUs present on the nodes:&lt;br /&gt;
  sinfo -o &amp;quot;%N %f&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* To list a summary of all nodes:&lt;br /&gt;
  sinfo -Nel&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1751</id>
		<title>Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1751"/>
		<updated>2017-04-24T15:37:37Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Cluster Ada - Instituto de Física UFRGS =&lt;br /&gt;
&lt;br /&gt;
The cluster is located at the UFRGS Institute of Physics (Instituto de Física), in Porto Alegre.&lt;br /&gt;
&lt;br /&gt;
== Infrastructure ==&lt;br /&gt;
&lt;br /&gt;
=== Management software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Slurm Workload Manager&lt;br /&gt;
&lt;br /&gt;
Site: https://slurm.schedmd.com/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node hardware ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CPU: x86_64&lt;br /&gt;
RAM: 4 GB to 8 GB, depending on the node&lt;br /&gt;
GPU: some nodes have NVIDIA CUDA-capable GPUs&lt;br /&gt;
Storage: networked storage with a 50 GB quota per user; the nodes have no local disk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
OS: Debian 8 (Jessie) x86_64&lt;br /&gt;
Installed packages:&lt;br /&gt;
 gcc&lt;br /&gt;
 docker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to use ==&lt;br /&gt;
&lt;br /&gt;
=== Connecting to cluster-slurm ===&lt;br /&gt;
&lt;br /&gt;
The cluster is reached through the cluster-slurm server. To access the server via SSH, use:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh usuario@cluster-slurm.if.ufrgs.br&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you do not have an account, or are not affiliated with the Institute of Physics, request one by emailing fisica-ti@ufrgs.br.&lt;br /&gt;
&lt;br /&gt;
=== Using software on the cluster ===&lt;br /&gt;
&lt;br /&gt;
To run a program as a job on the cluster, the program must either:&lt;br /&gt;
&lt;br /&gt;
1. Already be installed&lt;br /&gt;
&lt;br /&gt;
OR&lt;br /&gt;
&lt;br /&gt;
2. Be copied to your home directory&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp meu_executavel usuario@cluster-slurm.if.ufrgs.br:~/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you want to compile the program for use on the cluster, one option is &amp;lt;code&amp;gt;gcc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp -r source-code/ usuario@cluster-slurm.if.ufrgs.br:~/&lt;br /&gt;
ssh usuario@cluster-slurm.if.ufrgs.br&lt;br /&gt;
cd source-code&lt;br /&gt;
gcc main.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This produces a file &amp;lt;code&amp;gt;a.out&amp;lt;/code&amp;gt;, which is the executable.&lt;br /&gt;
&lt;br /&gt;
Once it is available via method 1 or 2, the program can be run on the cluster as a &amp;lt;strong&amp;gt;JOB&amp;lt;/strong&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note: if you run the program without submitting it as a &amp;lt;strong&amp;gt;JOB&amp;lt;/strong&amp;gt;, it will not run on the nodes; it runs only on the cluster-slurm server itself, which has very limited processing capacity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Creating and running a job ===&lt;br /&gt;
&lt;br /&gt;
Slurm manages jobs; each job represents a program or task being executed.&lt;br /&gt;
&lt;br /&gt;
To submit a new job, create a script file describing the job's requirements and execution settings.&lt;br /&gt;
&lt;br /&gt;
The file format is shown below.&lt;br /&gt;
&lt;br /&gt;
Example: &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
&lt;br /&gt;
# Commands that run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the --qos option, use the partition name with the prefix &amp;quot;qos_&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
partition: short -&amp;gt; qos: qos_short -&amp;gt; 2-week time limit&lt;br /&gt;
&lt;br /&gt;
partition: long -&amp;gt; qos: qos_long -&amp;gt; 3-month time limit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To run on a GPU, you must specify the queue and explicitly request the ''generic resource'' gpu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
# Commands that run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To request a specific GPU model, add a constraint line:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH --constraint=&amp;quot;gtx970&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To submit the job, run the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Useful commands ==&lt;br /&gt;
* To list your jobs:&lt;br /&gt;
  squeue&lt;br /&gt;
&lt;br /&gt;
* To cancel a job:&lt;br /&gt;
  scancel [job_id]&lt;br /&gt;
&lt;br /&gt;
* To list the available partitions:&lt;br /&gt;
  sinfo&lt;br /&gt;
&lt;br /&gt;
* To list the GPUs present on the nodes:&lt;br /&gt;
  sinfo -o &amp;quot;%N %f&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* To list a summary of all nodes:&lt;br /&gt;
  sinfo -Nel&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1750</id>
		<title>Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1750"/>
		<updated>2017-04-24T15:35:18Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Cluster Ada - Instituto de Física UFRGS =&lt;br /&gt;
&lt;br /&gt;
The cluster is located at the UFRGS Institute of Physics (Instituto de Física), in Porto Alegre.&lt;br /&gt;
&lt;br /&gt;
== Infrastructure ==&lt;br /&gt;
&lt;br /&gt;
=== Management software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Slurm Workload Manager&lt;br /&gt;
&lt;br /&gt;
Site: https://slurm.schedmd.com/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node hardware ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CPU: x86_64&lt;br /&gt;
RAM: 4 GB to 8 GB, depending on the node&lt;br /&gt;
GPU: some nodes have NVIDIA CUDA-capable GPUs&lt;br /&gt;
Storage: networked storage with a 50 GB quota per user; the nodes have no local disk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
OS: Debian 8 (Jessie) x86_64&lt;br /&gt;
Installed packages:&lt;br /&gt;
 gcc&lt;br /&gt;
 docker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to use ==&lt;br /&gt;
&lt;br /&gt;
=== Connecting to cluster-slurm ===&lt;br /&gt;
&lt;br /&gt;
The cluster is reached through the cluster-slurm server. To access the server via SSH, use:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh usuario@cluster-slurm.if.ufrgs.br&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you do not have an account, or are not affiliated with the Institute of Physics, request one by emailing fisica-ti@ufrgs.br.&lt;br /&gt;
&lt;br /&gt;
=== Using software on the cluster ===&lt;br /&gt;
&lt;br /&gt;
To run a program as a job on the cluster, the program must either:&lt;br /&gt;
&lt;br /&gt;
1. Already be installed&lt;br /&gt;
&lt;br /&gt;
OR&lt;br /&gt;
&lt;br /&gt;
2. Be copied to your home directory&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp meu_executavel usuario@cluster-slurm.if.ufrgs.br:~/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you want to compile the program for use on the cluster, one option is &amp;lt;code&amp;gt;gcc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp -r source-code/ usuario@cluster-slurm.if.ufrgs.br:~/&lt;br /&gt;
ssh usuario@cluster-slurm.if.ufrgs.br&lt;br /&gt;
cd source-code&lt;br /&gt;
gcc main.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This produces a file &amp;lt;code&amp;gt;a.out&amp;lt;/code&amp;gt;, which is the executable.&lt;br /&gt;
&lt;br /&gt;
Once it is available via method 1 or 2, the program can be run on the cluster as a &amp;lt;strong&amp;gt;JOB&amp;lt;/strong&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note: if you run the program without submitting it as a &amp;lt;strong&amp;gt;JOB&amp;lt;/strong&amp;gt;, it will not run on the nodes; it runs only on the cluster-slurm server itself, which has very limited processing capacity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Creating and running a job ===&lt;br /&gt;
&lt;br /&gt;
Slurm manages jobs; each job represents a program or task being executed.&lt;br /&gt;
&lt;br /&gt;
To submit a new job, create a script file describing the job's requirements and execution settings.&lt;br /&gt;
&lt;br /&gt;
The file format is shown below.&lt;br /&gt;
&lt;br /&gt;
Example: &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
&lt;br /&gt;
# Commands that run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the --qos option, use the partition name with the prefix &amp;quot;qos_&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
partition: short -&amp;gt; qos: qos_short -&amp;gt; 2-week time limit&lt;br /&gt;
&lt;br /&gt;
partition: long -&amp;gt; qos: qos_long -&amp;gt; 3-month time limit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To run on a GPU, you must specify the queue and explicitly request the ''generic resource'' gpu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
# Commands that run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To request a specific GPU model, add a constraint line:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH --constraint=&amp;quot;gtx970&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To submit the job, run the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Useful commands ==&lt;br /&gt;
* To list jobs:&lt;br /&gt;
  squeue&lt;br /&gt;
&lt;br /&gt;
* To cancel a job:&lt;br /&gt;
  scancel [job_id]&lt;br /&gt;
&lt;br /&gt;
* To list the available partitions:&lt;br /&gt;
  sinfo&lt;br /&gt;
&lt;br /&gt;
* To list the GPUs present on the nodes:&lt;br /&gt;
  sinfo -o &amp;quot;%N %f&amp;quot;&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1749</id>
		<title>Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1749"/>
		<updated>2017-04-24T15:27:03Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Cluster Ada - Instituto de Física UFRGS =&lt;br /&gt;
&lt;br /&gt;
The cluster is located at the UFRGS Institute of Physics (Instituto de Física), in Porto Alegre.&lt;br /&gt;
&lt;br /&gt;
== Infrastructure ==&lt;br /&gt;
&lt;br /&gt;
=== Management software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Slurm Workload Manager&lt;br /&gt;
&lt;br /&gt;
Site: https://slurm.schedmd.com/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node hardware ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CPU: x86_64&lt;br /&gt;
RAM: 4 GB to 8 GB, depending on the node&lt;br /&gt;
GPU: some nodes have NVIDIA CUDA-capable GPUs&lt;br /&gt;
Storage: networked storage with a 50 GB quota per user; the nodes have no local disk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
OS: Debian 8 (Jessie) x86_64&lt;br /&gt;
Installed packages:&lt;br /&gt;
 gcc&lt;br /&gt;
 docker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to use ==&lt;br /&gt;
&lt;br /&gt;
=== Connecting to cluster-slurm ===&lt;br /&gt;
&lt;br /&gt;
The cluster is reached through the cluster-slurm server. To access the server via SSH, use:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh usuario@cluster-slurm.if.ufrgs.br&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you do not have an account, or are not affiliated with the Institute of Physics, request one by emailing fisica-ti@ufrgs.br.&lt;br /&gt;
&lt;br /&gt;
=== Using software on the cluster ===&lt;br /&gt;
&lt;br /&gt;
To run a program as a job on the cluster, the program must either:&lt;br /&gt;
&lt;br /&gt;
1. Already be installed&lt;br /&gt;
&lt;br /&gt;
OR&lt;br /&gt;
&lt;br /&gt;
2. Be copied to your home directory&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp meu_executavel usuario@cluster-slurm.if.ufrgs.br:~/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once it is available via method 1 or 2, the program can be run on the cluster as a &amp;lt;strong&amp;gt;JOB&amp;lt;/strong&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note: if you run the program without submitting it as a &amp;lt;strong&amp;gt;JOB&amp;lt;/strong&amp;gt;, it will not run on the nodes; it runs only on the cluster-slurm server itself, which has very limited processing capacity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Creating and running a job ===&lt;br /&gt;
&lt;br /&gt;
Slurm manages jobs; each job represents a program or task being executed.&lt;br /&gt;
&lt;br /&gt;
To submit a new job, create a script file describing the job's requirements and execution settings.&lt;br /&gt;
&lt;br /&gt;
The file format is shown below.&lt;br /&gt;
&lt;br /&gt;
Example: &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
&lt;br /&gt;
# Commands that run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the --qos option, use the partition name with the prefix &amp;quot;qos_&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
partition: short -&amp;gt; qos: qos_short -&amp;gt; 2-week time limit&lt;br /&gt;
&lt;br /&gt;
partition: long -&amp;gt; qos: qos_long -&amp;gt; 3-month time limit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To run on a GPU, you must specify the queue and explicitly request the ''generic resource'' gpu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
# Commands that run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To request a specific GPU model, add a constraint line:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH --constraint=&amp;quot;gtx970&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To submit the job, run the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Useful commands ==&lt;br /&gt;
* To list jobs:&lt;br /&gt;
  squeue&lt;br /&gt;
&lt;br /&gt;
* To cancel a job:&lt;br /&gt;
  scancel [job_id]&lt;br /&gt;
&lt;br /&gt;
* To list the available partitions:&lt;br /&gt;
  sinfo&lt;br /&gt;
&lt;br /&gt;
* To list the GPUs present on the nodes:&lt;br /&gt;
  sinfo -o &amp;quot;%N %f&amp;quot;&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1748</id>
		<title>Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1748"/>
		<updated>2017-04-24T15:24:25Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Cluster Ada - Instituto de Física UFRGS =&lt;br /&gt;
&lt;br /&gt;
O Cluster está localizado no Instituto de Física da UFRGS, em Porto Alegre.&lt;br /&gt;
&lt;br /&gt;
== Infraestrutura ==&lt;br /&gt;
&lt;br /&gt;
=== Software de gerenciamento ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Slurm Workload Manager&lt;br /&gt;
&lt;br /&gt;
Site :https://slurm.schedmd.com/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Hardware dos nodes ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CPU: x86_64&lt;br /&gt;
RAM: varia entre 4 GB - 8 GB&lt;br /&gt;
GPU: alguns nodes possuem NVIDIA CUDA&lt;br /&gt;
Storage: storage em rede com quota de 50 GB por usuário, os nodes não possuem HD local &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Software nos nodes ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
OS: Debian 8 (Jessie) x86_64&lt;br /&gt;
Pacotes instalados:&lt;br /&gt;
 gcc&lt;br /&gt;
 docker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Como utilizar ==&lt;br /&gt;
&lt;br /&gt;
=== Conectar-se ao cluster-slurm ===&lt;br /&gt;
&lt;br /&gt;
O cluster é acessível através do server cluster-slurm. Para acessar o server via SSH, use:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh usuario@cluster-slurm.if.ufrgs.br&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Caso você não tenha cadastro ou não é vinculado ao Instituto de Física, solicite o cadastro enviando um email para fisica-ti@ufrgs.br.&lt;br /&gt;
&lt;br /&gt;
=== Utilizando softwares no Cluster ===&lt;br /&gt;
&lt;br /&gt;
Para que seja possível executar um programa em um job no cluster, o programa deve:&lt;br /&gt;
&lt;br /&gt;
1. Já estar instalado&lt;br /&gt;
&lt;br /&gt;
ou&lt;br /&gt;
&lt;br /&gt;
2. Ser copiado para sua home (pasta do seu usuário)&lt;br /&gt;
&lt;br /&gt;
Ex:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp meu_executavel usuario@cluster-slurm.if.ufrgs.br:~/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Estando acessível pelo método 1 ou 2, o programa pode ser executado no Cluster através de um &amp;lt;strong&amp;gt;JOB&amp;lt;/strong&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
OBS: Caso você execute o programa sem submetê-lo como &amp;lt;strong&amp;gt;JOB&amp;lt;/strong&amp;gt;, ele não será executado nos nodes, e sim apenas no próprio server (cluster-slurm), que possui capacidades bem limitadas de processamento.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Criando e executando um Job ===&lt;br /&gt;
&lt;br /&gt;
Slurm manages jobs; each job represents a program or task being executed.&lt;br /&gt;
&lt;br /&gt;
To submit a new job, create a script file describing the job's requirements and execution settings.&lt;br /&gt;
&lt;br /&gt;
The file format is shown below.&lt;br /&gt;
&lt;br /&gt;
Example: &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash &lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
  &lt;br /&gt;
# Commands to run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the --qos option, use the partition name with the &amp;quot;qos_&amp;quot; prefix:&lt;br /&gt;
&lt;br /&gt;
partition: short -&amp;gt; qos: qos_short -&amp;gt; 2-week limit&lt;br /&gt;
&lt;br /&gt;
partition: long -&amp;gt; qos: qos_long -&amp;gt; 3-month limit&lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
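The partition-to-QOS naming rule above is purely mechanical, which a short sketch can make concrete (the `qos_for` helper is a made-up name for illustration; only the partition names short and long come from this page):

```shell
# Hypothetical helper: the QOS name is just the partition
# name with the "qos_" prefix prepended.
qos_for() {
    echo "qos_$1"
}

qos_for short   # prints qos_short
qos_for long    # prints qos_long
```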
If you want to run on a GPU, you must specify the queue and explicitly request the ''generic resource'' gpu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash &lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
  &lt;br /&gt;
# Commands to run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To request a specific GPU, use a constraint by adding the line:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH --constraint=&amp;quot;gtx970&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To submit the job, run the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
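Putting the pieces together, the whole cycle is: write the script, submit it, then watch the queue. The sketch below reuses the exact values from the example above (`a.out`, the 5-minute limit); the `sbatch` and `squeue` calls are left commented out so the sketch can be tried without a Slurm installation:

```shell
# Write the job script from the example above into job.sh.
cat > job.sh <<'EOF'
#!/bin/bash
#SBATCH -n 1               # CPU cores to allocate
#SBATCH -N 1               # nodes to allocate
#SBATCH -t 0-00:05         # time limit (D-HH:MM)
#SBATCH -p long            # partition (queue)
#SBATCH --qos qos_long     # QOS: partition name with the "qos_" prefix

./a.out
EOF

# On the cluster you would then submit and monitor the job:
# sbatch job.sh
# squeue -u "$USER"

# Sanity-check the script: it should carry five #SBATCH directives.
grep -c '^#SBATCH' job.sh   # prints 5
```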
&lt;br /&gt;
== Useful commands ==&lt;br /&gt;
* To list jobs:&lt;br /&gt;
  squeue&lt;br /&gt;
&lt;br /&gt;
* To cancel a job:&lt;br /&gt;
  scancel&lt;br /&gt;
&lt;br /&gt;
* To list available partitions:&lt;br /&gt;
  sinfo&lt;br /&gt;
&lt;br /&gt;
* To list the GPUs present on the nodes:&lt;br /&gt;
  sinfo -o &amp;quot;%N %f&amp;quot;&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1747</id>
		<title>Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1747"/>
		<updated>2017-04-24T15:23:48Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Cluster Ada - Instituto de Física UFRGS =&lt;br /&gt;
&lt;br /&gt;
The cluster is located at the Instituto de Física of UFRGS, in Porto Alegre.&lt;br /&gt;
&lt;br /&gt;
== Infrastructure ==&lt;br /&gt;
&lt;br /&gt;
=== Management software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Slurm Workload Manager&lt;br /&gt;
&lt;br /&gt;
Site: https://slurm.schedmd.com/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node hardware ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CPU: x86_64&lt;br /&gt;
RAM: ranges from 4 GB to 16 GB&lt;br /&gt;
GPU: some nodes have NVIDIA CUDA&lt;br /&gt;
Storage: network storage with a 50 GB quota per user; the nodes have no local disk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
OS: Debian 8 (Jessie) x86_64&lt;br /&gt;
Installed packages:&lt;br /&gt;
 gcc&lt;br /&gt;
 docker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to use ==&lt;br /&gt;
&lt;br /&gt;
=== Connecting to cluster-slurm ===&lt;br /&gt;
&lt;br /&gt;
The cluster is reached through the cluster-slurm server. To access it via SSH, use:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh usuario@cluster-slurm.if.ufrgs.br&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you do not have an account or are not affiliated with the Instituto de Física, request one by emailing fisica-ti@ufrgs.br.&lt;br /&gt;
&lt;br /&gt;
=== Running software on the cluster ===&lt;br /&gt;
&lt;br /&gt;
To run a program as a job on the cluster, the program must either:&lt;br /&gt;
&lt;br /&gt;
1. Already be installed&lt;br /&gt;
&lt;br /&gt;
or&lt;br /&gt;
&lt;br /&gt;
2. Be copied to your home directory&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp meu_executavel usuario@cluster-slurm.if.ufrgs.br:~/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once available through method 1 or 2, the program can be run on the cluster as a &amp;lt;strong&amp;gt;JOB&amp;lt;/strong&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note: if you run the program without submitting it as a &amp;lt;strong&amp;gt;JOB&amp;lt;/strong&amp;gt;, it will not run on the nodes but only on the server itself (cluster-slurm), which has very limited processing capacity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Creating and running a job ===&lt;br /&gt;
&lt;br /&gt;
Slurm manages jobs; each job represents a program or task being executed.&lt;br /&gt;
&lt;br /&gt;
To submit a new job, create a script file describing the job's requirements and execution settings.&lt;br /&gt;
&lt;br /&gt;
The file format is shown below.&lt;br /&gt;
&lt;br /&gt;
Example: &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
&lt;br /&gt;
# Commands to run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the --qos option, use the partition name with the &amp;quot;qos_&amp;quot; prefix:&lt;br /&gt;
&lt;br /&gt;
partition: short -&amp;gt; qos: qos_short -&amp;gt; 2-week limit&lt;br /&gt;
&lt;br /&gt;
partition: long -&amp;gt; qos: qos_long -&amp;gt; 3-month limit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you want to run on a GPU, you must specify the queue and explicitly request the ''generic resource'' gpu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
# Commands to run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To request a specific GPU, use a constraint by adding the line:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH --constraint=&amp;quot;gtx970&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To submit the job, run the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Useful commands ==&lt;br /&gt;
* To list jobs:&lt;br /&gt;
  squeue&lt;br /&gt;
&lt;br /&gt;
* To cancel a job:&lt;br /&gt;
  scancel&lt;br /&gt;
&lt;br /&gt;
* To list available partitions:&lt;br /&gt;
  sinfo&lt;br /&gt;
&lt;br /&gt;
* To list the GPUs present on the nodes:&lt;br /&gt;
  sinfo -o &amp;quot;%N %f&amp;quot;&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1746</id>
		<title>Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Cluster&amp;diff=1746"/>
		<updated>2017-04-24T15:23:11Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Cluster Ada - Instituto de Física UFRGS =&lt;br /&gt;
&lt;br /&gt;
The cluster is located at the Instituto de Física of UFRGS, in Porto Alegre.&lt;br /&gt;
&lt;br /&gt;
== Infrastructure ==&lt;br /&gt;
&lt;br /&gt;
=== Management software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Slurm Workload Manager&lt;br /&gt;
&lt;br /&gt;
Site: https://slurm.schedmd.com/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node hardware ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CPU: x86_64&lt;br /&gt;
RAM: ranges from 4 GB to 16 GB&lt;br /&gt;
GPU: some nodes have NVIDIA CUDA&lt;br /&gt;
Storage: network storage with a 50 GB quota per user; the nodes have no local disk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Node software ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
OS: Debian 8 (Jessie) x86_64&lt;br /&gt;
Installed packages:&lt;br /&gt;
 gcc&lt;br /&gt;
 docker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to use ==&lt;br /&gt;
&lt;br /&gt;
=== Connecting to cluster-slurm ===&lt;br /&gt;
&lt;br /&gt;
The cluster is reached through the cluster-slurm server. To access it via SSH, use:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh usuario@cluster-slurm.if.ufrgs.br&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you do not have an account or are not affiliated with the Instituto de Física, request one by emailing fisica-ti@ufrgs.br.&lt;br /&gt;
&lt;br /&gt;
=== Running software on the cluster ===&lt;br /&gt;
&lt;br /&gt;
To run a program as a job on the cluster, the program must either:&lt;br /&gt;
&lt;br /&gt;
1. Already be installed&lt;br /&gt;
&lt;br /&gt;
or&lt;br /&gt;
&lt;br /&gt;
2. Be copied to your home directory&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp meu_executavel usuario@cluster-slurm.if.ufrgs.br:~/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once available through method 1 or 2, the program can be run on the cluster as a &amp;lt;strong&amp;gt;JOB&amp;lt;/strong&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note: if you run the program without submitting it as a &amp;lt;strong&amp;gt;JOB&amp;lt;/strong&amp;gt;, it will not run on the nodes but only on the server itself (cluster-slurm), which has very limited processing capacity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Creating a job ===&lt;br /&gt;
&lt;br /&gt;
Slurm manages jobs; each job represents a program or task being executed.&lt;br /&gt;
&lt;br /&gt;
To submit a new job, create a script file describing the job's requirements and execution settings.&lt;br /&gt;
&lt;br /&gt;
The file format is shown below.&lt;br /&gt;
&lt;br /&gt;
Example: &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
&lt;br /&gt;
# Commands to run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the --qos option, use the partition name with the &amp;quot;qos_&amp;quot; prefix:&lt;br /&gt;
&lt;br /&gt;
partition: short -&amp;gt; qos: qos_short -&amp;gt; 2-week limit&lt;br /&gt;
&lt;br /&gt;
partition: long -&amp;gt; qos: qos_long -&amp;gt; 3-month limit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you want to run on a GPU, you must specify the queue and explicitly request the ''generic resource'' gpu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -n 1 # Number of CPU cores to allocate&lt;br /&gt;
#SBATCH -N 1 # Number of nodes to allocate&lt;br /&gt;
#SBATCH -t 0-00:05 # Execution time limit (D-HH:MM)&lt;br /&gt;
#SBATCH -p long # Partition (queue) to submit to&lt;br /&gt;
#SBATCH --qos qos_long # QOS&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
# Commands to run your program:&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To request a specific GPU, use a constraint by adding the line:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH --constraint=&amp;quot;gtx970&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To submit the job, run the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Useful commands ==&lt;br /&gt;
* To list jobs:&lt;br /&gt;
  squeue&lt;br /&gt;
&lt;br /&gt;
* To cancel a job:&lt;br /&gt;
  scancel&lt;br /&gt;
&lt;br /&gt;
* To list available partitions:&lt;br /&gt;
  sinfo&lt;br /&gt;
&lt;br /&gt;
* To list the GPUs present on the nodes:&lt;br /&gt;
  sinfo -o &amp;quot;%N %f&amp;quot;&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Clusters&amp;diff=1581</id>
		<title>Clusters</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Clusters&amp;diff=1581"/>
		<updated>2016-07-27T13:23:11Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Current clusters:&lt;br /&gt;
&lt;br /&gt;
* Pcapg&lt;br /&gt;
* [http://ada.if.ufrgs.br Ada]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
*[[cluster_comandos|Commands]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
New cluster (Slurm):&lt;br /&gt;
&lt;br /&gt;
*[[slurm_comandos|Slurm commands]]&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Instru%C3%A7%C3%B5es_para_instalar_o_LDAP_Client_(login_pelo_LDAP)&amp;diff=1527</id>
		<title>Instruções para instalar o LDAP Client (login pelo LDAP)</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Instru%C3%A7%C3%B5es_para_instalar_o_LDAP_Client_(login_pelo_LDAP)&amp;diff=1527"/>
		<updated>2016-06-14T14:03:43Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;LDAP client installation (option 1):&lt;br /&gt;
&lt;br /&gt;
   1)&lt;br /&gt;
   # apt-get install libpam-ldapd libnss-ldapd&lt;br /&gt;
   -&amp;gt; If you want to reuse another PC's configuration, use the default settings and skip to step 3.&lt;br /&gt;
   &lt;br /&gt;
   -&amp;gt; Configure whatever the installer asks for.&lt;br /&gt;
      &lt;br /&gt;
   2)&lt;br /&gt;
   Edit /etc/nslcd.conf and change the bind data.&lt;br /&gt;
   &lt;br /&gt;
   3) To copy the configuration from another PC, just take the following files from an already-configured machine and copy them to the new one:&lt;br /&gt;
     - /etc/nsswitch.conf&lt;br /&gt;
     - /etc/nslcd.conf&lt;br /&gt;
      &lt;br /&gt;
   4)&lt;br /&gt;
   # /etc/init.d/nslcd restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
LDAP client installation (option 2):&lt;br /&gt;
&lt;br /&gt;
   # apt-get install libnss-ldap libpam-ldap nscd&lt;br /&gt;
   The installer may ask for the following settings:&lt;br /&gt;
   &lt;br /&gt;
   LDAP server URI: ldap://[server address]&lt;br /&gt;
   Distinguished name of the search base: [search base (ex: ou=company,dc=com)]&lt;br /&gt;
   LDAP version to use: 3&lt;br /&gt;
   Does the LDAP database require login? Yes&lt;br /&gt;
   Special LDAP privileges for root? No&lt;br /&gt;
   Make the configuration file readable/writeable by its owner only? Yes&lt;br /&gt;
   Allow LDAP admin account to behave like local root? No&lt;br /&gt;
   Unprivileged database user: [login with read-only permissions]&lt;br /&gt;
   Unprivileged database user password: [password for the login above]&lt;br /&gt;
   &lt;br /&gt;
   IMPORTANT:&lt;br /&gt;
   -&amp;gt; Edit /etc/nsswitch.conf and add &amp;quot;ldap&amp;quot; to the &amp;quot;passwd&amp;quot;, &amp;quot;group&amp;quot;, and &amp;quot;shadow&amp;quot; fields&lt;br /&gt;
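That nsswitch.conf edit can be sketched as below, run against a throwaway copy so nothing real is touched (the sample field values mirror a stock Debian nsswitch.conf; adapt the pattern to your system and run it on the real /etc/nsswitch.conf only once you are sure):

```shell
# Work on a throwaway copy instead of the real /etc/nsswitch.conf.
cat > nsswitch.sample <<'EOF'
passwd:         compat
group:          compat
shadow:         compat
hosts:          files dns
EOF

# Append "ldap" to the passwd, group and shadow lines only.
sed -i -E 's/^(passwd|group|shadow):(.*)$/\1:\2 ldap/' nsswitch.sample

grep '^passwd' nsswitch.sample
```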
   &lt;br /&gt;
   # dpkg-reconfigure libnss-ldap // reconfigure libnss-ldap with the settings above&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Instru%C3%A7%C3%B5es_para_instalar_o_LDAP_Client_(login_pelo_LDAP)&amp;diff=1514</id>
		<title>Instruções para instalar o LDAP Client (login pelo LDAP)</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Instru%C3%A7%C3%B5es_para_instalar_o_LDAP_Client_(login_pelo_LDAP)&amp;diff=1514"/>
		<updated>2016-06-01T20:31:29Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;LDAP client installation (option 1):&lt;br /&gt;
&lt;br /&gt;
   1)&lt;br /&gt;
   # apt-get install libpam-ldapd libnss-ldapd&lt;br /&gt;
   Note: if you want to reuse another PC's configuration, skip the settings shown during installation and go to step 3.&lt;br /&gt;
   &lt;br /&gt;
   -&amp;gt; Configure whatever the installer asks for.&lt;br /&gt;
      &lt;br /&gt;
   2)&lt;br /&gt;
   Edit /etc/nslcd.conf and change the bind data.&lt;br /&gt;
   &lt;br /&gt;
   3) To copy the configuration from another PC, just take the following files from an already-configured machine and copy them to the new one:&lt;br /&gt;
     - /etc/nsswitch.conf&lt;br /&gt;
     - /etc/nslcd.conf&lt;br /&gt;
      &lt;br /&gt;
   4)&lt;br /&gt;
   # /etc/init.d/nslcd restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
LDAP client installation (option 2):&lt;br /&gt;
&lt;br /&gt;
   # apt-get install libnss-ldap libpam-ldap nscd&lt;br /&gt;
   The installer may ask for the following settings:&lt;br /&gt;
   &lt;br /&gt;
   LDAP server URI: ldap://[server address]&lt;br /&gt;
   Distinguished name of the search base: [search base (ex: ou=company,dc=com)]&lt;br /&gt;
   LDAP version to use: 3&lt;br /&gt;
   Does the LDAP database require login? Yes&lt;br /&gt;
   Special LDAP privileges for root? No&lt;br /&gt;
   Make the configuration file readable/writeable by its owner only? Yes&lt;br /&gt;
   Allow LDAP admin account to behave like local root? No&lt;br /&gt;
   Unprivileged database user: [login with read-only permissions]&lt;br /&gt;
   Unprivileged database user password: [password for the login above]&lt;br /&gt;
   &lt;br /&gt;
   IMPORTANT:&lt;br /&gt;
   -&amp;gt; Edit /etc/nsswitch.conf and add &amp;quot;ldap&amp;quot; to the &amp;quot;passwd&amp;quot;, &amp;quot;group&amp;quot;, and &amp;quot;shadow&amp;quot; fields&lt;br /&gt;
   &lt;br /&gt;
   # dpkg-reconfigure libnss-ldap // reconfigure libnss-ldap with the settings above&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Instru%C3%A7%C3%B5es_para_instalar_o_LDAP_Client_(login_pelo_LDAP)&amp;diff=1513</id>
		<title>Instruções para instalar o LDAP Client (login pelo LDAP)</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Instru%C3%A7%C3%B5es_para_instalar_o_LDAP_Client_(login_pelo_LDAP)&amp;diff=1513"/>
		<updated>2016-06-01T20:31:16Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;LDAP client installation (option 1):&lt;br /&gt;
&lt;br /&gt;
   1)&lt;br /&gt;
   # apt-get install libpam-ldapd libnss-ldapd&lt;br /&gt;
   Note: if you want to reuse another PC's configuration, skip the settings shown during installation and go to step 3.&lt;br /&gt;
   &lt;br /&gt;
   -&amp;gt; Configure whatever the installer asks for.&lt;br /&gt;
      &lt;br /&gt;
   2)&lt;br /&gt;
   Edit /etc/nslcd.conf and change the bind data.&lt;br /&gt;
   &lt;br /&gt;
   3) To copy the configuration from another PC, just take the following files from an already-configured machine and copy them to the new one:&lt;br /&gt;
     - /etc/nsswitch.conf&lt;br /&gt;
     - /etc/nslcd.conf&lt;br /&gt;
      &lt;br /&gt;
   4)&lt;br /&gt;
    # /etc/init.d/nslcd restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
LDAP client installation (option 2):&lt;br /&gt;
&lt;br /&gt;
   # apt-get install libnss-ldap libpam-ldap nscd&lt;br /&gt;
   The installer may ask for the following settings:&lt;br /&gt;
   &lt;br /&gt;
   LDAP server URI: ldap://[server address]&lt;br /&gt;
   Distinguished name of the search base: [search base (ex: ou=company,dc=com)]&lt;br /&gt;
   LDAP version to use: 3&lt;br /&gt;
   Does the LDAP database require login? Yes&lt;br /&gt;
   Special LDAP privileges for root? No&lt;br /&gt;
   Make the configuration file readable/writeable by its owner only? Yes&lt;br /&gt;
   Allow LDAP admin account to behave like local root? No&lt;br /&gt;
   Unprivileged database user: [login with read-only permissions]&lt;br /&gt;
   Unprivileged database user password: [password for the login above]&lt;br /&gt;
   &lt;br /&gt;
   IMPORTANT:&lt;br /&gt;
   -&amp;gt; Edit /etc/nsswitch.conf and add &amp;quot;ldap&amp;quot; to the &amp;quot;passwd&amp;quot;, &amp;quot;group&amp;quot;, and &amp;quot;shadow&amp;quot; fields&lt;br /&gt;
   &lt;br /&gt;
   # dpkg-reconfigure libnss-ldap // reconfigure libnss-ldap with the settings above&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Instru%C3%A7%C3%B5es_para_instalar_o_LDAP_Client_(login_pelo_LDAP)&amp;diff=1512</id>
		<title>Instruções para instalar o LDAP Client (login pelo LDAP)</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Instru%C3%A7%C3%B5es_para_instalar_o_LDAP_Client_(login_pelo_LDAP)&amp;diff=1512"/>
		<updated>2016-06-01T20:30:51Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;LDAP client installation (option 1):&lt;br /&gt;
&lt;br /&gt;
   1)&lt;br /&gt;
   # apt-get install libpam-ldapd libnss-ldapd&lt;br /&gt;
   Note: if you want to reuse another PC's configuration, skip the settings shown during installation and go to step 3.&lt;br /&gt;
   &lt;br /&gt;
   -&amp;gt; Configure whatever the installer asks for.&lt;br /&gt;
      &lt;br /&gt;
   2)&lt;br /&gt;
   Edit /etc/nslcd.conf and change the bind data.&lt;br /&gt;
   &lt;br /&gt;
   3) To copy the configuration from another PC, just take the following files from an already-configured machine and copy them to the new one:&lt;br /&gt;
     - /etc/nsswitch.conf&lt;br /&gt;
     - /etc/nslcd.conf&lt;br /&gt;
      &lt;br /&gt;
    # /etc/init.d/nslcd restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
LDAP client installation (option 2):&lt;br /&gt;
&lt;br /&gt;
   # apt-get install libnss-ldap libpam-ldap nscd&lt;br /&gt;
   The installer may ask for the following settings:&lt;br /&gt;
   &lt;br /&gt;
   LDAP server URI: ldap://[server address]&lt;br /&gt;
   Distinguished name of the search base: [search base (ex: ou=company,dc=com)]&lt;br /&gt;
   LDAP version to use: 3&lt;br /&gt;
   Does the LDAP database require login? Yes&lt;br /&gt;
   Special LDAP privileges for root? No&lt;br /&gt;
   Make the configuration file readable/writeable by its owner only? Yes&lt;br /&gt;
   Allow LDAP admin account to behave like local root? No&lt;br /&gt;
   Unprivileged database user: [login with read-only permissions]&lt;br /&gt;
   Unprivileged database user password: [password for the login above]&lt;br /&gt;
   &lt;br /&gt;
   IMPORTANT:&lt;br /&gt;
   -&amp;gt; Edit /etc/nsswitch.conf and add &amp;quot;ldap&amp;quot; to the &amp;quot;passwd&amp;quot;, &amp;quot;group&amp;quot;, and &amp;quot;shadow&amp;quot; fields&lt;br /&gt;
   &lt;br /&gt;
   # dpkg-reconfigure libnss-ldap // reconfigure libnss-ldap with the settings above&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Port_forwarding_LDAP&amp;diff=1511</id>
		<title>Port forwarding LDAP</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Port_forwarding_LDAP&amp;diff=1511"/>
		<updated>2016-05-31T16:16:20Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Debian 8'''&lt;br /&gt;
&lt;br /&gt;
Assuming:&lt;br /&gt;
&lt;br /&gt;
 eth0: network to be redirected (doesn't have a direct connection to the LDAP server)&lt;br /&gt;
 [ldap_server]: IP address of the LDAP server&lt;br /&gt;
 389: LDAP authentication port&lt;br /&gt;
&lt;br /&gt;
In the host (let's call it &amp;quot;Master host&amp;quot;) that has access to both networks (eth0 and the LDAP's network), you can apply:&lt;br /&gt;
&lt;br /&gt;
    # iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 389 -j DNAT --to [ldap_server]:389&lt;br /&gt;
    # iptables -A FORWARD -p tcp -d [ldap_server] --dport 389 -j ACCEPT&lt;br /&gt;
    # iptables -t nat -A POSTROUTING -d [ldap_server] -j MASQUERADE&lt;br /&gt;
    # echo &amp;quot;1&amp;quot; &amp;gt; /proc/sys/net/ipv4/ip_forward&lt;br /&gt;
&lt;br /&gt;
To save (make permanent) the settings:&lt;br /&gt;
    # iptables-save &amp;gt; /etc/iptables.up.rules&lt;br /&gt;
''Add these 2 lines to /etc/network/if-pre-up.d/iptables'':&lt;br /&gt;
     #!/bin/sh&lt;br /&gt;
     /sbin/iptables-restore &amp;lt; /etc/iptables.up.rules&lt;br /&gt;
''Add this in /etc/rc.local (before the exit 0)'':&lt;br /&gt;
     echo &amp;quot;1&amp;quot; &amp;gt; /proc/sys/net/ipv4/ip_forward&lt;br /&gt;
&lt;br /&gt;
All hosts in eth0's subnet will have to use the Master's IP address instead of the LDAP server address.&lt;br /&gt;
So when you want to authenticate, you use your Master and your Master forwards the connection to the LDAP server.&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Port_forwarding_LDAP&amp;diff=1510</id>
		<title>Port forwarding LDAP</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Port_forwarding_LDAP&amp;diff=1510"/>
		<updated>2016-05-31T16:16:11Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Debian 8'''&lt;br /&gt;
&lt;br /&gt;
Assuming:&lt;br /&gt;
&lt;br /&gt;
eth0: network to be redirected (doesn't have a direct connection to the LDAP server)&lt;br /&gt;
[ldap_server]: IP address of the LDAP server&lt;br /&gt;
389: LDAP authentication port&lt;br /&gt;
&lt;br /&gt;
In the host (let's call it &amp;quot;Master host&amp;quot;) that has access to both networks (eth0 and the LDAP's network), you can apply:&lt;br /&gt;
&lt;br /&gt;
    # iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 389 -j DNAT --to [ldap_server]:389&lt;br /&gt;
    # iptables -A FORWARD -p tcp -d [ldap_server] --dport 389 -j ACCEPT&lt;br /&gt;
    # iptables -t nat -A POSTROUTING -d [ldap_server] -j MASQUERADE&lt;br /&gt;
    # echo &amp;quot;1&amp;quot; &amp;gt; /proc/sys/net/ipv4/ip_forward&lt;br /&gt;
&lt;br /&gt;
To save (make permanent) the settings:&lt;br /&gt;
    # iptables-save &amp;gt; /etc/iptables.up.rules&lt;br /&gt;
''Add these 2 lines to /etc/network/if-pre-up.d/iptables'':&lt;br /&gt;
     #!/bin/sh&lt;br /&gt;
     /sbin/iptables-restore &amp;lt; /etc/iptables.up.rules&lt;br /&gt;
''Add this in /etc/rc.local (before the exit 0)'':&lt;br /&gt;
     echo &amp;quot;1&amp;quot; &amp;gt; /proc/sys/net/ipv4/ip_forward&lt;br /&gt;
&lt;br /&gt;
All hosts in eth0's subnet will have to use the Master's IP address instead of the LDAP server address.&lt;br /&gt;
So when you want to authenticate, you use your Master and your Master forwards the connection to the LDAP server.&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Port_forwarding_LDAP&amp;diff=1509</id>
		<title>Port forwarding LDAP</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Port_forwarding_LDAP&amp;diff=1509"/>
		<updated>2016-05-31T16:15:53Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Debian 8'''&lt;br /&gt;
&lt;br /&gt;
Assuming:&lt;br /&gt;
&lt;br /&gt;
 eth0: network to be redirected (doesn't have a direct connection to the LDAP server)&lt;br /&gt;
 [ldap_server]: IP address of the LDAP server&lt;br /&gt;
 389: LDAP authentication port&lt;br /&gt;
&lt;br /&gt;
In the host (let's call it &amp;quot;Master host&amp;quot;) that has access to both networks (eth0 and the LDAP's network), you can apply:&lt;br /&gt;
&lt;br /&gt;
    # iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 389 -j DNAT --to [ldap_server]:389&lt;br /&gt;
    # iptables -A FORWARD -p tcp -d [ldap_server] --dport 389 -j ACCEPT&lt;br /&gt;
    # iptables -t nat -A POSTROUTING -d [ldap_server] -j MASQUERADE&lt;br /&gt;
    # echo &amp;quot;1&amp;quot; &amp;gt; /proc/sys/net/ipv4/ip_forward&lt;br /&gt;
&lt;br /&gt;
To save (make permanent) the settings:&lt;br /&gt;
    # iptables-save &amp;gt; /etc/iptables.up.rules&lt;br /&gt;
''Add these 2 lines to /etc/network/if-pre-up.d/iptables'':&lt;br /&gt;
     #!/bin/sh&lt;br /&gt;
     /sbin/iptables-restore &amp;lt; /etc/iptables.up.rules&lt;br /&gt;
''Add this in /etc/rc.local (before the exit 0)'':&lt;br /&gt;
     echo &amp;quot;1&amp;quot; &amp;gt; /proc/sys/net/ipv4/ip_forward&lt;br /&gt;
&lt;br /&gt;
All hosts in eth0's subnet will have to use the Master's IP address instead of the LDAP server address.&lt;br /&gt;
So when you want to authenticate, you use your Master and your Master forwards the connection to the LDAP server.&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1508</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1508"/>
		<updated>2016-05-30T21:38:33Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque 6.0.1 (Server and Mom) and Maui 3.3 in Debian 8 (Jessie) x86_64&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Install dependencies (Server)'''&lt;br /&gt;
&lt;br /&gt;
    # apt-get install libtool libssl-dev libxml2-dev libboost-dev build-essential&lt;br /&gt;
&lt;br /&gt;
'''Get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torques-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    -&amp;gt; If your nodes are diskless, run:&lt;br /&gt;
     # ./configure --disable-spool --disable-mom-checkspool&lt;br /&gt;
       Otherwise run:&lt;br /&gt;
     # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # mkdir /usr/lib/systemd/system/&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
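    -&amp;gt; (Optional) You can review the configuration that torque.setup just created with:&lt;br /&gt;
    # qmgr -c &amp;quot;print server&amp;quot;&lt;br /&gt;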
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
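    -&amp;gt; Example line (&amp;quot;node01&amp;quot; and the np value are placeholders, adjust them to your hardware):&lt;br /&gt;
        node01 np=4&lt;br /&gt;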
    &lt;br /&gt;
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to start at boot, you need to configure that yourself (for example, by adding it to /etc/rc.local)&lt;br /&gt;
    # qmgr -c &amp;quot;set server scheduling = True&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    &lt;br /&gt;
  ''(on the node, create the directory &amp;quot;/usr/lib/systemd/system/&amp;quot; if it does not exist)''&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
    &lt;br /&gt;
  ''Install dependencies (Node)'':    &lt;br /&gt;
    # apt-get install libssl-dev libxml2-dev&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@&amp;lt;mom-node&amp;gt;&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # service pbs_mom restart&lt;br /&gt;
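    &lt;br /&gt;
  ''Quick check (optional)'':&lt;br /&gt;
    -&amp;gt; Back on the server, each node should now report &amp;quot;state = free&amp;quot;:&lt;br /&gt;
    # pbsnodes -a&lt;br /&gt;
    -&amp;gt; You can also submit a trivial job as a regular user and watch it run:&lt;br /&gt;
    $ echo &amp;quot;sleep 30&amp;quot; | qsub&lt;br /&gt;
    $ qstat&lt;br /&gt;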
&lt;br /&gt;
'''MAUI (IN THE SERVER)'''&lt;br /&gt;
&lt;br /&gt;
  ''Get the source and compile it''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;maui-source-code-url&amp;gt; -O maui-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf maui-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd maui-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    (./configure should detect the existing PBS installation and take care of the proper configuration automatically)&lt;br /&gt;
    # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; You have to start maui manually (&amp;quot;/usr/local/maui/sbin/maui&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; If you want to start maui on boot, add &amp;quot;/usr/local/maui/sbin/maui&amp;quot; to /etc/rc.local&lt;br /&gt;
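    &lt;br /&gt;
    -&amp;gt; To confirm that maui can talk to pbs_server, you can run &amp;quot;/usr/local/maui/bin/showq&amp;quot;; it should print the (empty) queue without errors&lt;br /&gt;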
&lt;br /&gt;
The information in this page is based on this document:&lt;br /&gt;
&lt;br /&gt;
http://docs.adaptivecomputing.com/torque/6-0-1/help.htm&lt;br /&gt;
&lt;br /&gt;
For more information about Torque, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/torque/&lt;br /&gt;
&lt;br /&gt;
For more information about Maui, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/maui/&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Port_forwarding_LDAP&amp;diff=1498</id>
		<title>Port forwarding LDAP</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Port_forwarding_LDAP&amp;diff=1498"/>
		<updated>2016-05-18T19:34:58Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: Criou página com 'Assuming:   eth0: network to be redirected (doesn't have a direct connection to the LDAP server)  [ldap_server]: IP address of the LDAP server  389: LDAP authentication port...'&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Assuming:&lt;br /&gt;
&lt;br /&gt;
 eth0: network to be redirected (doesn't have a direct connection to the LDAP server)&lt;br /&gt;
 [ldap_server]: IP address of the LDAP server&lt;br /&gt;
 389: LDAP authentication port&lt;br /&gt;
&lt;br /&gt;
On the host that has access to both networks (eth0 and the LDAP server's network), let's call it the &amp;quot;Master host&amp;quot;, apply:&lt;br /&gt;
&lt;br /&gt;
    iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 389 -j DNAT --to [ldap_server]:389&lt;br /&gt;
    iptables -A FORWARD -p tcp -d [ldap_server] --dport 389 -j ACCEPT&lt;br /&gt;
    iptables -t nat -A POSTROUTING -d [ldap_server] -j MASQUERADE&lt;br /&gt;
    echo &amp;quot;1&amp;quot; &amp;gt; /proc/sys/net/ipv4/ip_forward&lt;br /&gt;
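&lt;br /&gt;
Note that writing to /proc/sys/net/ipv4/ip_forward does not survive a reboot; to make forwarding permanent, set net.ipv4.ip_forward=1 in /etc/sysctl.conf. To test the redirect from a client on eth0 (assuming the ldap-utils package is installed there), query the root DSE anonymously through the Master:&lt;br /&gt;
&lt;br /&gt;
    ldapsearch -x -H ldap://[master_ip]/ -b &amp;quot;&amp;quot; -s base&lt;br /&gt;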
&lt;br /&gt;
All hosts on eth0's subnet must then use the Master's IP address instead of the LDAP server's address.&lt;br /&gt;
When a client authenticates, it connects to the Master, which forwards the connection to the LDAP server.&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=LDAP&amp;diff=1497</id>
		<title>LDAP</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=LDAP&amp;diff=1497"/>
		<updated>2016-05-18T19:28:30Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;li&amp;gt;[[Problema ao mudar de senha]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;[[Instruções para adicionar atributos no LDAP]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;[[Como clonar o banco de dados do servidor LDAP (slapd)]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;[[Como bloquear bind anônimo no slapd (para impedir ldapsearch sem autenticação)]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;[[Instruções para instalar o LDAP Client (login pelo LDAP)]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;[[Como restringir login no Debian]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;[[Port forwarding LDAP]]&amp;lt;/li&amp;gt;&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1492</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1492"/>
		<updated>2016-05-16T20:36:04Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque 6.0.1 (Server and Mom) in Debian 8 (Jessie)&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Install dependencies (Server)'''&lt;br /&gt;
&lt;br /&gt;
    # apt-get install libtool libssl-dev libxml2-dev libboost-dev build-essential&lt;br /&gt;
&lt;br /&gt;
'''Get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torque-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    -&amp;gt; If your nodes are diskless, run:&lt;br /&gt;
     # ./configure --disable-spool --disable-mom-checkspool&lt;br /&gt;
       Otherwise run:&lt;br /&gt;
     # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
    &lt;br /&gt;
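For illustration, a minimal /var/spool/torque/server_priv/nodes file might look like the following; the hostnames, core counts, and GPU attribute are placeholders, not values from this cluster:&lt;br /&gt;

```text
# one line per compute node: hostname, then optional attributes
node01 np=4
node02 np=4
node03 np=8 gpus=1
```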
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's own built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to run at boot, you need to configure it manually&lt;br /&gt;
    # qmgr -c &amp;quot;set server scheduling = True&amp;quot;&lt;br /&gt;
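Once torque.setup has run and the node list is in place, the server configuration can be inspected and adjusted through qmgr. A hedged sketch (the queue name batch is what torque.setup creates by default; the resource values are examples only):&lt;br /&gt;

```text
# print the full server and queue configuration
qmgr -c "print server"

# example defaults for the batch queue created by torque.setup
qmgr -c "set queue batch resources_default.walltime = 01:00:00"
qmgr -c "set queue batch resources_default.nodes = 1"

# verify that the nodes listed in server_priv/nodes are reporting in
pbsnodes -a
```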
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    &lt;br /&gt;
  ''(on the node, create the directory &amp;quot;/usr/lib/systemd/system/&amp;quot; if it does not exist)''&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
    &lt;br /&gt;
  ''Install dependencies (Node)'':    &lt;br /&gt;
    # apt-get install libssl-dev libxml2-dev&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@&amp;lt;mom-node&amp;gt;&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # service pbs_mom restart&lt;br /&gt;
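Beyond $pbsserver, mom_priv/config accepts a few other commonly used directives. A hedged example follows; the log mask and the shared /home path are illustrative, not site values:&lt;br /&gt;

```text
# /var/spool/torque/mom_priv/config
# hostname of the pbs_server
$pbsserver headnode
# bitmask of mom events to log
$logevent 255
# copy job output via the shared /home instead of scp
$usecp *:/home /home
```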
&lt;br /&gt;
The information in this page is based on this document:&lt;br /&gt;
&lt;br /&gt;
http://docs.adaptivecomputing.com/torque/6-0-1/help.htm&lt;br /&gt;
&lt;br /&gt;
For more information about Torque, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/torque/&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1491</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1491"/>
		<updated>2016-05-16T20:20:58Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque 6.0.1 (Server and Mom) in Debian 8 (Jessie)&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Install dependencies (Server)'''&lt;br /&gt;
&lt;br /&gt;
    # apt-get install libtool libssl-dev libxml2-dev libboost-dev build-essential&lt;br /&gt;
&lt;br /&gt;
'''Get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torques-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    -&amp;gt; If your nodes are diskless, use:&lt;br /&gt;
     # ./configure --disable-spool --disable-mom-checkspool&lt;br /&gt;
       Otherwise use:&lt;br /&gt;
     # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
    &lt;br /&gt;
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's own built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to run at boot, you need to configure it manually&lt;br /&gt;
    # qmgr -c &amp;quot;set server scheduling = True&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    &lt;br /&gt;
  ''(on the node, create the directory &amp;quot;/usr/lib/systemd/system/&amp;quot; if it does not exist)''&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
    &lt;br /&gt;
  ''Install dependencies (Node)'':    &lt;br /&gt;
    # apt-get install libssl-dev libxml2-dev&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@&amp;lt;mom-node&amp;gt;&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # service pbs_mom restart&lt;br /&gt;
&lt;br /&gt;
The information in this page is based on this document:&lt;br /&gt;
&lt;br /&gt;
http://docs.adaptivecomputing.com/torque/6-0-1/help.htm&lt;br /&gt;
&lt;br /&gt;
For more information about Torque, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/torque/&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1490</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1490"/>
		<updated>2016-05-13T20:25:19Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque 6.0.1 (Server and Mom) in Debian 8 (Jessie)&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Install dependencies (Server)'''&lt;br /&gt;
&lt;br /&gt;
    # apt-get install libtool libssl-dev libxml2-dev libboost-dev build-essential&lt;br /&gt;
&lt;br /&gt;
'''Get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torques-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
    &lt;br /&gt;
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's own built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to run at boot, you need to configure it manually&lt;br /&gt;
    # qmgr -c &amp;quot;set server scheduling = True&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    &lt;br /&gt;
  ''(on the node, create the directory &amp;quot;/usr/lib/systemd/system/&amp;quot; if it does not exist)''&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
    &lt;br /&gt;
  ''Install dependencies (Node)'':    &lt;br /&gt;
    # apt-get install libssl-dev libxml2-dev&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@&amp;lt;mom-node&amp;gt;&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # service pbs_mom restart&lt;br /&gt;
&lt;br /&gt;
The information in this page is based on this document:&lt;br /&gt;
&lt;br /&gt;
http://docs.adaptivecomputing.com/torque/6-0-1/help.htm&lt;br /&gt;
&lt;br /&gt;
For more information about Torque, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/torque/&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1489</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1489"/>
		<updated>2016-05-13T20:02:00Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque 6.0.1 (Server and Mom) in Debian 8 (Jessie)&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Install dependencies (Server)'''&lt;br /&gt;
&lt;br /&gt;
    # apt-get install libtool libssl-dev libxml2-dev libboost-dev build-essential&lt;br /&gt;
&lt;br /&gt;
'''Get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torques-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
    &lt;br /&gt;
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's own built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to run at boot, you need to configure it manually&lt;br /&gt;
    # qmgr -c &amp;quot;set server scheduling = True&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    &lt;br /&gt;
  ''(on the node, create the directory &amp;quot;/usr/lib/systemd/system/&amp;quot; if it does not exist)''&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
    &lt;br /&gt;
  ''Install dependencies (Node)'':    &lt;br /&gt;
    # apt-get install libssl-dev libxml2-dev&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@node&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # service pbs_mom restart&lt;br /&gt;
&lt;br /&gt;
The information in this page is based on this document:&lt;br /&gt;
&lt;br /&gt;
http://docs.adaptivecomputing.com/torque/6-0-1/help.htm&lt;br /&gt;
&lt;br /&gt;
For more information about Torque, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/torque/&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1488</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1488"/>
		<updated>2016-05-13T20:01:37Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque 6.0.1 (Server and Mom) in Debian 8 (Jessie)&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Install dependencies (Server)'''&lt;br /&gt;
&lt;br /&gt;
    # apt-get install libtool libssl-dev libxml2-dev libboost-dev build-essential&lt;br /&gt;
&lt;br /&gt;
'''Get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torques-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
    &lt;br /&gt;
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's own built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to run at boot, you need to configure it manually&lt;br /&gt;
    # qmgr -c &amp;quot;set server scheduling = True&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    &lt;br /&gt;
  ''(on the node, create the directory &amp;quot;/usr/lib/systemd/system/&amp;quot; if it does not exist)''&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
    &lt;br /&gt;
  ''Install dependencies (Node)'':    &lt;br /&gt;
    # apt-get install libssl-dev libxml2-dev&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@node&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # service pbs_mom restart&lt;br /&gt;
&lt;br /&gt;
The information in this page is based on this document:&lt;br /&gt;
&lt;br /&gt;
http://docs.adaptivecomputing.com/torque/6-0-1/help.htm&lt;br /&gt;
&lt;br /&gt;
For more information about Torque, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/torque/&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1487</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1487"/>
		<updated>2016-05-13T20:01:26Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque 6.0.1 (Server and Mom) in Debian 8 (Jessie)&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Install dependencies (Server)'''&lt;br /&gt;
&lt;br /&gt;
    # apt-get install libtool libssl-dev libxml2-dev libboost-dev build-essential&lt;br /&gt;
&lt;br /&gt;
'''Get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torques-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
    &lt;br /&gt;
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's own built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to run at boot, you need to configure it manually&lt;br /&gt;
    # qmgr -c &amp;quot;set server scheduling = True&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
  ''(on the node, create the directory &amp;quot;/usr/lib/systemd/system/&amp;quot; if it does not exist)''&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
    &lt;br /&gt;
  ''Install dependencies (Node)'':    &lt;br /&gt;
    # apt-get install libssl-dev libxml2-dev&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@node&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # service pbs_mom restart&lt;br /&gt;
&lt;br /&gt;
The information in this page is based on this document:&lt;br /&gt;
&lt;br /&gt;
http://docs.adaptivecomputing.com/torque/6-0-1/help.htm&lt;br /&gt;
&lt;br /&gt;
For more information about Torque, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/torque/&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1486</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1486"/>
		<updated>2016-05-13T19:32:43Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque 6.0.1 (Server and Mom) in Debian 8 (Jessie)&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Install dependencies (Server)'''&lt;br /&gt;
&lt;br /&gt;
    # apt-get install libtool libssl-dev libxml2-dev libboost-dev build-essential&lt;br /&gt;
&lt;br /&gt;
'''Get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torques-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
    &lt;br /&gt;
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's own built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to run at boot, you need to configure it manually&lt;br /&gt;
    # qmgr -c &amp;quot;set server scheduling = True&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
    &lt;br /&gt;
  ''Install dependencies (Node)'':    &lt;br /&gt;
    # apt-get install libssl-dev libxml2-dev&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@node&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # service pbs_mom restart&lt;br /&gt;
&lt;br /&gt;
The information in this page is based on this document:&lt;br /&gt;
&lt;br /&gt;
http://docs.adaptivecomputing.com/torque/6-0-1/help.htm&lt;br /&gt;
&lt;br /&gt;
For more information about Torque, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/torque/&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1485</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1485"/>
		<updated>2016-05-13T19:32:27Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque 6.0.1 (Server and Mom) in Debian 8 (Jessie)&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Install dependencies (Server)'''&lt;br /&gt;
&lt;br /&gt;
    # apt-get install libtool libssl-dev libxml2-dev libboost-dev build-essential&lt;br /&gt;
&lt;br /&gt;
'''Get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torques-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
    &lt;br /&gt;
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's own built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to run at boot, you need to configure it manually&lt;br /&gt;
    # qmgr -c &amp;quot;set server scheduling = True&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
    &lt;br /&gt;
  ''Install dependencies (Node)'':    &lt;br /&gt;
    # apt-get install libssl-dev libxml2-dev&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@node&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # service pbs_mom restart&lt;br /&gt;
&lt;br /&gt;
The information in this page is based on this document:&lt;br /&gt;
&lt;br /&gt;
http://docs.adaptivecomputing.com/torque/6-0-1/help.htm&lt;br /&gt;
&lt;br /&gt;
For more information about Torque, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/torque/&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1484</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1484"/>
		<updated>2016-05-13T19:32:14Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque 6.0.1 (Server and Mom) in Debian 8 (Jessie)&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Install dependencies (Server)'''&lt;br /&gt;
&lt;br /&gt;
    # apt-get install libtool libssl-dev libxml2-dev libboost-dev build-essential&lt;br /&gt;
&lt;br /&gt;
'''Get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torques-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
    &lt;br /&gt;
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's own built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to run at boot, you need to configure it manually&lt;br /&gt;
    # qmgr -c &amp;quot;set server scheduling = True&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
    &lt;br /&gt;
  ''Install dependencies (Node)'':    &lt;br /&gt;
    # apt-get install libssl-dev libxml2-dev&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@node&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # service pbs_mom restart&lt;br /&gt;
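    -&amp;gt; To confirm the node registered with the server, run pbsnodes on the server (the listing will vary with your setup):&lt;br /&gt;
    # pbsnodes -a&lt;br /&gt;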
&lt;br /&gt;
The information on this page is based on the following document:&lt;br /&gt;
&lt;br /&gt;
http://docs.adaptivecomputing.com/torque/6-0-1/help.htm&lt;br /&gt;
&lt;br /&gt;
For more information about Torque, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/torque/&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1483</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1483"/>
		<updated>2016-05-13T19:32:01Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque 6.0.1 (Server and Mom) on Debian 8 (Jessie)&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Install dependencies (Server)'''&lt;br /&gt;
&lt;br /&gt;
    # apt-get install libtool libssl-dev libxml2-dev libboost-dev build-essential&lt;br /&gt;
&lt;br /&gt;
'''Get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torques-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
    &lt;br /&gt;
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's own built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to run at boot, you need to configure it manually&lt;br /&gt;
    # qmgr -c &amp;quot;set server scheduling = True&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
    &lt;br /&gt;
  ''Install dependencies (Node)''&lt;br /&gt;
    &lt;br /&gt;
    # apt-get install libssl-dev libxml2-dev&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@node&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # service pbs_mom restart&lt;br /&gt;
&lt;br /&gt;
The information on this page is based on the following document:&lt;br /&gt;
&lt;br /&gt;
http://docs.adaptivecomputing.com/torque/6-0-1/help.htm&lt;br /&gt;
&lt;br /&gt;
For more information about Torque, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/torque/&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1482</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1482"/>
		<updated>2016-05-13T19:31:43Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque 6.0.1 (Server and Mom) on Debian 8 (Jessie)&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Install dependencies (Server)'''&lt;br /&gt;
&lt;br /&gt;
    # apt-get install libtool libssl-dev libxml2-dev libboost-dev build-essential&lt;br /&gt;
&lt;br /&gt;
'''Get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torques-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
    &lt;br /&gt;
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's own built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to run at boot, you need to configure it manually&lt;br /&gt;
    # qmgr -c &amp;quot;set server scheduling = True&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
&lt;br /&gt;
  ''Install dependencies (Node)''&lt;br /&gt;
&lt;br /&gt;
    # apt-get install libssl-dev libxml2-dev&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@node&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # service pbs_mom restart&lt;br /&gt;
&lt;br /&gt;
The information on this page is based on the following document:&lt;br /&gt;
&lt;br /&gt;
http://docs.adaptivecomputing.com/torque/6-0-1/help.htm&lt;br /&gt;
&lt;br /&gt;
For more information about Torque, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/torque/&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1481</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1481"/>
		<updated>2016-05-13T19:17:43Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque 6.0.1 (Server and Mom) on Debian 8 (Jessie)&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Install dependencies (both Server and nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # apt-get install libtool libssl-dev libxml2-dev libboost-dev build-essential&lt;br /&gt;
&lt;br /&gt;
'''Get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torques-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
    &lt;br /&gt;
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's own built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to run at boot, you need to configure it manually&lt;br /&gt;
    # qmgr -c &amp;quot;set server scheduling = True&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@node&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # service pbs_mom restart&lt;br /&gt;
&lt;br /&gt;
The information on this page is based on the following document:&lt;br /&gt;
&lt;br /&gt;
http://docs.adaptivecomputing.com/torque/6-0-1/help.htm&lt;br /&gt;
&lt;br /&gt;
For more information about Torque, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/torque/&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1480</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1480"/>
		<updated>2016-05-13T19:15:57Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque 6.0.1 (Server and Mom) on Debian 8 (Jessie)&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''First, get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torques-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
    &lt;br /&gt;
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's own built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to run at boot, you need to configure it manually&lt;br /&gt;
    # qmgr -c &amp;quot;set server scheduling = True&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@node&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # service pbs_mom restart&lt;br /&gt;
&lt;br /&gt;
The information on this page is based on the following document:&lt;br /&gt;
&lt;br /&gt;
http://docs.adaptivecomputing.com/torque/6-0-1/help.htm&lt;br /&gt;
&lt;br /&gt;
For more information about Torque, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/torque/&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1479</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1479"/>
		<updated>2016-05-13T19:15:47Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque 6.0.1 (Server and Mom) on Debian 8&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''First, get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torques-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
    &lt;br /&gt;
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's own built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to run at boot, you need to configure it manually&lt;br /&gt;
    # qmgr -c &amp;quot;set server scheduling = True&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@node&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # service pbs_mom restart&lt;br /&gt;
&lt;br /&gt;
The information on this page is based on the following document:&lt;br /&gt;
&lt;br /&gt;
http://docs.adaptivecomputing.com/torque/6-0-1/help.htm&lt;br /&gt;
&lt;br /&gt;
For more information about Torque, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/torque/&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1478</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1478"/>
		<updated>2016-05-13T17:02:11Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque 6.0.1 (Server and Mom) on systemd-based Linux systems&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''First, get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torques-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
    &lt;br /&gt;
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's own built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to run at boot, you need to configure it manually&lt;br /&gt;
    # qmgr -c &amp;quot;set server scheduling = True&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@node&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # service pbs_mom restart&lt;br /&gt;
&lt;br /&gt;
The information on this page is based on the following document:&lt;br /&gt;
&lt;br /&gt;
http://docs.adaptivecomputing.com/torque/6-0-1/help.htm&lt;br /&gt;
&lt;br /&gt;
For more information about Torque, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/torque/&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1477</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1477"/>
		<updated>2016-05-13T17:01:53Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque 6.0.1 (Server and Mom) on systemd-based Linux systems&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''First, get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torques-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
    &lt;br /&gt;
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's own built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    # qmgr -c &amp;quot;set server scheduling = True&amp;quot;&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to run at boot, you need to configure it manually&lt;br /&gt;
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@node&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # service pbs_mom restart&lt;br /&gt;
&lt;br /&gt;
The information on this page is based on the following document:&lt;br /&gt;
&lt;br /&gt;
http://docs.adaptivecomputing.com/torque/6-0-1/help.htm&lt;br /&gt;
&lt;br /&gt;
For more information about Torque, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/torque/&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1476</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1476"/>
		<updated>2016-05-13T16:58:10Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque 6.0.1 (Server and Mom) on systemd-based Linux systems&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''First, get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torques-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
    &lt;br /&gt;
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's own built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to run at boot, you need to configure it manually&lt;br /&gt;
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@node&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # service pbs_mom restart&lt;br /&gt;
&lt;br /&gt;
The information on this page is based on the following document:&lt;br /&gt;
&lt;br /&gt;
http://docs.adaptivecomputing.com/torque/6-0-1/help.htm&lt;br /&gt;
&lt;br /&gt;
For more information about Torque, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/torque/&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1475</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1475"/>
		<updated>2016-05-13T16:57:14Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque 6.0.x (Server and Mom) on systemd-based Linux systems&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''First, get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torques-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
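    -&gt; A minimal example of the nodes file (hostnames here are placeholders; np is the number of job slots per node):&lt;br /&gt;
        node01 np=4&lt;br /&gt;
        node02 np=4&lt;br /&gt;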
    &lt;br /&gt;
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's own built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to run at boot, you need to configure it manually&lt;br /&gt;
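    -&gt; For example, a minimal unit file sketch (untested; paths assume the default /usr/local install prefix), saved as /usr/lib/systemd/system/pbs_sched.service:&lt;br /&gt;
        [Unit]&lt;br /&gt;
        Description=TORQUE pbs_sched scheduler&lt;br /&gt;
        After=pbs_server.service&lt;br /&gt;
        &lt;br /&gt;
        [Service]&lt;br /&gt;
        Type=forking&lt;br /&gt;
        ExecStart=/usr/local/sbin/pbs_sched&lt;br /&gt;
        &lt;br /&gt;
        [Install]&lt;br /&gt;
        WantedBy=multi-user.target&lt;br /&gt;
    # systemctl enable pbs_sched.service&lt;br /&gt;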
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@node&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # systemctl restart pbs_mom.service&lt;br /&gt;
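    &lt;br /&gt;
    -&gt; To check that each mom has registered with the server, back on the head node run:&lt;br /&gt;
    # pbsnodes -a&lt;br /&gt;
    -&gt; Each node should be listed with state = free (not down)&lt;br /&gt;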
&lt;br /&gt;
The information on this page is based on this document:&lt;br /&gt;
&lt;br /&gt;
http://docs.adaptivecomputing.com/torque/6-0-1/help.htm&lt;br /&gt;
&lt;br /&gt;
For more information about Torque, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/torque/&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1474</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1474"/>
		<updated>2016-05-13T16:56:42Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque Server and Mom in systemctl-based Linux systems&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''First, get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torque-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
    &lt;br /&gt;
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's own built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to run at boot, you need to configure it manually&lt;br /&gt;
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@node&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # systemctl restart pbs_mom.service&lt;br /&gt;
&lt;br /&gt;
The information on this page is based on this document:&lt;br /&gt;
&lt;br /&gt;
http://docs.adaptivecomputing.com/torque/6-0-1/help.htm&lt;br /&gt;
&lt;br /&gt;
For more information about Torque, please visit:&lt;br /&gt;
&lt;br /&gt;
http://www.adaptivecomputing.com/products/open-source/torque/&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1473</id>
		<title>Install torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Install_torque&amp;diff=1473"/>
		<updated>2016-05-13T16:52:03Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: Criou página com '&amp;lt;h2&amp;gt;How to install Torque Server and Mom in systemctl-based Linux systems&amp;lt;/h2&amp;gt;  '''First, get the source and compile it'''        # wget &amp;lt;torques-source-code-url&amp;gt; -O torque-&amp;lt;v...'&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;How to install Torque Server and Mom in systemctl-based Linux systems&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''First, get the source and compile it'''&lt;br /&gt;
  &lt;br /&gt;
    # wget &amp;lt;torque-source-code-url&amp;gt; -O torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # tar -xzvf torque-&amp;lt;version&amp;gt;.tar.gz&lt;br /&gt;
    # cd torque-&amp;lt;version&amp;gt;/ &lt;br /&gt;
    # ./configure&lt;br /&gt;
    # make&lt;br /&gt;
    # make install&lt;br /&gt;
&lt;br /&gt;
'''TORQUE SERVER'''&lt;br /&gt;
&lt;br /&gt;
    # echo &amp;lt;torque_server_hostname&amp;gt; &amp;gt; /var/spool/torque/server_name&lt;br /&gt;
    &lt;br /&gt;
  ''trqauthd'':&lt;br /&gt;
    # cp contrib/systemd/trqauthd.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable trqauthd.service&lt;br /&gt;
    # echo /usr/local/lib &amp;gt; /etc/ld.so.conf.d/torque.conf&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl start trqauthd.service&lt;br /&gt;
        &lt;br /&gt;
    # export PATH=/usr/local/bin/:/usr/local/sbin/:$PATH&lt;br /&gt;
    &lt;br /&gt;
  ''Initial setup'':&lt;br /&gt;
    # ./torque.setup root&lt;br /&gt;
    &lt;br /&gt;
  ''Node list'':&lt;br /&gt;
    -&amp;gt; Add nodes to /var/spool/torque/server_priv/nodes&lt;br /&gt;
    &lt;br /&gt;
  ''Pbs_server startup at boot'':&lt;br /&gt;
    # qterm&lt;br /&gt;
    # cp contrib/systemd/pbs_server.service /usr/lib/systemd/system/&lt;br /&gt;
    # systemctl enable pbs_server.service&lt;br /&gt;
    # systemctl start pbs_server.service&lt;br /&gt;
    &lt;br /&gt;
  ''If using Torque's own built-in scheduler'':&lt;br /&gt;
    # pbs_sched&lt;br /&gt;
    -&amp;gt; If you want pbs_sched to run at boot, you need to configure it manually&lt;br /&gt;
&lt;br /&gt;
'''TORQUE MOM (for the nodes)'''&lt;br /&gt;
&lt;br /&gt;
    # make packages&lt;br /&gt;
    &lt;br /&gt;
    # scp torque-package-mom-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp torque-package-clients-linux-x86_64.sh &amp;lt;mom-node&amp;gt;:&lt;br /&gt;
    # scp contrib/systemd/pbs_mom.service &amp;lt;mom-node&amp;gt;:/usr/lib/systemd/system/&lt;br /&gt;
    &lt;br /&gt;
  ''On each node'':&lt;br /&gt;
    # ssh root@node&lt;br /&gt;
    # ./torque-package-mom-linux-x86_64.sh --install&lt;br /&gt;
    # ./torque-package-clients-linux-x86_64.sh --install&lt;br /&gt;
    # ldconfig&lt;br /&gt;
    # systemctl enable pbs_mom.service&lt;br /&gt;
    # systemctl start pbs_mom.service&lt;br /&gt;
    &lt;br /&gt;
    -&amp;gt; Set server in /var/spool/torque/mom_priv/config:&lt;br /&gt;
        $pbsserver headnode&lt;br /&gt;
    # systemctl restart pbs_mom.service&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
	<entry>
		<id>https://wiki.if.ufrgs.br/index.php?title=Inform%C3%A1tica_TI&amp;diff=1472</id>
		<title>Informática TI</title>
		<link rel="alternate" type="text/html" href="https://wiki.if.ufrgs.br/index.php?title=Inform%C3%A1tica_TI&amp;diff=1472"/>
		<updated>2016-05-13T16:32:41Z</updated>

		<summary type="html">&lt;p&gt;Yescalianti: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__ &lt;br /&gt;
&amp;lt;TABLE width=&amp;quot;100%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;TR&amp;gt;&lt;br /&gt;
    &amp;lt;TD  width=&amp;quot;33%&amp;quot; bgcolor=&amp;quot;#ebebfb&amp;quot;&amp;gt;&amp;lt;H2&amp;gt;NETWORK&amp;lt;/H2&amp;gt; &lt;br /&gt;
      &amp;lt;UL&amp;gt;&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[Topologia|Topology]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[Manual_da_Bridge|Bridge]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[Switches]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[Wireless|Wireless Network]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[Observatório Astronômico|Astronomical Observatory]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[ssh_tunnel|Access to machines on the internal network]]&lt;br /&gt;
        &amp;lt;/LI&amp;gt;&lt;br /&gt;
        &amp;lt;/UL&amp;gt;&lt;br /&gt;
     &amp;lt;/TD&amp;gt;    &lt;br /&gt;
    &amp;lt;TD  width=&amp;quot;33%&amp;quot; bgcolor=&amp;quot;#dbffdb&amp;quot;&amp;gt;&amp;lt;H2&amp;gt;LINUX&amp;lt;/H2&amp;gt;      &lt;br /&gt;
      &amp;lt;UL&amp;gt;&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[How_To_Linux_geral|How To]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[wiki|Wiki installation]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[install_backup|Backup installation]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[install_nagios|Nagios installation]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[install_ldap|LDAP installation]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[install_log_server|Log server installation]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[install_samba|SAMBA installation]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[install_nfs|NFS installation]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[install_java|Java installation on Debian 6]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[install_quota|Quota setup on Debian 7]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[install_torque|Torque cluster installation]]&lt;br /&gt;
&lt;br /&gt;
        &amp;lt;/LI&amp;gt;&lt;br /&gt;
        &amp;lt;/UL&amp;gt;&lt;br /&gt;
     &amp;lt;/TD&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;TD  width=&amp;quot;33%&amp;quot; bgcolor=&amp;quot;#fde6e1&amp;quot;&amp;gt;&amp;lt;H2&amp;gt;SERVERS&amp;lt;/H2&amp;gt; &lt;br /&gt;
      &amp;lt;UL&amp;gt;&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[Servidores_e_Serviços|Services and Servers]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[Amanda|Backup server]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[Clusters]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[Nagios]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[samba_server|SAMBA]]&lt;br /&gt;
        &amp;lt;/LI&amp;gt;&lt;br /&gt;
        &amp;lt;/UL&amp;gt;&lt;br /&gt;
     &amp;lt;/TD&amp;gt;    &lt;br /&gt;
&amp;lt;/TR&amp;gt;&lt;br /&gt;
   &lt;br /&gt;
 &amp;lt;TR&amp;gt;&lt;br /&gt;
    &amp;lt;TD  width=&amp;quot;33%&amp;quot; bgcolor=&amp;quot;#dbffdb&amp;quot;&amp;gt;&amp;lt;H2&amp;gt;WINDOWS&amp;lt;/H2&amp;gt; &lt;br /&gt;
      &amp;lt;UL&amp;gt;&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[virus|Virus removal]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[Pgina|LDAP]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[windows_samba|SAMBA]]       &lt;br /&gt;
&lt;br /&gt;
        &amp;lt;/LI&amp;gt;&lt;br /&gt;
        &amp;lt;/UL&amp;gt;&lt;br /&gt;
     &amp;lt;/TD&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;TD  width=&amp;quot;33%&amp;quot; bgcolor=&amp;quot;#fde6e1&amp;quot;&amp;gt;&amp;lt;H2&amp;gt;Library&amp;lt;/H2&amp;gt; &lt;br /&gt;
      &amp;lt;UL&amp;gt;&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[Ligar computadores automaticamente|Power on computers automatically]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[Mini impressoras biblioteca|Library mini printers]]&lt;br /&gt;
        &amp;lt;/LI&amp;gt;&lt;br /&gt;
        &amp;lt;/UL&amp;gt;&lt;br /&gt;
    &amp;lt;/TD&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;TD  width=&amp;quot;33%&amp;quot; bgcolor=&amp;quot;#ebebfb&amp;quot;&amp;gt;&amp;lt;H2&amp;gt;OTHER TOPICS&amp;lt;/H2&amp;gt; &lt;br /&gt;
      &amp;lt;UL&amp;gt;&lt;br /&gt;
        &amp;lt;LI&amp;gt;[[Testar videoconferência no SEAD|Test videoconferencing at SEAD]]&lt;br /&gt;
        &amp;lt;LI&amp;gt;[[Listas de discussão|Mailing lists]]&lt;br /&gt;
        &amp;lt;LI&amp;gt;[[LDAP]]&lt;br /&gt;
        &amp;lt;LI&amp;gt;[[Restringir acesso a programas (Configuração para provas)|Restrict access to programs (exam setup)]]&lt;br /&gt;
        &amp;lt;/LI&amp;gt;&lt;br /&gt;
        &amp;lt;/UL&amp;gt;&lt;br /&gt;
     &amp;lt;/TD&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/TR&amp;gt;&lt;br /&gt;
&amp;lt;TR&amp;gt;&lt;br /&gt;
    &amp;lt;TD  width=&amp;quot;33%&amp;quot; bgcolor=&amp;quot;#ebebfb&amp;quot;&amp;gt;&amp;lt;H2&amp;gt;PROJECTS&amp;lt;/H2&amp;gt; &lt;br /&gt;
      &amp;lt;UL&amp;gt;&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[Novos_Projetos_e_Tarefas|2009/2]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[Novos_Projetos_e_Tarefas_-_2010-1|2010/1]]&lt;br /&gt;
        &amp;lt;LI&amp;gt; [[REUNI_2010|REUNI 2010]]&lt;br /&gt;
        &amp;lt;/LI&amp;gt;&lt;br /&gt;
        &amp;lt;/UL&amp;gt;&lt;br /&gt;
    &amp;lt;/TD&amp;gt;&lt;br /&gt;
&amp;lt;/TR&amp;gt;&lt;br /&gt;
&amp;lt;/TABLE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[ANTIGO|Old page]]&lt;/div&gt;</summary>
		<author><name>Yescalianti</name></author>
	</entry>
</feed>