Deep Learning Example

The INCD-Lisbon facility provides a few GPUs; check the Compute Node Specs page.

Log in to the submit node

Log in to the cluster submission node; check the How to Access page for more information:

$ ssh -l <username> cirrus.ncg.ingrid.pt
[username@cirrus01 ~]$ _

Alternatives to run the Deep Learning example

There are three alternatives for running the Deep Learning example, or any other Python-based script.

The first is to prepare a user Python virtual environment in the home directory and launch a batch job. The second is to use the Python virtual environment already prepared on the system and run the same example with it. The third is to use a udocker container, also available on the system.

The next three sections show how to run the example with each method.

1) Run a Deep Learning job using a user python virtual environment

Prepare a python virtual environment

The default Python version on CentOS 7.x is 2.7.5, which is not suitable for our example; it relies on version 3.6 or later. So we will create a Python virtual environment and install the needed components:

[username@cirrus01 ~]$ scl enable rh-python36 bash
[username@cirrus01 ~]$ python -m venv ~/pvenv
[username@cirrus01 ~]$ . ~/pvenv/bin/activate
[username@cirrus01 ~]$ pip install --upgrade pip
[username@cirrus01 ~]$ pip install --upgrade setuptools
[username@cirrus01 ~]$ pip install tensorflow-gpu
[username@cirrus01 ~]$ pip install keras

This operation is performed only once; the Python virtual environment will be reused across all your jobs.

Submit a Job to install TensorFlow and Keras on the python virtual environment

Since we do not have direct access to the GPU from the submit node, we have to submit a job (only once) to install TensorFlow and Keras in our Python virtual environment.

Create a submit script as shown below and submit it:

[username@cirrus01 ~]$ vi pip_install.sh
#!/bin/bash
#SBATCH -p gpu
#SBATCH --gres=gpu

source scl_source enable rh-python36   # non-interactive equivalent of 'scl enable'
. ~/pvenv/bin/activate
pip install tensorflow-gpu
pip install keras

[username@cirrus01 ~]$ sbatch pip_install.sh

Check the job output files after it finishes to confirm correct completion; if something is wrong, try to solve the problem or request support from helpdesk@incd.pt. If you prefer, you can also include the full Python virtual environment preparation, shown in the previous section, in the job itself.
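As a quick check, the job output can be scanned for pip failures or success messages (a sketch assuming SLURM's default slurm-<jobid>.out output naming):

```shell
# Scan the batch job's output for pip failures or success messages;
# slurm-<jobid>.out is SLURM's default output file name.
grep -iE 'error|successfully installed' slurm-*.out
```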

Check the python virtual environment

You can check that the Python virtual environment is working as expected, for example:

[username@cirrus01 ~]$ python --version
Python 2.7.5
[username@cirrus01 ~]$ scl enable rh-python36 bash
[username@cirrus01 ~]$ python --version
Python 3.6.9
[username@cirrus01 ~]$ . ~/pvenv/bin/activate
[username@cirrus01 ~]$ pip list
Package              Version   
-------------------- ----------
...
Keras                2.3.1
Keras-Applications   1.0.8
Keras-Preprocessing  1.1.0
...
setuptools           44.0.0
...
tensorboard          2.0.2     
tensorflow-estimator 2.0.1     
tensorflow-gpu       2.0.0
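A small helper script (an illustrative sketch, not part of the system setup) can confirm which interpreter the environment uses and whether the packages installed above are importable:

```python
# check_env.py - quick sanity check of the activated virtual environment.
# Run it after '. ~/pvenv/bin/activate'; the package names are the ones
# installed earlier (tensorflow, keras).
import importlib.util
import sys

def installed(pkg):
    """Return True if 'pkg' can be imported from this environment."""
    return importlib.util.find_spec(pkg) is not None

print("python:", sys.version.split()[0])
for pkg in ("tensorflow", "keras"):
    print(pkg + ":", "OK" if installed(pkg) else "MISSING")
```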

Prepare your code

Choose a working directory for your code, for the purpose of this example we will run a deep learning python script named run.py, create also a submit script:

[username@cirrus01 ~]$ mkdir dl
[username@cirrus01 ~]$ cd dl
[username@cirrus01 dl]$ wget --no-check-certificate https://wiki.incd.pt/attachments/79 -O run.py

[username@cirrus01 dl]$ vi dl.sh
#!/bin/bash
#SBATCH -p gpu
#SBATCH --gres=gpu
#SBATCH --mem=64G

source scl_source enable rh-python36   # non-interactive equivalent of 'scl enable'
. ~/pvenv/bin/activate
module load cuda-10.2
python run.py

[username@cirrus01 dl]$ ls -l
-rwxr-----+ 1 username usergroup  514 Jan  5 13:42 dl.sh
-rw-r-----+ 1 username usergroup 1378 Jan  5 15:42 run.py
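If the wiki attachment is unavailable, a minimal stand-in for run.py (pure Python, no GPU or TensorFlow required, purely illustrative) is enough to exercise the submission workflow before running the real script. It fits a single linear neuron y = w*x + b to the target y = 2x + 1 by gradient descent:

```python
# run_smoke.py - tiny stand-in for run.py, useful to test job submission.
# Trains one linear neuron on y = 2x + 1 with plain gradient descent on MSE.

def train(steps=500, lr=0.05):
    data = [(x, 2.0 * x + 1.0) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in data:
            err = (w * x + b) - y
            gw += 2 * err * x / len(data)   # d(MSE)/dw
            gb += 2 * err / len(data)       # d(MSE)/db
        w -= lr * gw
        b -= lr * gb
    return w, b

if __name__ == "__main__":
    w, b = train()
    print("learned w=%.3f b=%.3f (target w=2 b=1)" % (w, b))
```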

Submit the Job

[username@cirrus01 dl]$ sbatch dl.sh
Submitted batch job 2027497

[username@cirrus01 dl]$ squeue
  JOBID PARTITION     NAME     USER ST  TIME  NODES NODELIST(REASON) 
2027497       gpu    dl.sh username  R  0:01      1 hpc062 

Check Job results

On completion, check the results in the standard output and error files:

[username@cirrus01 dl]$ ls -l
-rwxr-----+ 1 username usergroup   514 Jan  5 13:42 dl.sh
-rw-r-----+ 1 username usergroup  1378 Jan  5 15:42 run.py
-rw-r-----+ 1 username usergroup  4956 Jan  6 13:44 slurm-2027497.out

2) Run a Deep Learning job using the CVMFS python virtual environment

Instead of preparing a user Python virtual environment, we can use the environment already available on the system, named tensorflow/2.4.1; check it with the command:

[username@cirrus01 ~]$ module avail
---------------- /cvmfs/sw.el7/modules/hpc ------------------
...
intel/2019.mkl          mpich-3.2            tensorflow/2.4.1
...

We will change the submit script dl.sh to the following:

[username@cirrus01 dl]$ vi dl.sh
#!/bin/bash
#SBATCH -p gpu
#SBATCH --gres=gpu
#SBATCH --mem=64G

module load tensorflow/2.4.1
python run.py

[username@cirrus01 dl]$ ls -l
-rwxr-----+ 1 username usergroup  145 Jan  5 13:42 dl.sh
-rw-r-----+ 1 username usergroup 1378 Jan  5 15:42 run.py

and proceed as in the previous example.

3) Run a Deep Learning job using a container available on CVMFS

Another alternative is to use a TensorFlow container available on the CVMFS volume; this container is accessed through a wrapper, see the UDocker Containers page for more details. The environment is named udocker/tensorflow/gpu/2.4.1, check it with:

[username@cirrus01 ~]$ module avail
---------------- /cvmfs/sw.el7/modules/hpc ------------------
...
intel/hdf4/4.2.15   netcdf-fortran/4.5.2   udocker/tensorflow/gpu/2.4.1
...

In this case we edit the submit script dl.sh to load a different environment and invoke the run.py script through the u_wrapper wrapper, as shown below:

[username@cirrus01 dl]$ vi dl.sh
#!/bin/bash
#SBATCH -p gpu
#SBATCH --gres=gpu
#SBATCH --mem=64G

module load udocker/tensorflow/gpu/2.4.1
u_wrapper python run.py

[username@cirrus01 dl]$ ls -l
-rwxr-----+ 1 username usergroup  157 Jan  5 13:42 dl.sh
-rw-r-----+ 1 username usergroup 1378 Jan  5 15:42 run.py

then submit the job and collect the results as before.