Setup Transfer Learning Toolkit with Docker on Ubuntu
Most Computer Vision products require configuring several things, including the GPU and the operating system, before different problems can be implemented. This sometimes causes issues for customers and even for the development team. With this in mind, Nvidia released the Jetson Nano, which has its own GPU, CPU, and SDKs, helping to overcome problems such as multi-framework development and multiple configurations.
The Jetson Nano is good in most respects except memory: it has only 2GB/4GB, shared between the GPU and CPU. Because of this, training custom Computer Vision models on the Jetson Nano is not possible. So, keeping these things in mind, Nvidia then released the Transfer Learning Toolkit, whose aim is to let users train their custom models on another system and later use the trained models for inference on the Jetson Nano with the DeepStream SDK.
In this article, we will focus solely on the installation of the Transfer Learning Toolkit with Docker.
As of today [22 May 2022], the Transfer Learning Toolkit only works reliably with Ubuntu 18.04; Nvidia may add support for Ubuntu 20.04 later. I have also tested Ubuntu 20.04: the Transfer Learning Toolkit installs completely, but training custom models throws multiple issues.
Also, make sure to install Nvidia driver version >= 455. With a CUDA 11.x installation, this requirement is already satisfied.
We will discuss the modules mentioned below. All steps have been tested on Ubuntu 18.04 and CUDA 11.x.
- Installation of Docker-CE (for pulling Toolkit from Nvidia Server)
- Installation of Nvidia-container-toolkit (to access Nvidia GPUs inside containers)
- Configure Nvidia-Container-Toolkit
- Pull and Install the Transfer Learning Toolkit with Docker
- Run Transfer learning Toolkit with Jupyter Notebook (for custom training)
Installation of Docker-CE:
Docker is an open-source platform that enables users to containerize their products. Its main advantage is that products can be developed and installed regardless of the host environment, because Docker configures the environment on its own.
For more details about Docker, visit What is Docker?
Now we need to install Docker. You can use the commands below to install Docker and Docker-CE on Ubuntu, or install them from the Docker installation website.
#Uninstall the old version if already installed
sudo apt-get remove docker docker-engine docker.io containerd runc

#Update packages
sudo apt-get update

#Install packages to allow apt to use a repository over HTTPS
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

#Add the Docker GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

#Set up the stable repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

#Update packages
sudo apt-get update

#Install Docker
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
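To see what the repository entry written by the echo | tee step looks like, here is a minimal sketch with the two command substitutions replaced by hard-coded example values (amd64 and bionic are assumptions for a typical x86-64 Ubuntu 18.04 machine; on your system, dpkg and lsb_release fill them in automatically):

```shell
# Stand-ins for $(dpkg --print-architecture) and $(lsb_release -cs)
arch=amd64
codename=bionic

# The resulting line that ends up in /etc/apt/sources.list.d/docker.list
echo "deb [arch=${arch} signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu ${codename} stable"
```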
Docker and Docker-CE are now installed properly. You can check the Docker version with the command mentioned below. You may need to prefix the commands with sudo if a “Permission Denied” error comes up.
#command
docker version   # or: docker -v

#output
Version: 19.03.8
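If a script needs just the version number rather than the full output, the client version can be cut out of the `docker -v` line; the sample string below is hard-coded (an assumption matching the usual output format) so the snippet runs even without Docker installed:

```shell
# Sample `docker -v` output, hard-coded so the snippet is self-contained
sample='Docker version 19.03.8, build afacb8b7f0'

# Take the third whitespace-separated field and strip the trailing comma
echo "$sample" | awk '{print $3}' | tr -d ','
# → 19.03.8
```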
Installation of Nvidia Container Toolkit:
Nvidia-container-toolkit is a library containing a set of tools for accessing and using Nvidia graphics devices from within Linux containers. We need to install it so that our Docker containers can access the GPU. You can use the commands mentioned below to install it on your Linux system, or install Nvidia-container-toolkit from the installation website.
#Setup docker-ce
curl https://get.docker.com | sh \
  && sudo systemctl --now enable docker

#Set the package repository and GPG key
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
  && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

#Update packages
sudo apt-get update

#Install the Nvidia Container Toolkit (the nvidia-docker2 package pulls it in)
sudo apt-get install -y nvidia-docker2

#Restart Docker so the Nvidia runtime is registered
sudo systemctl restart docker
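The `sed` expression in the repository step is what injects the `signed-by` keyring option into every line of the downloaded list file. Here is a minimal sketch of that substitution in isolation (the input line mimics a typical entry from libnvidia-container.list; the literal `$(ARCH)` token really does appear in that file):

```shell
# Example repository line as it appears in the downloaded list file
line='deb https://nvidia.github.io/libnvidia-container/stable/ubuntu18.04/$(ARCH) /'

# The same substitution used above: prepend the signed-by keyring option
echo "$line" | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g'
```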
Nvidia-container-toolkit is now installed properly.
Configure Nvidia-Container-Toolkit:
We need to configure the Nvidia Container Toolkit so that we can pull the Transfer Learning Toolkit from the Nvidia server. You can use the steps mentioned below to configure it, or follow the steps at the link.
- Go to NGC, and sign in with your Nvidia account.
- In the top-right corner, click on your user name, then select Setup.
- Click on Generate API Key; it will generate the API key that you will use to pull the Transfer Learning Toolkit from the Nvidia server.
- Go to the terminal, and run the mentioned command.
#Log in on the Nvidia server with Docker
docker login nvcr.io

#Enter the login credentials that you generated with "Generate API Key"
a. Username: "$oauthtoken"
b. Password: "YOUR_NGC_API_KEY"
Nvidia-Container-Toolkit is properly configured.
Pull and Install Transfer Learning Toolkit with Docker:
We have properly installed and configured Docker, Docker-CE, and the Nvidia Container Toolkit. It's time to pull the Transfer Learning Toolkit from the Nvidia server. You can use the command mentioned below.
#Pull the Transfer Learning Toolkit; it might take some time to download
docker pull nvcr.io/nvidia/tlt-streamanalytics:v3.0-dp-py3

Once the above command has downloaded the Transfer Learning Toolkit completely, you can use it with Jupyter Notebook.
Run Transfer learning Toolkit with Jupyter Notebook:
It’s time to run the Toolkit container in Docker with Jupyter Notebook. You can use the commands mentioned below to start a container from the pulled Transfer Learning Toolkit image and launch Jupyter Notebook. The -v flag mounts a host directory into the container (so your experiment files survive container restarts), and -p 8787:8787 publishes the Jupyter port to the host.
#Get the list of downloaded images
docker images

#The above command lists the images on your system; copy the name and tag of the tlt-streamanalytics image and use them in the command below

#Run the Transfer Learning Toolkit container (you may need to use sudo)
docker run --runtime=nvidia -it -v /home/transferlearningtoolkit/tlt-experiments:/workspace/tlt-experiments -p 8787:8787 nvcr.io/nvidia/tlt-streamanalytics:v3.0-dp-py3 /bin/bash

#Run Jupyter Notebook inside the container
jupyter notebook --ip 0.0.0.0 --port 8787 --allow-root

or

jupyter notebook --ip 0.0.0.0 --port 8787 --no-browser --allow-root
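Jupyter listens on 0.0.0.0 inside the container, and since port 8787 was published to the host, the notebook is reached through the host machine's address, not 0.0.0.0. Here is a small sketch of rewriting the URL Jupyter prints (the token and the HOST_IP placeholder are made up for illustration):

```shell
# Sample URL as Jupyter prints it inside the container (token is made up)
url='http://0.0.0.0:8787/?token=abc123def456'

# Swap 0.0.0.0 for your machine's address before opening it in a browser
echo "$url" | sed 's#0\.0\.0\.0#HOST_IP#'
# → http://HOST_IP:8787/?token=abc123def456
```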
If the above commands run successfully, Jupyter Notebook will start. Open the printed URL in Google Chrome or your default browser, and you will see a screen like the one shown below.
Go to the examples folder, where you will see different models, i.e. Detectnet-v2, YOLOv3, YOLOv4, SSD, etc. Open the folder of a specific model to see its notebook. Follow the steps in the notebook, and you will be able to train a custom model.
That is all for setting up the Transfer Learning Toolkit with Docker on Ubuntu. You can now try training Transfer Learning Toolkit models on your own data.
- Dataset creation from Videos: *Article Link
- Labeling Data for Custom Training: *Article Link
- Train YOLO-v5 on Custom Data: *Article Link
- Train YOLOR on Custom Data: *Article Link
- How to Prune and Sparse YOLOv5: *Article Link
- How Hyperparameters of YOLOv5 Works: *Article Link
About Authors
- Muhammad Rizwan Munawar is a highly experienced professional with more than three years of work experience in Computer Vision and Software Development. He works as a Computer Vision Engineer at Teknoir and has knowledge and expertise in different computer vision techniques, including Object Detection, Object Tracking, Pose Estimation, Object Segmentation, and Segment Anything, as well as Python, Software Development, Embedded Systems, and Nvidia embedded devices. In his free time, he likes to play online games and enjoys sharing knowledge with the community by writing articles on Medium.
- LinkedIn Profile
Please feel free to comment below if you have any questions.
