Testing ResNet50 Performance (Nvidia Docker/Ubuntu)

This is the basic setup for TensorFlow synthetic benchmarking on Ubuntu. This will all be run on NVIDIA GPUs only.

Install the CUDA toolkit:
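
One way to do this on Ubuntu (a minimal sketch; the package below is Ubuntu's own packaged toolkit, while NVIDIA's repository at https://developer.nvidia.com/cuda-downloads will give you a newer release, so pick whichever suits your setup):

sudo apt-get install nvidia-cuda-toolkit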

Install Docker:

sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce
sudo usermod -aG docker $USER

Log out and back in so the group change takes effect.
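
A quick sanity check that Docker itself works before adding the NVIDIA runtime:

docker run --rm hello-world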

Install NVIDIA Docker:

distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
   && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
   && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y nvidia-docker2
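
Restart the Docker daemon so it picks up the new runtime, then optionally confirm the GPUs are visible from inside a container (the nvidia/cuda:11.0-base tag here is only an example; substitute any current CUDA base image tag from Docker Hub):

sudo systemctl restart docker
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi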

Get the TensorFlow container pull command from:

https://ngc.nvidia.com/catalog/containers/nvidia:tensorflow

It will look something like:

docker pull nvcr.io/nvidia/tensorflow:20.10-tf2-py3

Run the container:

The TensorFlow container page will list a version tag at the top, something like 20.10-tf2-py3; fill that into the following command line. Docker will also let you mount a local directory in the container, for file sharing between the host and container; “local_dir:container_dir” is where you set that. When running on my desktop I set it to /home/mark/mltest/:/workspace.

docker run --gpus all -it --rm -v local_dir:container_dir nvcr.io/nvidia/tensorflow:xx.xx-tfx-py3
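
For example, with the tag and mount from above, the command on my desktop looks like this:

docker run --gpus all -it --rm -v /home/mark/mltest/:/workspace nvcr.io/nvidia/tensorflow:20.10-tf2-py3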

NOTE: the container may not run if your host driver doesn’t meet the container’s requirements. It WILL tell you this during launch and specify which driver it requires. Exit the container, then download the correct driver and install it on the host machine.
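
To check which driver the host currently has, run nvidia-smi; the driver version is printed in the header of its output:

nvidia-smi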

This guide assumes you do not have the TF benchmarks in your local directory; inside the container you can download them:

wget https://github.com/tensorflow/benchmarks/archive/master.zip
unzip master.zip
cd benchmarks-master/scripts/tf_cnn_benchmarks
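
To see the full list of flags the script accepts (it supports many more options than the two commands below use):

python tf_cnn_benchmarks.py --help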

Now run the benchmark. If you want to change the number of GPUs, add --num_gpus=x to the command (see the example after the two commands below):

Without XLA (running on an AMD CPU):
python tf_cnn_benchmarks.py --data_format=NCHW --batch_size=256 --num_batches=100 --model=resnet50 --optimizer=momentum --variable_update=replicated --all_reduce_spec=nccl

With XLA (running on an Intel CPU that supports XLA):
python tf_cnn_benchmarks.py --data_format=NCHW --batch_size=256 --num_batches=100 --model=resnet50 --optimizer=momentum --variable_update=replicated --all_reduce_spec=nccl --xla_compile=True
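
For example, to run the same benchmark on two GPUs (the count of 2 here is just illustrative):

python tf_cnn_benchmarks.py --data_format=NCHW --batch_size=256 --num_batches=100 --model=resnet50 --optimizer=momentum --variable_update=replicated --all_reduce_spec=nccl --num_gpus=2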

This is defunct and shouldn’t be followed anymore. I’ll post a non-broken docker guide in the future.

Appreciate the follow-up!

This will get you up and running… with the transitions to the new CUDA versions, new kernels, and TensorFlow, there’s some janky stuff happening right now for bare-metal installations.