The following tutorials show how to tune MXNet and how to use tools that improve training and inference performance.


Improving Performance

How to get the best performance from MXNet.


Profiler

How to profile MXNet models.

Tuning NumPy Operations

Gotchas using NumPy in MXNet.


Compression: float16

How to use float16 in your model to boost training speed.
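At the NDArray level, the idea is simply to hold tensors in half precision, which halves both memory and the bandwidth needed to move each tensor; a minimal sketch (assuming the MXNet 1.x NDArray API; for Gluon models the entry point is `net.cast('float16')`):

```python
import mxnet as mx

x32 = mx.nd.random.uniform(shape=(1024, 1024))   # float32 by default
x16 = x32.astype('float16')                      # half-precision copy

print(x32.dtype, x16.dtype)
# float32 uses 4 bytes per element, float16 only 2
print(x32.size * 4, x16.size * 2)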

Gradient Compression

How to use gradient compression to reduce communication bandwidth and increase speed.

Accelerated Backend

How to use NVIDIA’s TensorRT to boost inference performance.

Distributed Training

Distributed Training Using the KVStore API

How to use the KVStore API to train a model with multiple GPUs.

Training with Multiple GPUs Using Model Parallelism

An overview of using multiple GPUs when training an LSTM.

Data Parallelism in MXNet

An overview of distributed training strategies.

MXNet with Horovod

A set of example scripts demonstrating MNIST and ImageNet training with Horovod as the distributed training backend.