This project has retired. For details please refer to its Attic page.
Table Of Contents
Python Tutorials
Getting Started
Crash Course
Manipulate data with ndarray
Create a neural network
Automatic differentiation with autograd
Train the neural network
Predict with a pre-trained model
Use GPUs
Moving to MXNet from Other Frameworks
PyTorch vs Apache MXNet
Gluon: from experiment to deployment
Logistic regression explained
MNIST
Packages
Automatic Differentiation
Gluon
Blocks
Custom Layers
Custom Layers (Beginners)
Hybridize
Initialization
Parameter and Block Naming
Layers and Blocks
Parameter Management
Saving and Loading Gluon Models
Activation Blocks
Data Tutorials
Image Augmentation
Spatial Augmentation
Color Augmentation
Composed Augmentations
Gluon Datasets and DataLoader
Using own data with included Datasets
Using own data with custom Datasets
Appendix: Upgrading from Module DataIter to Gluon DataLoader
Image Tutorials
Image Augmentation
Image similarity search with InfoGAN
Handwritten Digit Recognition
Using pre-trained models in MXNet
Losses
Custom Loss Blocks
Kullback-Leibler (KL) Divergence
Loss functions
Text Tutorials
Google Neural Machine Translation
Machine Translation with Transformer
Training
MXNet Gluon Fit API
Trainer
Learning Rates
Learning Rate Finder
Learning Rate Schedules
Advanced Learning Rate Schedules
Normalization Blocks
KVStore
Distributed Key-Value Store
NDArray
An Intro: Manipulate Data the MXNet Way with NDArray
NDArray Operations
NDArray Contexts
Gotchas using NumPy in Apache MXNet
Tutorials
CSRNDArray - NDArray in Compressed Sparse Row Storage Format
RowSparseNDArray - NDArray for Sparse Gradient Updates
Train a Linear Regression Model with Sparse Symbols
Sparse NDArrays with Gluon
ONNX
Fine-tuning an ONNX model
Running inference on MXNet/Gluon from an ONNX model
Importing an ONNX model into MXNet
Export ONNX Models
Optimizers
Visualization
Visualize networks
Performance
Compression
Deploy with int-8
Float16
Gradient Compression
GluonCV with Quantized Models
Accelerated Backend Tools
Intel MKL-DNN
Quantize with MKL-DNN backend
Improving accuracy with Intel® Neural Compressor
Install MXNet with MKL-DNN
TensorRT
Optimizing Deep Learning Computation Graphs with TensorRT
Use TVM
Profiling MXNet Models
Using AMP: Automatic Mixed Precision
Deployment
Export
Exporting to ONNX format
Export Gluon CV Models
Save / Load Parameters
Inference
Deploy into C++
Image Classification using pretrained ResNet-50 model on Jetson module
Deploy into a Java or Scala Environment
Real-time Object Detection with MXNet On The Raspberry Pi
Run on AWS
Run on an EC2 Instance
Run on Amazon SageMaker
MXNet on the Cloud
Extend
Custom Layers
Custom Numpy Operators
New Operator Creation
New Operator in MXNet Backend
Python API
mxnet.ndarray
ndarray
ndarray.contrib
ndarray.image
ndarray.linalg
ndarray.op
ndarray.random
ndarray.register
ndarray.sparse
ndarray.utils
mxnet.gluon
gluon.Block
gluon.HybridBlock
gluon.SymbolBlock
gluon.Constant
gluon.Parameter
gluon.ParameterDict
gluon.Trainer
gluon.contrib
gluon.data
data.vision
vision.datasets
vision.transforms
gluon.loss
gluon.model_zoo.vision
gluon.nn
gluon.rnn
gluon.utils
mxnet.autograd
mxnet.initializer
mxnet.optimizer
mxnet.lr_scheduler
mxnet.metric
mxnet.kvstore
mxnet.symbol
symbol
symbol.contrib
symbol.image
symbol.linalg
symbol.op
symbol.random
symbol.register
symbol.sparse
mxnet.module
mxnet.contrib
contrib.autograd
contrib.io
contrib.ndarray
contrib.onnx
contrib.quantization
contrib.symbol
contrib.tensorboard
contrib.tensorrt
contrib.text
mxnet
mxnet.attribute
mxnet.base
mxnet.callback
mxnet.context
mxnet.engine
mxnet.executor
mxnet.executor_manager
mxnet.image
mxnet.io
mxnet.kvstore_server
mxnet.libinfo
mxnet.log
mxnet.model
mxnet.monitor
mxnet.name
mxnet.notebook
mxnet.operator
mxnet.profiler
mxnet.random
mxnet.recordio
mxnet.registry
mxnet.rtc
mxnet.runtime
mxnet.test_utils
mxnet.torch
mxnet.util
mxnet.visualization
Losses
Custom Loss Blocks
Kullback-Leibler (KL) Divergence
Loss functions
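One of the topics above is the Kullback-Leibler (KL) divergence, which Gluon exposes as a loss block (`gluon.loss.KLDivLoss`). As a framework-free sketch of the underlying formula, the discrete KL divergence D_KL(P || Q) = Σᵢ pᵢ · log(pᵢ / qᵢ) can be computed with the standard library alone; the function name below is illustrative, not part of the MXNet API:

```python
import math

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence D_KL(P || Q).

    p and q are sequences of probabilities over the same support.
    Terms where p_i == 0 contribute 0 by convention.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Identical distributions have zero divergence;
# the further Q drifts from P, the larger the value.
print(kl_divergence([0.5, 0.5], [0.5, 0.5]))  # 0.0
print(kl_divergence([0.5, 0.5], [0.9, 0.1]))
```

Note that KL divergence is not symmetric: D_KL(P || Q) generally differs from D_KL(Q || P), which is why the order of arguments to a KL-based loss matters.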