This project has retired. For details, please refer to its Attic page.
Table Of Contents
Python Tutorials
Getting Started
Crash Course
Manipulate data with ndarray
Create a neural network
Automatic differentiation with autograd
Train the neural network
Predict with a pre-trained model
Use GPUs
Moving to MXNet from Other Frameworks
PyTorch vs Apache MXNet
Gluon: from experiment to deployment
Logistic regression explained
MNIST
Packages
Automatic Differentiation
Gluon
Blocks
Custom Layers
Custom Layers (Beginners)
Hybridize
Initialization
Parameter and Block Naming
Layers and Blocks
Parameter Management
Saving and Loading Gluon Models
Activation Blocks
Image Tutorials
Image Augmentation
Handwritten Digit Recognition
Using pre-trained models in MXNet
Losses
Custom Loss Blocks
Kullback-Leibler (KL) Divergence
Loss functions
Text Tutorials
Google Neural Machine Translation
Machine Translation with Transformer
Training
MXNet Gluon Fit API
Trainer
Learning Rates
Learning Rate Finder
Learning Rate Schedules
Advanced Learning Rate Schedules
Normalization Blocks
KVStore
Distributed Key-Value Store
NDArray
An Intro: Manipulate Data the MXNet Way with NDArray
NDArray Operations
NDArray Contexts
Gotchas using NumPy in Apache MXNet
Tutorials
CSRNDArray - NDArray in Compressed Sparse Row Storage Format
RowSparseNDArray - NDArray for Sparse Gradient Updates
Train a Linear Regression Model with Sparse Symbols
Sparse NDArrays with Gluon
ONNX
Fine-tuning an ONNX model
Running inference on MXNet/Gluon from an ONNX model
Importing an ONNX model into MXNet
Export ONNX Models
Optimizers
Visualization
Visualize networks
Performance
Compression
Deploy with int-8
Float16
Gradient Compression
GluonCV with Quantized Models
Accelerated Backend Tools
Intel MKL-DNN
Quantize with MKL-DNN backend
Install MXNet with MKL-DNN
TensorRT
Optimized GPU Inference
Use TVM
Profiling MXNet Models
Using AMP: Automatic Mixed Precision
Deployment
Export
Exporting to ONNX format
Export Gluon CV Models
Save / Load Parameters
Inference
Deploy into C++
Deploy into a Java or Scala Environment
Real-time Object Detection with MXNet On The Raspberry Pi
Run on AWS
Run on an EC2 Instance
Run on Amazon SageMaker
MXNet on the Cloud
Extend
Custom Layers
Custom Numpy Operators
New Operator Creation
New Operator in MXNet Backend
Python API
mxnet.ndarray
ndarray
ndarray.contrib
ndarray.image
ndarray.linalg
ndarray.op
ndarray.random
ndarray.register
ndarray.sparse
ndarray.utils
mxnet.gluon
gluon.Block
gluon.HybridBlock
gluon.SymbolBlock
gluon.Constant
gluon.Parameter
gluon.ParameterDict
gluon.Trainer
gluon.contrib
gluon.data
data.vision
vision.datasets
vision.transforms
gluon.loss
gluon.model_zoo.vision
gluon.nn
gluon.rnn
gluon.utils
mxnet.autograd
mxnet.initializer
mxnet.optimizer
mxnet.lr_scheduler
mxnet.metric
mxnet.kvstore
mxnet.symbol
symbol
symbol.contrib
symbol.image
symbol.linalg
symbol.op
symbol.random
symbol.register
symbol.sparse
mxnet.module
mxnet.contrib
contrib.autograd
contrib.io
contrib.ndarray
contrib.onnx
contrib.quantization
contrib.symbol
contrib.tensorboard
contrib.tensorrt
contrib.text
mxnet
mxnet.attribute
mxnet.base
mxnet.callback
mxnet.context
mxnet.engine
mxnet.executor
mxnet.executor_manager
mxnet.image
mxnet.io
mxnet.kvstore_server
mxnet.libinfo
mxnet.log
mxnet.model
mxnet.monitor
mxnet.name
mxnet.notebook
mxnet.operator
mxnet.profiler
mxnet.random
mxnet.recordio
mxnet.registry
mxnet.rtc
mxnet.test_utils
mxnet.torch
mxnet.util
mxnet.visualization
Tutorials
CSRNDArray - NDArray in Compressed Sparse Row Storage Format
Advantages of Compressed Sparse Row NDArray (CSRNDArray)
Prerequisites
Compressed Sparse Row Matrix
Example Matrix Compression
Array Creation
Inspecting Arrays
Storage Type Conversion
Copies
Indexing and Slicing
Sparse Operators and Storage Type Inference
Data Loading
Advanced Topics
GPU Support
Next
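As a quick illustration of what the CSRNDArray tutorial outlined above covers, here is a minimal sketch (assuming MXNet 1.x installed as `mxnet`; it is illustrative, not code taken from the tutorial) that converts a dense matrix to compressed sparse row storage and inspects the three arrays behind the format.

```python
import mxnet as mx

# Dense matrix with mostly zero entries.
dense = mx.nd.array([[0, 1, 0],
                     [2, 0, 0],
                     [0, 0, 3]])

# Convert to compressed sparse row storage.
csr = dense.tostype('csr')
print(csr.stype)               # 'csr'
print(csr.data.asnumpy())      # non-zero values: [1. 2. 3.]
print(csr.indices.asnumpy())   # column index of each non-zero value
print(csr.indptr.asnumpy())    # row pointers into data/indices

# Convert back to dense (default) storage.
dense_again = csr.tostype('default')
```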
RowSparseNDArray - NDArray for Sparse Gradient Updates
Motivation
Prerequisites
Row Sparse Format
Array Creation
Function Overview
Setting Type
Inspecting Arrays
Storage Type Conversion
Copies
Retain Row Slices
Sparse Operators and Storage Type Inference
Sparse Optimizers
Advanced Topics
GPU Support
Next
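Similarly, a minimal sketch of the row-sparse format described in the RowSparseNDArray tutorial above (again assuming MXNet 1.x; the values and shapes are chosen for illustration, not taken from the tutorial):

```python
import mxnet as mx

# Values for the two non-zero rows and the indices of those rows.
data = mx.nd.array([[1, 2], [3, 4]])
indices = mx.nd.array([1, 4])

# A 6x2 array in which only rows 1 and 4 are actually stored.
rsp = mx.nd.sparse.row_sparse_array((data, indices), shape=(6, 2))
print(rsp.stype)       # 'row_sparse'
print(rsp.asnumpy())   # dense view with zeros in the unstored rows

# Keep only row 1; the result is still row_sparse.
retained = mx.nd.sparse.retain(rsp, mx.nd.array([1]))
```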
Train a Linear Regression Model with Sparse Symbols
Prerequisites
Variables
Variable Storage Types
Bind with Sparse Arrays
Symbol Composition and Storage Type Inference
Basic Symbol Composition
Storage Type Inference
Storage Type Fallback
Inspecting Storage Types of the Symbol Graph
Training with Module APIs
Preparing the Data
Defining the Model
Training the model
Training the model with multiple machines or multiple devices
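The linear-regression tutorial above uses the legacy Symbol/Module API. As a hedged sketch of how storage types are declared and composed at the symbol level (assuming MXNet 1.x; the variable names and shapes are illustrative, not from the tutorial):

```python
import mxnet as mx

# Placeholders that carry storage-type hints for the executor.
X = mx.sym.Variable('data', stype='csr')                        # sparse input
W = mx.sym.Variable('weight', stype='row_sparse', shape=(100, 1))
Y = mx.sym.Variable('label')

# Sparse dot product; the output storage type is inferred.
pred = mx.sym.sparse.dot(X, W)
loss = mx.sym.LinearRegressionOutput(data=pred, label=Y)

# The composed graph can then be bound and trained with the Module API.
mod = mx.mod.Module(symbol=loss, data_names=['data'], label_names=['label'])
```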
Sparse NDArrays with Gluon
Generating Sparse Data
Writing Sparse Data
Reading Sparse Data
Gluon Models for Sparse Data
Benchmark: FullyConnected
Benchmark: FullyConnectedSparse
Benchmark: FullyConnectedSparse with grad_stype=row_sparse
Advanced: Sparse weight
Conclusion
Recommended Next Steps
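Finally, the Gluon tutorial above benchmarks dense versus sparse fully connected layers. The following is a hedged sketch of the underlying pattern (assuming MXNet 1.x; it uses a standalone gluon.Parameter with grad_stype='row_sparse' for illustration and is not code from the tutorial):

```python
import mxnet as mx
from mxnet import autograd, gluon

# A sparse (CSR) input batch.
x = mx.nd.array([[0, 1, 0],
                 [0, 0, 2]]).tostype('csr')

# Weight whose gradient is stored in row_sparse format.
weight = gluon.Parameter('weight', shape=(3, 4), grad_stype='row_sparse')
weight.initialize(mx.init.Xavier())

with autograd.record():
    y = mx.nd.sparse.dot(x, weight.data())   # csr * dense -> dense
    loss = y.sum()
loss.backward()

print(weight.grad().stype)   # expected: 'row_sparse'
```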