Converting Module API code to the Gluon API¶
Sometimes you find yourself in the situation where the model you want to use has been written using the symbolic Module API rather than the simpler, easier-to-debug, more flexible, imperative Gluon API. In this tutorial, we will give you a comprehensive guide for transforming Module code to Gluon code.
The different steps to take into consideration are:
I) Data loading
II) Model definition
III) Loss
IV) Training Loop
V) Exporting Models
VI) Loading Models for Inference
In the following sections we will look at 1:1 mappings between the Module and the Gluon ways of training a neural network.
I - Data Loading¶
In this section we will be looking at the difference in loading data between Module and Gluon. Let’s first import a few Python modules.
from collections import namedtuple
import logging
logging.basicConfig(level=logging.INFO)
import random
import numpy as np
import mxnet as mx
from mxnet.gluon.data import ArrayDataset, DataLoader
from mxnet.gluon import nn
from mxnet import gluon
# parameters
batch_size = 5
dataset_length = 50
# random seeds
random.seed(1)
np.random.seed(1)
mx.random.seed(1)
Module¶
When using the Module API we use a DataIter; in addition to the data itself, the DataIter carries information about the names of the input symbols.
In the Module API, DataIters are responsible both for holding the data and for iterating through it. Some DataIters, like the ImageRecordIter, support multi-threading, while others, such as the NDArrayIter, don't.
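As a quick, hedged illustration (not part of the original example), an ImageRecordIter reads packed images from a RecordIO file and can decode them with several threads; the file path below is hypothetical:
# Sketch only: 'data/train.rec' is a hypothetical RecordIO file
rec_iter = mx.io.ImageRecordIter(
    path_imgrec='data/train.rec',  # packed images and labels (RecordIO format)
    data_shape=(3, 224, 224),      # decoded image shape: channels, height, width
    batch_size=32,
    shuffle=True,
    preprocess_threads=4           # decode images with multiple threads
)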
Let’s create some random data, following the same format as grayscale 28x28 images.
train_data = np.random.rand(dataset_length, 28,28).astype('float32')
train_label = np.random.randint(0, 10, (dataset_length,)).astype('float32')
We can now wrap this data in an NDArrayIter that will create batches of data, using the first dimension of the provided arrays as the batch dimension.
data_iter = mx.io.NDArrayIter(data=train_data, label=train_label, batch_size=batch_size, shuffle=False, data_name='data', label_name='softmax_label')
for batch in data_iter:
    print(batch.data[0].shape, batch.label[0])
    break
(5, 28, 28)
[5. 0. 3. 4. 9.]
<NDArray 5 @cpu(0)>
Gluon¶
With Gluon, the preferred method is to use a DataLoader that makes use of a Dataset to asynchronously prefetch the data.
The Gluon API lets you fetch data efficiently and separates the concern of loading data from that of holding it. The Dataset has ownership of the data, which can be in or out of memory, while the DataLoader requests specific indices of the Dataset, in the main thread or through multi-processing (or multi-threaded) workers, and batches the samples together.
dataset = ArrayDataset(train_data, train_label)
dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=False, num_workers=0)
for data, label in dataloader:
    print(data.shape, label)
    break
(5, 28, 28)
[5. 0. 3. 4. 9.]
<NDArray 5 @cpu(0)>
Check out the Dataset and DataLoader tutorials for more details. You can rewrite your code to use one of the provided Dataset classes, like the ArrayDataset or the ImageFolderDataset.
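If none of the provided classes fits your data, a Dataset only needs __getitem__ and __len__. Below is a minimal, hypothetical sketch (the class name is ours, not part of the API) of a custom Dataset wrapping the NumPy arrays from above, used with a multi-worker DataLoader:
from mxnet.gluon.data import Dataset

class MyArrayDataset(Dataset):  # hypothetical custom Dataset
    def __init__(self, data, label):
        self._data = data
        self._label = label
    def __getitem__(self, idx):
        # return a single sample; the DataLoader batches samples together
        return self._data[idx], self._label[idx]
    def __len__(self):
        return len(self._data)

custom_loader = DataLoader(MyArrayDataset(train_data, train_label),
                           batch_size=batch_size, shuffle=True, num_workers=2)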
II - Model Definition¶
Let’s look at the model definition from the MNIST Module Tutorial:
Module¶
For the Module API, you define the data flow by passing the output of one layer as the data keyword argument of the next.
You then bind the symbolic model to a specific compute context and specify the symbol names for the data and the label.
# context
ctx = mx.cpu()
def get_module_network():
    data = mx.sym.var('data')
    data = mx.sym.flatten(data=data)
    fc1 = mx.sym.FullyConnected(data=data, num_hidden=128)
    act1 = mx.sym.Activation(data=fc1, act_type="relu")
    fc2 = mx.sym.FullyConnected(data=act1, num_hidden=64)
    act2 = mx.sym.Activation(data=fc2, act_type="relu")
    fc3 = mx.sym.FullyConnected(data=act2, num_hidden=10)
    mlp = mx.sym.SoftmaxOutput(data=fc3, name='softmax')
    return mlp
mlp = get_module_network()
# Bind model to Module
mlp_model = mx.mod.Module(symbol=mlp, context=ctx, data_names=['data'], label_names=['softmax_label'])
Gluon¶
In Gluon, for the equivalent model, you would create a Sequential block, in this case a HybridSequential block to allow for future hybridization, since we are only using hybridizable blocks. The flow of the data is set automatically from one layer to the next, since they are held in a Sequential block.
Note that we don't need named symbols for the input; we show how the loss is handled in Gluon in the next section.
def get_gluon_network():
    net = nn.HybridSequential()
    with net.name_scope():
        net.add(
            nn.Flatten(),
            nn.Dense(units=128, activation="relu"),
            nn.Dense(units=64, activation="relu"),
            nn.Dense(units=10)
        )
    return net
net = get_gluon_network()
III - Loss¶
The loss that you are trying to minimize, using an optimization algorithm such as SGD, is defined differently in the Module API than in Gluon.
Module¶
In the Module API, the loss is part of the network. It usually has a forward pass result, which is the inference value, and a backward pass that computes the gradient of the output with respect to that particular loss.
For example, sym.SoftmaxOutput computes the softmax in the forward pass and the gradient with respect to the cross-entropy loss in the backward pass.
# Softmax with cross entropy loss, directly part of the network
out = mx.sym.SoftmaxOutput(data=mlp, name='softmax')
Gluon¶
In Gluon, it is a lot more transparent. Losses, like the SoftmaxCrossEntropyLoss, only compute the actual value of the loss. You then call .backward() on the loss value to compute the gradient of the parameters with respect to that loss. At inference time, you simply call .softmax() on your output to normalize it with the softmax function.
# We simply create a loss function we will use in our training loop
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
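As a quick sketch (not part of the original tutorial flow, run on dummy data with a throwaway copy of the network from section II), the loss is an ordinary value that you compute under autograd.record() and then call .backward() on:
# Minimal sketch on dummy data; `example_net` is a throwaway copy of the network
example_net = get_gluon_network()
example_net.initialize(mx.init.Xavier(), ctx=ctx)

x = mx.nd.random.uniform(shape=(batch_size, 28, 28), ctx=ctx)  # dummy batch
y = mx.nd.array([0, 1, 2, 3, 4], ctx=ctx)                      # dummy labels
with mx.autograd.record():
    loss = loss_fn(example_net(x), y)  # per-sample loss values
loss.backward()                        # gradients of the parameters w.r.t. the loss
print(example_net(x).softmax())        # at inference time, apply softmax yourself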
In the next section we will show how you use this loss function in Gluon to generate the loss value in the main training loop.
IV - Training Loop¶
Module¶
The Module API provides a .fit() function that takes care of fitting training data to your symbolic model. With Gluon, your execution flow controls the data flow, so you write your own loop. It might seem more verbose, but you have a lot more control over what happens during training.
With the .fit() function, you control metric reporting, checkpointing, and weight initialization through a number of keyword arguments (check the docs). That is also where you define the optimizer, for example.
mlp_model.fit(data_iter,  # train data
              eval_data=data_iter,  # validation data
              optimizer='sgd',  # use SGD to train
              force_init=True,
              force_rebind=True,
              optimizer_params={'learning_rate': 0.1},  # use fixed learning rate
              eval_metric='acc',  # report accuracy during training
              num_epoch=5)  # train for 5 full dataset passes
INFO:root:Epoch[4] Train-accuracy=0.070000
INFO:root:Epoch[4] Time cost=0.038
INFO:root:Epoch[4] Validation-accuracy=0.125000
Gluon¶
With Gluon, you do these operations directly in the training loop, and the optimizer is part of the Trainer object that handles the weight updates of your parameters.
Notice the loss.backward() call before updating the weights, as mentioned in the previous section.
net.initialize(mx.init.Xavier(magnitude=2.24), ctx=ctx) # Initialize network and trainer
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
metric = mx.metric.Accuracy() # Pick a metric
for e in range(5):  # start of epoch
    for data, label in dataloader:  # start of mini-batch
        data = data.as_in_context(ctx)
        label = label.as_in_context(ctx)
        with mx.autograd.record():
            output = net(data)  # forward pass
            loss = loss_fn(output, label)  # get loss
        loss.backward()  # compute gradients
        trainer.step(data.shape[0])  # update weights with SGD
        metric.update(label, output)  # update the metrics, end of mini-batch
    name, acc = metric.get()
    print('training metrics at epoch %d: %s=%f' % (e, name, acc))
    metric.reset()  # end of epoch
training metrics at epoch 3: accuracy=0.155000
training metrics at epoch 4: accuracy=0.145000
The Gluon training code is more verbose than the simple .fit() from Module. However, that is also its main advantage: there is no black magic going on, and you have full control of your training loop. You can, for example, easily set breakpoints, modify the learning rate, or print data during training. This flexibility also makes it easy to implement more complex use cases, like gradient accumulation across batches, as in the sketch below.
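Here is a hedged sketch of one way gradient accumulation could be written, reusing the net, trainer, loss_fn and dataloader from above (the accumulation scheme itself is our assumption, not part of the original tutorial): gradients are summed over several mini-batches before a single optimizer step.
# Sketch only: accumulate gradients over `accumulate` mini-batches per update
accumulate = 4
params = [p for p in net.collect_params().values() if p.grad_req != 'null']
for p in params:
    p.grad_req = 'add'  # sum gradients instead of overwriting them

for i, (data, label) in enumerate(dataloader):
    data = data.as_in_context(ctx)
    label = label.as_in_context(ctx)
    with mx.autograd.record():
        loss = loss_fn(net(data), label)
    loss.backward()
    if (i + 1) % accumulate == 0:
        trainer.step(batch_size * accumulate)  # one update for `accumulate` batches
        for p in params:
            p.zero_grad()  # reset the accumulated gradients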
V - Exporting Models¶
The ultimate purpose of training a model is to be able to export it and share it, whether it is for deployment or simply reproducibility purposes.
Module¶
With the Module API, you save the model using .save_checkpoint() and get a -symbol.json and a .params file that represent your network.
mlp_model.save_checkpoint('module-model', epoch=5)
# module-model-0005.params module-model-symbol.json
INFO:root:Saved checkpoint to "module-model-0005.params"
Gluon¶
With Gluon, network parameters are associated with a Block, but the execution flow is controlled in Python through the code in the .forward() function. Hence, only hybridized networks can be exported to a -symbol.json and a .params file using .export(); non-hybridized models can only have their parameters exported using .save_parameters(). Check out this great tutorial to learn more: Saving and Loading Gluon Models.
Any model:
# save only the parameters
net.save_parameters('gluon-model.params')
# gluon-model.params
Hybridized models:
# save the parameters and the symbolic representation
net.hybridize()
net(mx.nd.ones((1,1,28,28), ctx))
net.export('gluon-model-hybrid', epoch=5)
# gluon-model-hybrid-symbol.json gluon-model-hybrid-0005.params
VI - Loading Models for Inference¶
Module¶
For inference with the Module API, you first load the symbol and parameters from the checkpoint, bind the symbol to a module, and set the corresponding parameters. You can then pass a batch of data through that module and request the output of the network.
# Load the symbol and parameters
sym, arg_params, aux_params = mx.model.load_checkpoint('module-model', 5)
# Bind them in a module
mod = mx.mod.Module(symbol=sym, context=ctx, label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1,1,28,28))],
         label_shapes=mod._label_shapes)
# Set the parameters
mod.set_params(arg_params, aux_params, allow_missing=True)
# Run the inference
Batch = namedtuple('Batch', ['data'])
mod.forward(Batch([mx.nd.ones((1,28,28))]))
prob = mod.get_outputs()[0].asnumpy()
print("Output probabilities: {}".format(prob))
Output probabilities: [[0.05537598 0.03889056 0.06126577 0.08879893 0.12371024 0.05759033 0.1378248 0.26134694 0.07905186 0.09614458]]
Gluon (Symbolic Model)¶
For the Gluon API, it is a lot simpler. You can just load a serialized model into a SymbolBlock and run inference directly.
import warnings
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    net = gluon.SymbolBlock.imports('module-model-symbol.json', ['data', 'softmax_label'], 'module-model-0005.params')

prob = net(mx.nd.ones((1,1,28,28)), mx.nd.ones(1))  # note the second argument here to account for the softmax_label symbol
print("Output probabilities: {}".format(prob.asnumpy()))
Output probabilities: [[0.05537598 0.03889056 0.06126577 0.08879893 0.12371024 0.05759033 0.1378248 0.26134694 0.07905186 0.09614458]]
Gluon (Imperative Model)¶
net = get_gluon_network()
net.load_parameters('gluon-model.params')
prob = net(mx.nd.ones((1,1,28,28))).softmax()
print("Output probabilities: {}".format(prob.asnumpy()))
Output probabilities: [[0.01298077 0.00173413 0.01661885 0.3362421 0.00536332 0.02099853 0.01413316 0.5528366 0.0133819 0.02571066]]
Conclusion¶
This tutorial led you through the steps necessary to train a deep learning model and showed you the differences between the symbolic approach of the Module API and the imperative one of the Gluon API. If you need more help converting your Module API code to the Gluon API, reach out to the community on the discuss forum! You can also compare the scripts for training MNIST in Gluon and Module.