# Step 1: Manipulate data with NP on MXNet¶

This getting-started exercise introduces the MXNet np package for ndarrays. These ndarrays extend the functionality of the familiar NumPy ndarrays by adding support for GPUs and automatic differentiation with autograd. Many NumPy methods are available within MXNet, so we will only briefly cover some of what is available.

## Import packages and create an array¶

To get started, run the following commands to import the np package together with the NumPy extensions package npx. Together, np with npx make up the NP on MXNet front end.

[1]:

import mxnet as mx
from mxnet import np, npx
npx.set_np()  # Activate NumPy-like mode.


In this step, create a 2D array (also called a matrix). The following code example creates a matrix with values from two sets of numbers: 1, 2, 3 and 5, 6, 7. The argument is a tuple of tuples of integers.

[2]:

np.array(((1, 2, 3), (5, 6, 7)))


[2]:

array([[1., 2., 3.],
       [5., 6., 7.]])


You can also create a matrix with the same shape (2 rows by 3 columns), but filled with ones.

[3]:

x = np.full((2, 3), 1)
x

[3]:

array([[1, 1, 1],
       [1, 1, 1]], dtype=int64)


Alternatively, you could use the following array creation routine.

[4]:

x = np.ones((2, 3))
x

[4]:

array([[1., 1., 1.],
       [1., 1., 1.]])


You can create arrays whose values are sampled randomly, for example, uniformly between -1 and 1. The following code example creates an array of the same shape, filled with random samples.

[5]:

y = np.random.uniform(-1, 1, (2, 3))
y

[5]:

array([[-0.9364808 ,  0.16642416,  0.52422976],
       [ 0.739336  , -0.68318725,  0.50007904]])


As with NumPy, the dimensions of each ndarray are available through the .shape attribute. As the following code example shows, you can also query the .size, which equals the product of the components of the shape. In addition, .dtype tells you the data type of the stored values. Note that random uniform sampling produces float32 values, rather than the float64 default of standard NumPy arrays.

[6]:

(x.shape, x.size, x.dtype)

[6]:

((2, 3), 6, dtype('float32'))


You can also specify the data type when you create your ndarray.

[7]:

x = np.full((2, 3), 1, dtype="int8")
x.dtype

[7]:

dtype('int8')


Compare this to the default data type, which is inferred from the fill value (here, int64 for an integer fill).

[8]:

x = np.full((2, 3), 1)
x.dtype

[8]:

dtype('int64')


When we combine arrays of different data types with arithmetic operations, the result by default takes the data type with the most precision.

[9]:

x = x.astype("int8") + x.astype(int) + x.astype("float32")
x.dtype

[9]:

dtype('float32')


## Performing operations on an array¶

An ndarray supports a large number of standard mathematical operations. Here are some examples. You can perform element-wise multiplication by using the following code example.

[10]:

x * y

[10]:

array([[-2.8094425 ,  0.49927247,  1.5726893 ],
       [ 2.218008  , -2.0495617 ,  1.5002371 ]])


You can perform exponentiation by using the following code example.

[11]:

np.exp(y)

[11]:

array([[0.39200494, 1.1810739 , 1.6891572 ],
       [2.0945444 , 0.5050048 , 1.6488516 ]])


You can also take a matrix's transpose to compute a proper matrix-matrix product, as shown in the following code example.

[12]:

np.dot(x, y.T)

[12]:

array([[-0.7374809,  1.6686834],
       [-0.7374809,  1.6686834]])


Alternatively, you could use the matrix multiplication function.

[13]:

np.matmul(x, y.T)

[13]:

array([[-0.7374809,  1.6686834],
       [-0.7374809,  1.6686834]])


You can use built-in operators, like summation.

[14]:

x.sum()

[14]:

array(18.)


You can also compute the mean value.

[15]:

x.mean()

[15]:

array(3.)


You can flatten and reshape just as you normally would in NumPy!

[16]:

x.flatten()

[16]:

array([3., 3., 3., 3., 3., 3.])

[17]:

x.reshape(6, 1)

[17]:

array([[3.],
       [3.],
       [3.],
       [3.],
       [3.],
       [3.]])


## Indexing an array¶

Ndarrays support slicing in many of the ways you might want to access your data. The following code example shows how to read a particular element, which returns a scalar (0-dimensional) array.

[18]:

y[1, 2]

[18]:

array(0.50007904)


This example shows how to read the second and third columns from y.

[19]:

y[:, 1:3]

[19]:

array([[ 0.16642416,  0.52422976],
       [-0.68318725,  0.50007904]])


This example shows how to write to a range of elements.

[20]:

y[:, 1:3] = 2
y

[20]:

array([[-0.9364808,  2.       ,  2.       ],
       [ 0.739336 ,  2.       ,  2.       ]])


You can perform multi-dimensional slicing, which is shown in the following code example.

[21]:

y[1:2, 0:2] = 4
y

[21]:

array([[-0.9364808,  2.       ,  2.       ],
       [ 4.       ,  4.       ,  2.       ]])


## Converting between MXNet ndarrays and NumPy arrays¶

You can convert MXNet ndarrays to and from NumPy ndarrays, as shown in the following example. The converted arrays do not share memory.

[22]:

a = x.asnumpy()
(type(a), a)

[22]:

(numpy.ndarray,
 array([[3., 3., 3.],
        [3., 3., 3.]], dtype=float32))

[23]:

a = np.array(a)
(type(a), a)

[23]:

(mxnet.numpy.ndarray,
 array([[3., 3., 3.],
        [3., 3., 3.]]))


Additionally, you can move ndarrays to different devices, such as a GPU. You will dive deeper into this later, but here is an example for now.

[24]:

a.copyto(mx.gpu(0))


[24]:

array([[3., 3., 3.],
       [3., 3., 3.]], device=gpu(0))


## Next Steps¶

Ndarrays have additional features that make deep learning possible and efficient: namely, automatic differentiation with autograd, which we will discuss later, and the ability to leverage GPUs. But first, we will move up a level of abstraction and talk about building neural network layers in Step 2: Create a neural network.