Class

org.apache.mxnet.module

BucketingModule


class BucketingModule extends BaseModule

This module helps to deal efficiently with varying-length inputs: a separate executor is bound for each bucket key, while all buckets share the same set of parameters.

Linear Supertypes
BaseModule, AnyRef, Any

Instance Constructors

  1. new BucketingModule(symGen: (AnyRef) ⇒ (Symbol, IndexedSeq[String], IndexedSeq[String]), defaultBucketKey: AnyRef, contexts: Array[Context] = Context.cpu(), workLoadList: Option[IndexedSeq[Float]] = None, fixedParamNames: Option[Set[String]] = None)

    symGen

    A function that, when called with a bucket key, returns a triple (symbol, dataNames, labelNames).

    defaultBucketKey

    The key for the default bucket.

    contexts

    Default is Context.cpu().

    workLoadList

    Default None, indicating uniform workload.

    fixedParamNames

    Default None, indicating no network parameters are fixed.
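
    A minimal construction sketch (not taken from the original docs): the network-building helper, the Int bucket keys, and the data/label names below are illustrative assumptions.

      import org.apache.mxnet.{Context, Symbol}
      import org.apache.mxnet.module.BucketingModule

      // Hypothetical helper: build a Symbol unrolled to `seqLen` steps.
      def buildSymbol(seqLen: Int): Symbol = ???  // supply your own network here

      // symGen: bucket key => (symbol, dataNames, labelNames)
      val symGen = (key: AnyRef) => {
        val seqLen = key.asInstanceOf[Int]
        (buildSymbol(seqLen), IndexedSeq("data"), IndexedSeq("softmax_label"))
      }

      val mod = new BucketingModule(
        symGen = symGen,
        defaultBucketKey = Int.box(32),      // e.g. the longest sequence length
        contexts = Array(Context.cpu()))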

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. def backward(outGrads: Array[NDArray] = null): Unit

    Backward computation.

    outGrads

    Gradient on the outputs to be propagated back. This parameter is only needed when bind is called on outputs that are not a loss function.

    Definition Classes
    BucketingModule → BaseModule
  6. def bind(dataShapes: IndexedSeq[DataDesc], labelShapes: Option[IndexedSeq[DataDesc]] = None, forTraining: Boolean = true, inputsNeedGrad: Boolean = false, forceRebind: Boolean = false, sharedModule: Option[BaseModule] = None, gradReq: String = "write"): Unit

    Bind the symbols to construct executors. This is necessary before one can perform computation with the module.

    dataShapes

    Typically is dataIter.provideData.

    labelShapes

    Typically is dataIter.provideLabel.

    forTraining

    Default is true. Whether the executors should be bound for training.

    inputsNeedGrad

    Default is false. Whether the gradients to the input data need to be computed. Typically this is not needed, but it might be when implementing composition of modules.

    forceRebind

    Default is false. This function does nothing if the executors are already bound; if true, the executors will be forced to rebind.

    sharedModule

    Default is None. This is used in bucketing. When not None, the shared module essentially corresponds to a different bucket -- a module with a different symbol but with the same sets of parameters (e.g. unrolled RNNs with different lengths).

    gradReq

    Requirement for gradient accumulation (globally). Can be 'write', 'add', or 'null' (defaults to 'write').

    Definition Classes
    BucketingModule → BaseModule
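
    A hedged sketch of binding the module built above for training. The shape values, the "NT"/"N" layout strings, and the DataDesc constructor arguments (explicit dtype and layout) are illustrative assumptions.

      import org.apache.mxnet.{DataDesc, DType, Shape}

      // Shapes for the default bucket; names must match what symGen returns.
      val dataShapes  = IndexedSeq(new DataDesc("data", Shape(32, 32), DType.Float32, "NT"))
      val labelShapes = Some(IndexedSeq(new DataDesc("softmax_label", Shape(32), DType.Float32, "N")))

      mod.bind(dataShapes, labelShapes, forTraining = true)
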
  7. def bind(forTraining: Boolean, inputsNeedGrad: Boolean, forceRebind: Boolean, dataShape: DataDesc*): Unit

    Bind the symbols to construct executors. This is necessary before one can perform computation with the module.

    forTraining

    Default is true. Whether the executors should be bound for training.

    inputsNeedGrad

    Default is false. Whether the gradients to the input data need to be computed. Typically this is not needed, but it might be when implementing composition of modules.

    forceRebind

    Default is false. This function does nothing if the executors are already bound; if true, the executors will be forced to rebind.

    dataShape

    Typically is dataIter.provideData.

    Definition Classes
    BaseModule
    Annotations
    @varargs()
  8. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  9. def dataNames: IndexedSeq[String]

    Definition Classes
    BucketingModule → BaseModule
  10. def dataShapes: IndexedSeq[DataDesc]

    Definition Classes
    BucketingModule → BaseModule
  11. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  12. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  13. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  14. def fit(trainData: DataIter, evalData: Option[DataIter] = None, numEpoch: Int = 1, fitParams: FitParams = new FitParams): Unit

    Train the module parameters.

    evalData

    If not None, will be used as validation set and evaluate the performance after each epoch.

    numEpoch

    Number of epochs to run training.

    fitParams

    Extra parameters for training.

    Definition Classes
    BaseModule
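
    A minimal usage sketch; trainIter and valIter stand for DataIters you supply (they are not defined here), and the default FitParams are used.

      // Train for 10 epochs, evaluating on valIter after each epoch.
      mod.fit(trainIter, evalData = Some(valIter), numEpoch = 10)
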
  15. def forward(dataBatch: DataBatch, isTrain: Option[Boolean] = None): Unit

    Forward computation.

    dataBatch

    Input data.

    isTrain

    Default is None, which means isTrain takes the value of the forTraining flag given to bind.

    Definition Classes
    BucketingModule → BaseModule
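
    A sketch of a manual training loop over one epoch, assuming the module has been bound and initParams/initOptimizer have been called, and that trainIter yields batches whose bucket keys match symGen; Accuracy is used as an illustrative metric.

      import org.apache.mxnet.Accuracy

      val metric = new Accuracy()
      trainIter.reset()
      while (trainIter.hasNext) {
        val batch = trainIter.next()
        mod.forward(batch)                    // isTrain defaults to the forTraining flag from bind
        mod.updateMetric(metric, batch.label) // accumulate the metric on this batch's outputs
        mod.backward()
        mod.update()                          // apply the optimizer to the accumulated gradients
      }
      val (names, values) = metric.get
      println(names.zip(values).mkString(", "))
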
  16. def forward(dataBatch: DataBatch, isTrain: Boolean): Unit

    Forward computation.

    dataBatch

    a batch of data.

    isTrain

    Whether it is for training or not.

    Definition Classes
    BaseModule
  17. def forwardBackward(dataBatch: DataBatch): Unit

    Definition Classes
    BaseModule
  18. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  19. def getInputGrads(): IndexedSeq[IndexedSeq[NDArray]]

    Get the gradients to the inputs, computed in the previous backward computation.

    returns

    In the case when data-parallelism is used, the grads will be collected from multiple devices. The results will look like [ [grad1_dev1, grad1_dev2], [grad2_dev1, grad2_dev2] ]; these NDArrays might live on different devices.

    Definition Classes
    BucketingModule → BaseModule
  20. def getInputGradsMerged(): IndexedSeq[NDArray]

    Get the gradients to the inputs, computed in the previous backward computation.

    returns

    In the case when data-parallelism is used, the grads will be merged from multiple devices, as if they came from a single executor. The results will look like [grad1, grad2].

    Definition Classes
    BucketingModule → BaseModule
  21. def getOutputs(): IndexedSeq[IndexedSeq[NDArray]]

    Get outputs of the previous forward computation.

    returns

    In the case when data-parallelism is used, the outputs will be collected from multiple devices. The results will look like [ [out1_dev1, out1_dev2], [out2_dev1, out2_dev2] ]; these NDArrays might live on different devices.

    Definition Classes
    BucketingModule → BaseModule
  22. def getOutputsMerged(): IndexedSeq[NDArray]

    Get outputs of the previous forward computation.

    returns

    In the case when data-parallelism is used, the outputs will be merged from multiple devices, as if they came from a single executor. The results will look like [out1, out2].

    Definition Classes
    BucketingModule → BaseModule
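
    A small sketch of reading merged outputs after an inference-mode forward pass; `batch` is assumed to be a DataBatch you already have, and toArray copies the first output to host memory.

      mod.forward(batch, Some(false))         // inference-mode forward
      val outputs = mod.getOutputsMerged()    // IndexedSeq[NDArray], one entry per network output
      val firstOut: Array[Float] = outputs.head.toArray
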
  23. def getParams: (Map[String, NDArray], Map[String, NDArray])

    Get the current parameters: (argParams, auxParams), each a dictionary mapping parameter names to NDArray values.

    returns

    (argParams, auxParams), a pair of dictionaries mapping names to values.

    Definition Classes
    BucketingModule → BaseModule
  24. def getSymbol: Symbol

    Definition Classes
    BucketingModule → BaseModule
  25. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  26. def initOptimizer(kvstore: String = "local", optimizer: Optimizer = new SGD(), resetOptimizer: Boolean = true, forceInit: Boolean = false): Unit

    Install and initialize optimizers.

    resetOptimizer

    Default is true, indicating whether rescaleGrad and idx2name should be set for the optimizer according to the executorGroup.

    forceInit

    Default is false, indicating whether to force re-initializing the optimizer when one is already installed.

    Definition Classes
    BucketingModule → BaseModule
  27. def initParams(initializer: Initializer = new Uniform(0.01f), argParams: Map[String, NDArray] = null, auxParams: Map[String, NDArray] = null, allowMissing: Boolean = false, forceInit: Boolean = false, allowExtra: Boolean = false): Unit

    Initialize the parameters and auxiliary states.

    initializer

    Called to initialize parameters if needed.

    argParams

    If not None, should be a dictionary of existing argParams. Initialization will be copied from it.

    auxParams

    If not None, should be a dictionary of existing auxParams. Initialization will be copied from it.

    allowMissing

    If true, params could contain missing values, and the initializer will be called to fill those missing params.

    forceInit

    If true, will force re-initialization even if already initialized.

    allowExtra

    Whether to allow extra parameters that are not needed by the symbol. If true, no error is thrown when argParams or auxParams contain extra parameters that are not needed by the executor.

    Definition Classes
    BucketingModule → BaseModule
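
    A typical post-bind setup sketch using the defaults shown in the signatures above; the Uniform scale is illustrative, and the import paths for Uniform and SGD are assumptions.

      import org.apache.mxnet.Uniform
      import org.apache.mxnet.optimizer.SGD

      mod.initParams(new Uniform(0.01f))        // initialize parameters and aux states
      mod.initOptimizer(optimizer = new SGD())  // then install an optimizer
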
  28. def installMonitor(monitor: Monitor): Unit

    Definition Classes
    BucketingModule → BaseModule
  29. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  30. def labelShapes: IndexedSeq[DataDesc]

    A list of (name, shape) pairs specifying the label inputs to this module. If this module does not accept labels -- either it is a module without a loss function, or it is not bound for training -- then this should return an empty list.

    Definition Classes
    BucketingModule → BaseModule
  31. def loadParams(fname: String): Unit

    Load model parameters from file.

    fname

    Path to input param file.

    Definition Classes
    BaseModule
    Annotations
    @throws( classOf[IOException] )
    Exceptions thrown

    IOException if param file is invalid

  32. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  33. final def notify(): Unit

    Definition Classes
    AnyRef
  34. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  35. def outputNames: IndexedSeq[String]

    Definition Classes
    BucketingModule → BaseModule
  36. def outputShapes: IndexedSeq[(String, Shape)]

    Definition Classes
    BucketingModule → BaseModule
  37. def predict(evalData: DataIter, numBatch: Int = -1, reset: Boolean = true): IndexedSeq[NDArray]

    Run prediction and collect the outputs.

    evalData

    The DataIter to run inference on.

    numBatch

    Default is -1, indicating running all the batches in the data iterator.

    reset

    Default is true, indicating whether the data iterator should be reset before prediction starts.

    returns

    The return value will be a list [out1, out2, out3]. The concatenation process will be like

    outputBatches = [
      [a1, a2, a3], // batch a
      [b1, b2, b3]  // batch b
    ]
    result = [
      NDArray, // [a1, b1]
      NDArray, // [a2, b2]
      NDArray, // [a3, b3]
    ]

    Where each element is the concatenation of the outputs for all the mini-batches.

    Definition Classes
    BaseModule
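
    A brief usage sketch; evalIter stands for a DataIter you supply.

      // Run inference over the iterator; one NDArray per network output,
      // each concatenated across batches as described above.
      val predictions = mod.predict(evalIter)
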
  38. def predict(batch: DataBatch): IndexedSeq[NDArray]

    Definition Classes
    BaseModule
  39. def predictEveryBatch(evalData: DataIter, numBatch: Int = -1, reset: Boolean = true): IndexedSeq[IndexedSeq[NDArray]]

    Run prediction and collect the outputs.

    numBatch

    Default is -1, indicating running all the batches in the data iterator.

    reset

    Default is true, indicating whether the data iterator should be reset before prediction starts.

    returns

    The return value will be a nested list like [ [out1_batch1, out2_batch1, ...], [out1_batch2, out2_batch2, ...] ] This mode is useful because in some cases (e.g. bucketing), the module does not necessarily produce the same number of outputs.

    Definition Classes
    BaseModule
  40. def prepare(dataBatch: DataBatch): Unit

    Prepares a data batch for forward.

    dataBatch

    Input data.

  41. def saveParams(fname: String): Unit

    Save model parameters to file.

    fname

    Path to output param file.

    Definition Classes
    BaseModule
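
    A save/restore round-trip sketch; the file name is illustrative.

      mod.saveParams("bucketing-module.params")
      // ... later, on a module constructed with the same symGen / defaultBucketKey:
      mod.loadParams("bucketing-module.params")
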
  42. def score(evalData: DataIter, evalMetric: EvalMetric, numBatch: Int = Integer.MAX_VALUE, batchEndCallback: Option[BatchEndCallback] = None, scoreEndCallback: Option[BatchEndCallback] = None, reset: Boolean = true, epoch: Int = 0): EvalMetric

    Run prediction on evalData and evaluate the performance according to evalMetric.

    evalData

    The DataIter to evaluate on.

    evalMetric

    The EvalMetric to accumulate the evaluation results into.

    numBatch

    Number of batches to run. Default is Integer.MAX_VALUE, indicating run until the DataIter finishes.

    batchEndCallback

    Could also be a list of functions.

    reset

    Default is true, indicating whether evalData should be reset before evaluation starts.

    epoch

    Default is 0. For compatibility, this will be passed to callbacks (if any). During training, this will correspond to the training epoch number.

    Definition Classes
    BaseModule
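
    A minimal evaluation sketch; valIter stands for a DataIter you supply, and Accuracy is an illustrative metric.

      import org.apache.mxnet.Accuracy

      val metric = mod.score(valIter, new Accuracy())
      val (names, values) = metric.get
      println(names.zip(values).mkString(", "))
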
  43. def setParams(argParams: Map[String, NDArray], auxParams: Map[String, NDArray], allowMissing: Boolean = false, forceInit: Boolean = true, allowExtra: Boolean = false): Unit

    Assign parameter and aux state values.

    argParams

    Dictionary of name to value (NDArray) mapping.

    auxParams

    Dictionary of name to value (NDArray) mapping.

    allowMissing

    If true, params could contain missing values, and the initializer will be called to fill those missing params.

    forceInit

    If true, will force re-initialization even if already initialized.

    allowExtra

    Whether to allow extra parameters that are not needed by the symbol. If true, no error is thrown when argParams or auxParams contain extra parameters that are not needed by the executor.

    Definition Classes
    BucketingModule → BaseModule
  44. def switchBucket(bucketKey: AnyRef, dataShapes: IndexedSeq[DataDesc], labelShapes: Option[IndexedSeq[DataDesc]] = None): Unit

    Switches to a different bucket. This will change this._currModule.

    bucketKey

    The key of the target bucket.

    dataShapes

    Typically is dataIter.provideData.

    labelShapes

    Typically is dataIter.provideLabel.
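
    An explicit bucket-switch sketch for a hypothetical bucket key of 16; the shapes, names, and DataDesc arguments are illustrative assumptions (during normal training, forward handles the switch based on the batch's bucket key).

      import org.apache.mxnet.{DataDesc, DType, Shape}

      mod.switchBucket(Int.box(16),
        dataShapes  = IndexedSeq(new DataDesc("data", Shape(32, 16), DType.Float32, "NT")),
        labelShapes = Some(IndexedSeq(new DataDesc("softmax_label", Shape(32), DType.Float32, "N"))))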

  45. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  46. def toString(): String

    Definition Classes
    AnyRef → Any
  47. def update(): Unit

    Definition Classes
    BucketingModule → BaseModule
  48. def updateMetric(evalMetric: EvalMetric, labels: IndexedSeq[NDArray]): Unit

    Evaluate and accumulate evaluation metric on outputs of the last forward computation.

    Definition Classes
    BucketingModule → BaseModule
  49. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  50. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  51. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
