Class org.apache.mxnet.module.DataParallelExecutorGroup

class DataParallelExecutorGroup extends AnyRef

DataParallelExecutorGroup is a group of executors that lives on a group of devices. This is a helper class used to implement data parallelism. Each mini-batch will be split and run on the devices.
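
A minimal hedged sketch of how such a group is typically driven during training, using only the methods documented below. execGroup, trainIter and metric are placeholder names; constructing and binding the group is outside the scope of this page.

    import org.apache.mxnet._
    import org.apache.mxnet.module.DataParallelExecutorGroup

    def trainOneEpoch(execGroup: DataParallelExecutorGroup,
                      trainIter: DataIter,
                      metric: EvalMetric): Unit = {
      trainIter.reset()
      while (trainIter.hasNext) {
        val batch = trainIter.next()
        execGroup.forward(batch, isTrain = Some(true))  // mini-batch split across devices
        execGroup.backward()                            // per-device gradients
        execGroup.updateMetric(metric, batch.label)     // accumulate the evaluation metric
      }
    }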

Linear Supertypes
AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes: AnyRef → Any
  2. final def ##(): Int

    Definition Classes: AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes: AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes: Any
  5. def backward(outGrads: Array[NDArray] = null): Unit

    Run backward on all devices. A call to backward should follow a call to forward. Backward cannot be called unless the group is bound for training.

    outGrads
    Gradient on the outputs to be propagated back. This parameter is only needed when bind is called on outputs that are not a loss function.
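
    Example – a minimal hedged sketch of a forward/backward step where the bound outputs are not a loss, so the head gradients must be supplied. execGroup, batch and computeHeadGrads are placeholder names, not part of this API.

      execGroup.forward(batch, isTrain = Some(true))
      // one head-gradient NDArray per output, shaped like that output
      val headGrads: Array[NDArray] = computeHeadGrads(execGroup.getOutputsMerged())
      execGroup.backward(outGrads = headGrads)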

  6. def bindExec(dataShapes: IndexedSeq[DataDesc], labelShapes: Option[IndexedSeq[DataDesc]], sharedGroup: Option[DataParallelExecutorGroup], reshape: Boolean = false): Unit

    Bind executors on their respective devices.

    dataShapes
    DataDesc for input data.
    labelShapes
    DataDesc for input labels.
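
    Example – a minimal hedged sketch of binding for 4-D image data and 1-D labels. execGroup is a placeholder instance, and the DataDesc constructor is assumed to accept a name and a Shape with usable defaults for its remaining arguments.

      val dataDescs  = IndexedSeq(new DataDesc("data", Shape(32, 3, 224, 224)))
      val labelDescs = Some(IndexedSeq(new DataDesc("softmax_label", Shape(32))))
      execGroup.bindExec(dataDescs, labelDescs, sharedGroup = None)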

  7. def clone(): AnyRef

    Attributes: protected[java.lang]
    Definition Classes: AnyRef
    Annotations: @throws( ... )
  8. var dataShapes: IndexedSeq[DataDesc]

    Should be a list of (name, shape) tuples for the shapes of the data. Note that the order is important and should match the order in which the DataIter provides the data.

  9. final def eq(arg0: AnyRef): Boolean

    Definition Classes: AnyRef
  10. def equals(arg0: Any): Boolean

    Definition Classes: AnyRef → Any
  11. def finalize(): Unit

    Attributes: protected[java.lang]
    Definition Classes: AnyRef
    Annotations: @throws( classOf[java.lang.Throwable] )
  12. def forward(dataBatch: DataBatch, isTrain: Option[Boolean] = None): Unit

    Split dataBatch according to workload and run forward on each device.

    isTrain
    A hint for the backend indicating whether this is a training pass. Defaults to None, in which case the group's own training flag is used.
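
    Example – a minimal hedged sketch of an inference pass; isTrain = Some(false) disables training-only behaviour such as dropout. execGroup and batch are placeholder names.

      execGroup.forward(batch, isTrain = Some(false))
      val outputs = execGroup.getOutputsMerged()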

  13. def getBatchSize: Int

  14. final def getClass(): Class[_]

    Definition Classes: AnyRef → Any
  15. def getInputGrads(): IndexedSeq[IndexedSeq[NDArray]]

    Get the gradients with respect to the inputs, computed in the previous backward computation.

    returns
    When data parallelism is used, the gradients are collected from multiple devices. The result looks like [ [grad1_dev1, grad1_dev2], [grad2_dev1, grad2_dev2] ]; those NDArrays may live on different devices.
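
    Example – a minimal hedged sketch of inspecting per-device input gradients after backward(); execGroup is a placeholder name. The outer index is the input, the inner index the device.

      val grads = execGroup.getInputGrads()
      for ((perDevice, i) <- grads.zipWithIndex; (g, d) <- perDevice.zipWithIndex) {
        println(s"input $i, device $d: shape ${g.shape}")
      }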

  16. def getInputGradsMerged(): IndexedSeq[NDArray]

    Get the gradients with respect to the inputs, computed in the previous backward computation.

    returns
    When data parallelism is used, the gradients are merged from multiple devices so that they look as if they came from a single executor. The result looks like [grad1, grad2].

  17. def getOutputShapes: IndexedSeq[(String, Shape)]

  18. def getOutputs(): IndexedSeq[IndexedSeq[NDArray]]

    Get the outputs of the previous forward computation.

    returns
    When data parallelism is used, the outputs are collected from multiple devices. The result looks like [ [out1_dev1, out1_dev2], [out2_dev1, out2_dev2] ]; those NDArrays may live on different devices.

  19. def getOutputsMerged(): IndexedSeq[NDArray]

    Get the outputs of the previous forward computation.

    returns
    When data parallelism is used, the outputs are merged from multiple devices so that they look as if they came from a single executor. The result looks like [out1, out2].
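
    Example – a minimal hedged sketch of collecting merged predictions after a forward pass; execGroup is a placeholder name, and the first output is assumed to be a softmax.

      val merged: IndexedSeq[NDArray] = execGroup.getOutputsMerged()
      val probabilities = merged.head  // predictions for the whole batch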

  20. def getParams(argParams: Map[String, NDArray], auxParams: Map[String, NDArray]): Unit

    Copy data from each executor to argParams and auxParams.

    argParams
    Target parameter arrays.
    auxParams
    Target auxiliary arrays. Note that this function updates the NDArrays in argParams and auxParams in place.
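
    Example – a minimal hedged sketch of pulling the current weights out of the executors; execGroup, argParams and auxParams are placeholder names, and the maps are assumed to already hold NDArrays of the correct shapes (e.g. the initialized parameters).

      execGroup.getParams(argParams, auxParams)
      // the NDArrays inside argParams and auxParams are now updated in place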

  21. def hashCode(): Int

    Definition Classes: AnyRef → Any
  22. def installMonitor(monitor: Monitor): Unit

  23. final def isInstanceOf[T0]: Boolean

    Definition Classes: Any
  24. var labelShapes: Option[IndexedSeq[DataDesc]]

    Should be a list of (name, shape) tuples for the shapes of the labels. Note that the order is important and should match the order in which the DataIter provides the labels.

  25. final def ne(arg0: AnyRef): Boolean

    Definition Classes: AnyRef
  26. final def notify(): Unit

    Definition Classes: AnyRef
  27. final def notifyAll(): Unit

    Definition Classes: AnyRef
  28. def reshape(dataShapes: IndexedSeq[DataDesc], labelShapes: Option[IndexedSeq[DataDesc]]): Unit

    Reshape the executors to fit new data and label shapes (for example, a different batch size).
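
    Example – a minimal hedged sketch of rebinding for a smaller final batch without re-creating the group; execGroup is a placeholder name.

      val newData  = IndexedSeq(new DataDesc("data", Shape(8, 3, 224, 224)))
      val newLabel = Some(IndexedSeq(new DataDesc("softmax_label", Shape(8))))
      execGroup.reshape(newData, newLabel)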

  29. def setParams(argParams: Map[String, NDArray], auxParams: Map[String, NDArray], allowExtra: Boolean = false): Unit

    Assign (i.e. copy) parameters to all the executors.

    argParams
    A dictionary mapping names to NDArray parameters.
    auxParams
    A dictionary mapping names to NDArray auxiliary variables.
    allowExtra
    Whether to allow extra parameters that are not needed by the symbol. If true, no error is thrown when argParams or auxParams contain extra parameters that the executors do not need.
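
    Example – a minimal hedged sketch of pushing initialized parameters to every executor; execGroup, initArgs and initAux are placeholder names.

      execGroup.setParams(initArgs, initAux, allowExtra = true)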

  30. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes: AnyRef
  31. def toString(): String

    Definition Classes: AnyRef → Any
  32. def updateMetric(evalMetric: EvalMetric, labels: IndexedSeq[NDArray]): Unit

    Accumulate performance according to evalMetric on all devices.

    evalMetric
    The metric used for evaluation.
    labels
    Typically comes from the label of a DataBatch.
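
    Example – a minimal hedged sketch of accumulating accuracy over one batch after a forward pass; execGroup and batch are placeholder names, and org.apache.mxnet.Accuracy is assumed to be available as an EvalMetric.

      val acc = new Accuracy()
      execGroup.forward(batch, isTrain = Some(false))
      execGroup.updateMetric(acc, batch.label)
      println(acc.get)  // (metric names, metric values)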

  33. final def wait(): Unit

    Definition Classes: AnyRef
    Annotations: @throws( ... )
  34. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes: AnyRef
    Annotations: @throws( ... )
  35. final def wait(arg0: Long): Unit

    Definition Classes: AnyRef
    Annotations: @throws( ... )
