Class/Object

io.smartdatalake.workflow.action

CustomFileAction


case class CustomFileAction(id: ActionObjectId, inputId: DataObjectId, outputId: DataObjectId, transformer: CustomFileTransformerConfig, deleteDataAfterRead: Boolean = false, filesPerPartition: Int = 10, breakFileRefLineage: Boolean = false, executionMode: Option[ExecutionMode] = None, metricsFailCondition: Option[String] = None, metadata: Option[ActionMetadata] = None)(implicit instanceRegistry: InstanceRegistry) extends FileSubFeedAction with SmartDataLakeLogger with Product with Serializable

Action to transform files between two Hadoop DataObjects. The transformation is executed in distributed mode on the Spark executors. A custom file transformer must be given; it reads a file from Hadoop and writes it back to Hadoop.
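
For illustration, a minimal sketch of such a transformer, assuming the CustomFileTransformer trait (imported here from io.smartdatalake.workflow.action.customlogic, an assumed path) takes an options map plus an input and an output stream and returns an optional exception; check the trait in your SDL version:

  import java.io.{InputStream, OutputStream}
  import scala.io.Source
  // assumed package path for the transformer trait
  import io.smartdatalake.workflow.action.customlogic.CustomFileTransformer

  // Hypothetical transformer that upper-cases the content of each file.
  class UpperCaseTransformer extends CustomFileTransformer {
    override def transform(options: Map[String, String], input: InputStream, output: OutputStream): Option[Exception] = {
      try {
        val content = Source.fromInputStream(input, "UTF-8").mkString
        output.write(content.toUpperCase.getBytes("UTF-8"))
        None // success
      } catch {
        case e: Exception => Some(e) // report the failure to the framework
      }
    }
  }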

inputId

input DataObject

outputId

output DataObject

transformer

a custom file transformer, which reads a file from HadoopFileDataObject and writes it back to another HadoopFileDataObject

deleteDataAfterRead

whether the input files should be deleted after successful processing

filesPerPartition

number of files per Spark partition

metricsFailCondition

optional Spark SQL expression evaluated as a where-clause against the dataframe of metrics. Available columns are dataObjectId, key and value. If any rows pass the where-clause, a MetricsCheckFailed exception is thrown.
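
As a hedged illustration, a condition that fails the run when nothing was written could look like the following; the metric key "records_written" is an assumption here, as the available keys depend on the DataObject type:

  // Hypothetical fail condition: throw MetricsCheckFailed if any DataObject
  // of this action reports zero written records.
  val metricsFailCondition: Option[String] =
    Some("key = 'records_written' and value = 0")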

Linear Supertypes
Serializable, Serializable, Product, Equals, FileSubFeedAction, Action, SmartDataLakeLogger, DAGNode, ParsableFromConfig[Action], SdlConfigObject, AnyRef, Any

Instance Constructors

  1. new CustomFileAction(id: ActionObjectId, inputId: DataObjectId, outputId: DataObjectId, transformer: CustomFileTransformerConfig, deleteDataAfterRead: Boolean = false, filesPerPartition: Int = 10, breakFileRefLineage: Boolean = false, executionMode: Option[ExecutionMode] = None, metricsFailCondition: Option[String] = None, metadata: Option[ActionMetadata] = None)(implicit instanceRegistry: InstanceRegistry)
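
    A minimal construction sketch, assuming ActionObjectId and DataObjectId wrap plain strings and that CustomFileTransformerConfig accepts a transformer class name (both assumptions; check the config classes of your SDL version):

      import io.smartdatalake.config.InstanceRegistry // assumed package path
      import io.smartdatalake.config.SdlConfigObject.{ActionObjectId, DataObjectId} // assumed path

      // DataObjects for the referenced ids must already be registered here.
      implicit val registry: InstanceRegistry = new InstanceRegistry()

      // Hypothetical ids and transformer class name, for illustration only.
      val action = CustomFileAction(
        id = ActionObjectId("transform-files"),
        inputId = DataObjectId("stage-files"),
        outputId = DataObjectId("integration-files"),
        transformer = CustomFileTransformerConfig(className = Some("com.example.UpperCaseTransformer")),
        filesPerPartition = 10
      )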


Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. def addRuntimeEvent(phase: ExecutionPhase, state: RuntimeEventState, msg: Option[String] = None, results: Seq[SubFeed] = Seq()): Unit

    Adds an action event.

    Definition Classes
    Action
  5. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  6. val breakFileRefLineage: Boolean

    Stop propagating input FileRefs through this action and instead get new FileRefs from the DataObject according to the SubFeed's partitionValues. This is needed to reprocess all files of a path/partition instead of the FileRefs passed from the previous Action.

    Definition Classes
    CustomFileAction → FileSubFeedAction
  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. val deleteDataAfterRead: Boolean

    Whether the input files should be deleted after successful processing.

    Definition Classes
    CustomFileAction → FileSubFeedAction
  9. def enableRuntimeMetrics(): Unit

    Enables runtime metrics.

    Note: runtime metrics are disabled by default, because they are only collected when running Actions from an ActionDAG. This is not the case for tests or other use cases. If enabled, exceptions are thrown if metrics are not found.

    Definition Classes
    Action
  10. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  11. final def exec(subFeeds: Seq[SubFeed])(implicit session: SparkSession, context: ActionPipelineContext): Seq[SubFeed]

    Action.exec implementation.

    subFeeds

    SparkSubFeeds to be processed

    returns

    processed SparkSubFeeds

    Definition Classes
    FileSubFeedAction → Action
  12. def execSubFeed(subFeed: FileSubFeed)(implicit session: SparkSession, context: ActionPipelineContext): FileSubFeed

    Executes this Action for a given FileSubFeed.

    subFeed

    subFeed to be processed (referencing files to be read)

    returns

    processed subFeed (referencing files written by this action)

    Definition Classes
    CustomFileAction → FileSubFeedAction
  13. val executionMode: Option[ExecutionMode]

    Execution mode if this Action is a start node of a DAG run.

    Definition Classes
    CustomFileAction → FileSubFeedAction
  14. def factory: FromConfigFactory[Action]

    Returns the factory that can parse this type (that is, type CO).

    Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory (a generic sketch of this pattern follows the member list below).

    returns

    the factory (object) for this class.

    Definition Classes
    CustomFileAction → ParsableFromConfig
  15. val filesPerPartition: Int


    number of files per Spark partition

  16. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  17. def getAllLatestMetrics: Map[DataObjectId, Option[ActionMetrics]]

    Definition Classes
    Action
  18. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  19. def getFinalMetrics(dataObjectId: DataObjectId): Option[ActionMetrics]

    Definition Classes
    Action
  20. def getInputDataObject[T <: DataObject](id: DataObjectId)(implicit arg0: ClassTag[T], arg1: scala.reflect.api.JavaUniverse.TypeTag[T], registry: InstanceRegistry): T

    Attributes
    protected
    Definition Classes
    Action
  21. def getLatestMetrics(dataObjectId: DataObjectId): Option[ActionMetrics]

    Definition Classes
    Action
  22. def getOutputDataObject[T <: DataObject](id: DataObjectId)(implicit arg0: ClassTag[T], arg1: scala.reflect.api.JavaUniverse.TypeTag[T], registry: InstanceRegistry): T

    Attributes
    protected
    Definition Classes
    Action
  23. def getRuntimeInfo: Option[RuntimeInfo]

    Gets the latest runtime information for this action.

    Definition Classes
    Action
  24. val id: ActionObjectId

    A unique identifier for this instance.

    Definition Classes
    CustomFileAction → Action → SdlConfigObject
  25. final def init(subFeeds: Seq[SubFeed])(implicit session: SparkSession, context: ActionPipelineContext): Seq[SubFeed]

    Action.init implementation.

    subFeeds

    SparkSubFeeds to be processed

    returns

    processed SparkSubFeeds

    Definition Classes
    FileSubFeedAction → Action
  26. def initSubFeed(subFeed: FileSubFeed)(implicit session: SparkSession, context: ActionPipelineContext): FileSubFeed

    Initializes this Action for a given FileSubFeed. Note that this only checks the prerequisites for processing and simulates the output FileRefs that would be created.

    subFeed

    subFeed to be processed (referencing files to be read)

    returns

    processed subFeed (referencing files that would be written by this action)

    Definition Classes
    CustomFileAction → FileSubFeedAction
  27. val input: HadoopFileDataObject

    Input FileRefDataObject, which implements CanCreateInputStream.

    Definition Classes
    CustomFileAction → FileSubFeedAction
  28. val inputId: DataObjectId

    input DataObject

  29. val inputs: Seq[HadoopFileDataObject]

    Input DataObjects. To be implemented by subclasses.

    Definition Classes
    CustomFileAction → Action
  30. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  31. lazy val logger: Logger

    Attributes
    protected
    Definition Classes
    SmartDataLakeLogger
  32. val metadata: Option[ActionMetadata]

    Additional metadata for the Action.

    Definition Classes
    CustomFileAction → Action
  33. val metricsFailCondition: Option[String]

    Optional Spark SQL expression evaluated as a where-clause against the dataframe of metrics. Available columns are dataObjectId, key and value. If any rows pass the where-clause, a MetricsCheckFailed exception is thrown.

    Definition Classes
    CustomFileAction → Action
  34. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  35. def nodeId: String

    Provides an implementation of the DAG node id.

    Definition Classes
    Action → DAGNode
  36. final def notify(): Unit

    Definition Classes
    AnyRef
  37. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  38. def onRuntimeMetrics(dataObjectId: Option[DataObjectId], metrics: ActionMetrics): Unit

    Definition Classes
    Action
  39. val output: HadoopFileDataObject

    Output FileRefDataObject, which implements CanCreateOutputStream.

    Definition Classes
    CustomFileAction → FileSubFeedAction
  40. val outputId: DataObjectId


    output DataObject

  41. val outputs: Seq[HadoopFileDataObject]

    Output DataObjects. To be implemented by subclasses.

    Definition Classes
    CustomFileAction → Action
  42. final def postExec(inputSubFeeds: Seq[SubFeed], outputSubFeeds: Seq[SubFeed])(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Executes operations needed after executing an action. In this step, any operations on input or output DataObjects needed after the main task are executed, e.g. JdbcTableDataObject's postWriteSql or CopyAction's deleteInputData.

    Definition Classes
    FileSubFeedAction → Action
  43. def postExecSubFeed(inputSubFeed: SubFeed, outputSubFeed: SubFeed)(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Definition Classes
    FileSubFeedAction
  44. def preExec(subFeeds: Seq[SubFeed])(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Executes operations needed before executing an action. In this step, any operations on input or output DataObjects needed before the main task are executed, e.g. JdbcTableDataObject's preWriteSql.

    Definition Classes
    Action
  45. def prepare(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Prepares DataObject prerequisites. In this step preconditions are prepared and tested:
      - connections can be created
      - needed structures exist, e.g. Kafka topic or JDBC table

    This runs during the "prepare" phase of the DAG.

    Definition Classes
    FileSubFeedAction → Action
  46. def recursiveInputs: Seq[FileRefDataObject with CanCreateInputStream]

    Recursive inputs are not supported on FileSubFeeds, so an empty Seq is set.

    Definition Classes
    FileSubFeedAction → Action
  47. def reset(): Unit

    Resets the runtime state of this Action. This is mainly used for testing.

    Definition Classes
    Action
  48. def setSparkJobMetadata(operation: Option[String] = None)(implicit session: SparkSession): Unit

    Sets the Spark job description for better traceability in the Spark UI.

    Note: This sets Spark local properties, which are propagated to the respective executor tasks. We rely on this to match metrics back to Actions and DataObjects. As writing to a DataObject on the driver happens uninterrupted in the same exclusive thread, this is suitable.

    operation

    phase description (be short...)

    Definition Classes
    Action
  49. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  50. final def toString(): String

    This is displayed in the ASCII graph visualization.

    Definition Classes
    Action → AnyRef → Any
  51. def toStringMedium: String

    Definition Classes
    Action
  52. def toStringShort: String

    Definition Classes
    Action
  53. val transformer: CustomFileTransformerConfig


    a custom file transformer, which reads a file from HadoopFileDataObject and writes it back to another HadoopFileDataObject

  54. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  55. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  56. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
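
As referenced in the factory member above, a generic, self-contained sketch of the companion-object factory pattern; Factory and Parsable are stand-ins for illustration, not SDL's actual FromConfigFactory and ParsableFromConfig types:

  // Stand-in types illustrating the pattern; not SDL's actual API.
  trait Factory[A] { def fromConfig(config: Map[String, String]): A }
  trait Parsable[A] { def factory: Factory[A] }

  case class Greeting(text: String) extends Parsable[Greeting] {
    // returns the companion object, as recommended in the factory docs above
    override def factory: Factory[Greeting] = Greeting
  }

  // The companion object implements the factory and parses an instance
  // out of configuration values.
  object Greeting extends Factory[Greeting] {
    override def fromConfig(config: Map[String, String]): Greeting =
      Greeting(config.getOrElse("text", "hello"))
  }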
