Class/Object

io.smartdatalake.workflow.dataobject

HiveTableDataObject

Related Docs: object HiveTableDataObject | package dataobject

case class HiveTableDataObject(id: DataObjectId, path: Option[String] = None, partitions: Seq[String] = Seq(), analyzeTableAfterWrite: Boolean = false, dateColumnType: DateColumnType = DateColumnType.Date, schemaMin: Option[StructType] = None, table: Table, numInitialHdfsPartitions: Int = 16, saveMode: SaveMode = SaveMode.Overwrite, acl: Option[AclDef] = None, connectionId: Option[ConnectionId] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry) extends TableDataObject with CanWriteDataFrame with CanHandlePartitions with SmartDataLakeLogger with Product with Serializable

DataObject of type Hive. Provides details to access Hive tables for an Action. A hedged construction sketch follows the parameter list below.

id

unique name of this data object

path

hadoop directory for this table. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority are applied. If the DataObject is only used for reading, or if the Hive table already exists, the path can be omitted. If the Hive table already exists but with a different path, a warning is issued.

partitions

partition columns for this data object

analyzeTableAfterWrite

enable computing statistics after writing data (default = false)

dateColumnType

type of date column

schemaMin

An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.

table

hive table to be written by this output

numInitialHdfsPartitions

number of files created when writing into an empty table (otherwise the number will be derived from the existing data)

saveMode

Spark SaveMode to use when writing files; default is "overwrite"

acl

override the connection's permissions for files created in this table's Hadoop directory

connectionId

optional id of io.smartdatalake.workflow.connection.HiveTableConnection

metadata

metadata
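
For illustration, a minimal construction sketch in Scala. The import paths and the Table fields (db, name) follow the library's layout as shown on this page but should be treated as assumptions; all values (id, path, database and table names) are hypothetical.

    import io.smartdatalake.config.InstanceRegistry
    import io.smartdatalake.config.SdlConfigObject.DataObjectId
    import io.smartdatalake.workflow.dataobject.{HiveTableDataObject, Table}

    // The constructor takes an implicit InstanceRegistry (no-arg constructor assumed).
    implicit val instanceRegistry: InstanceRegistry = new InstanceRegistry()

    // Hypothetical partitioned Hive table, written with the default SaveMode.Overwrite.
    val rainfall = HiveTableDataObject(
      id = DataObjectId("rainfall-hive"),            // unique name of this data object
      path = Some("hdfs:///data/weather/rainfall"),  // may be omitted if the table already exists
      partitions = Seq("year", "month"),             // partition columns
      table = Table(db = Some("weather"), name = "rainfall"),
      numInitialHdfsPartitions = 8                   // files created when writing into an empty table
    )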

Linear Supertypes
Serializable, Serializable, Product, Equals, CanHandlePartitions, CanWriteDataFrame, TableDataObject, SchemaValidation, CanCreateDataFrame, DataObject, SmartDataLakeLogger, ParsableFromConfig[DataObject], SdlConfigObject, AnyRef, Any

Instance Constructors

  1. new HiveTableDataObject(id: DataObjectId, path: Option[String] = None, partitions: Seq[String] = Seq(), analyzeTableAfterWrite: Boolean = false, dateColumnType: DateColumnType = DateColumnType.Date, schemaMin: Option[StructType] = None, table: Table, numInitialHdfsPartitions: Int = 16, saveMode: SaveMode = SaveMode.Overwrite, acl: Option[AclDef] = None, connectionId: Option[ConnectionId] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry)


Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. val acl: Option[AclDef]

    override the connection's permissions for files created in this table's Hadoop directory

  5. val analyzeTableAfterWrite: Boolean

    enable computing statistics after writing data (default = false)

  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. val connectionId: Option[ConnectionId]

    optional id of io.smartdatalake.workflow.connection.HiveTableConnection

  9. def createEmptyPartition(partitionValues: PartitionValues)(implicit session: SparkSession): Unit

    create empty partition

    Definition Classes
    HiveTableDataObject → CanHandlePartitions
  10. final def createMissingPartitions(partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Unit

    Create empty partitions for partition values not yet existing

    Definition Classes
    CanHandlePartitions
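
    A hedged usage sketch: the PartitionValues import path and its Map-based constructor are assumptions, and rainfall is the hypothetical object from the construction sketch above.

      import io.smartdatalake.util.hdfs.PartitionValues
      import org.apache.spark.sql.SparkSession

      // Methods on this page take the SparkSession implicitly.
      implicit val session: SparkSession =
        SparkSession.builder().enableHiveSupport().master("local[*]").getOrCreate()

      // Only the partitions not yet present in the Hive table are created.
      rainfall.createMissingPartitions(Seq(
        PartitionValues(Map("year" -> "2020", "month" -> "01")),
        PartitionValues(Map("year" -> "2020", "month" -> "02"))
      ))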
  11. def createReadSchema(writeSchema: StructType)(implicit session: SparkSession): StructType

    Creates the read schema based on a given write schema. Normally this is the same, but some DataObjects can remove & add columns on read (e.g. KafkaTopicDataObject, SparkFileDataObject). In these cases we have to break the DataFrame lineage and create a dummy DataFrame in the init phase.

    Definition Classes
    CanCreateDataFrame
  12. val dateColumnType: DateColumnType

    type of date column

  13. def deletePartitions(partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Unit

    Delete given partitions. This is used to clean up partitions after they are processed.

    Definition Classes
    CanHandlePartitions
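
    Correspondingly, a sketch of cleaning up a processed partition, under the same assumptions as the sketch above.

      // Assumes rainfall, session and the PartitionValues import from the sketches above.
      rainfall.deletePartitions(Seq(PartitionValues(Map("year" -> "2019", "month" -> "12"))))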
  14. def dropTable(implicit session: SparkSession): Unit

    Definition Classes
    HiveTableDataObject → TableDataObject
  15. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  16. def factory: FromConfigFactory[DataObject]

    Returns the factory that can parse this type (that is, type CO).

    Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.

    returns

    the factory (object) for this class.

    Definition Classes
    HiveTableDataObject → ParsableFromConfig
  17. def filesystem(implicit session: SparkSession): FileSystem
  18. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  19. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  20. def getConnection[T <: Connection](connectionId: ConnectionId)(implicit registry: InstanceRegistry, ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Handle class cast exception when getting objects from instance registry

    Attributes
    protected
    Definition Classes
    DataObject
  21. def getConnectionReg[T <: Connection](connectionId: ConnectionId, registry: InstanceRegistry)(implicit ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Attributes
    protected
    Definition Classes
    DataObject
  22. def getDataFrame(partitionValues: Seq[PartitionValues] = Seq())(implicit session: SparkSession): DataFrame

    Definition Classes
    HiveTableDataObject → CanCreateDataFrame
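
    For illustration, reading the whole table versus a single partition; the partition values are hypothetical, with rainfall and session as in the sketches above.

      import io.smartdatalake.util.hdfs.PartitionValues

      // Whole table: partitionValues defaults to Seq().
      val allRows = rainfall.getDataFrame()

      // Restrict the read to one partition of the hypothetical year/month partitioning.
      val jan2020 = rainfall.getDataFrame(Seq(PartitionValues(Map("year" -> "2020", "month" -> "01"))))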
  23. def getPKduplicates(implicit session: SparkSession): DataFrame

    Definition Classes
    TableDataObject
  24. def getPKnulls(implicit session: SparkSession): DataFrame

    Definition Classes
    TableDataObject
  25. def getPKviolators(implicit session: SparkSession): DataFrame

    Definition Classes
    TableDataObject
  26. def hadoopPath(implicit session: SparkSession): Path
  27. val id: DataObjectId

    unique name of this data object

    Definition Classes
    HiveTableDataObject → DataObject → SdlConfigObject
  28. def init(partitionValues: Seq[PartitionValues] = Seq())(implicit session: SparkSession): Unit

    Initialize callback before writing data out to disk/sinks.

    Definition Classes
    CanWriteDataFrame
  29. implicit val instanceRegistry: InstanceRegistry
  30. def isDbExisting(implicit session: SparkSession): Boolean

    Definition Classes
    HiveTableDataObject → TableDataObject
  31. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  32. def isPKcandidateKey(implicit session: SparkSession): Boolean

    Definition Classes
    TableDataObject
  33. def isTableExisting(implicit session: SparkSession): Boolean

    Definition Classes
    HiveTableDataObject → TableDataObject
  34. def listPartitions(implicit session: SparkSession): Seq[PartitionValues]

    list hive table partitions

    Definition Classes
    HiveTableDataObject → CanHandlePartitions
  35. lazy val logger: Logger

    Attributes
    protected
    Definition Classes
    SmartDataLakeLogger
  36. val metadata: Option[DataObjectMetadata]

    metadata

    Definition Classes
    HiveTableDataObject → DataObject
  37. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  38. final def notify(): Unit

    Definition Classes
    AnyRef
  39. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  40. val numInitialHdfsPartitions: Int

    number of files created when writing into an empty table (otherwise the number will be derived from the existing data)

  41. val partitions: Seq[String]

    partition columns for this data object

    Definition Classes
    HiveTableDataObject → CanHandlePartitions
  42. val path: Option[String]

    hadoop directory for this table. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority are applied. If the DataObject is only used for reading, or if the Hive table already exists, the path can be omitted. If the Hive table already exists but with a different path, a warning is issued.

  43. def postRead(implicit session: SparkSession): Unit

    Runs operations after reading from DataObject

    Definition Classes
    DataObject
  44. def postWrite(implicit session: SparkSession): Unit

    Runs operations after writing to DataObject

    Definition Classes
    DataObject
  45. def preRead(implicit session: SparkSession): Unit

    Runs operations before reading from DataObject

    Definition Classes
    DataObject
  46. def preWrite(implicit session: SparkSession): Unit

    Runs operations before writing to DataObject

    Definition Classes
    HiveTableDataObject → DataObject
  47. def prepare(implicit session: SparkSession): Unit

    Prepare & test DataObject's prerequisites

    This runs during the "prepare" operation of the DAG.

    Definition Classes
    HiveTableDataObject → DataObject
  48. val saveMode: SaveMode

    Spark SaveMode to use when writing files; default is "overwrite"

  49. val schemaMin: Option[StructType]

    An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.

    Definition Classes
    HiveTableDataObject → SchemaValidation
  50. def streamingOptions: Map[String, String]

    Definition Classes
    CanWriteDataFrame
  51. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  52. var table: Table

    hive table to be written by this output

    Definition Classes
    HiveTableDataObject → TableDataObject
  53. var tableSchema: StructType

    Definition Classes
    TableDataObject
  54. def toStringShort: String

    Definition Classes
    DataObject
  55. def validateSchemaMin(df: DataFrame): Unit

    Validate the schema of a given Spark Data Frame df against schemaMin.

    df

    The data frame to validate.

    Definition Classes
    SchemaValidation
    Exceptions thrown

    SchemaViolationException if schemaMin does not validate.
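
    A sketch of how schemaMin and validateSchemaMin interact; the column names are hypothetical, with rainfall, session and instanceRegistry as in the sketches above.

      import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

      val minSchema = StructType(Seq(
        StructField("station", StringType),
        StructField("year", IntegerType)
      ))
      // Case-class copy with a minimal schema; validation then applies on read and write.
      val rainfallChecked = rainfall.copy(schemaMin = Some(minSchema))

      // Throws SchemaViolationException if the DataFrame lacks one of the minimal columns.
      rainfallChecked.validateSchemaMin(rainfallChecked.getDataFrame())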

  56. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  57. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  58. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  59. def writeDataFrame(df: DataFrame, partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Unit

    Definition Classes
    HiveTableDataObject → CanWriteDataFrame
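
    A hedged write sketch: the DataFrame content and partition values are hypothetical, with rainfall and session as in the sketches above.

      import io.smartdatalake.util.hdfs.PartitionValues
      import session.implicits._

      // The columns must include the partition columns declared on the data object.
      val df = Seq(("st-001", "2020", "01", 42.0)).toDF("station", "year", "month", "mm")

      // The partition values indicate which partitions this write affects.
      rainfall.writeDataFrame(df, Seq(PartitionValues(Map("year" -> "2020", "month" -> "01"))))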
  60. def writeStreamingDataFrame(df: DataFrame, trigger: Trigger, options: Map[String, String], checkpointLocation: String, queryName: String, outputMode: OutputMode = OutputMode.Append)(implicit session: SparkSession): StreamingQuery

    Write a Spark structured streaming DataFrame. The default implementation uses foreachBatch and this trait's writeDataFrame method to write the DataFrame. Some DataObjects will override this with specific implementations (Kafka).

    df

    The Streaming DataFrame to write

    trigger

    Trigger frequency for stream

    checkpointLocation

    location for checkpoints of streaming query

    Definition Classes
    CanWriteDataFrame
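
    Finally, a streaming sketch using the default foreachBatch-based implementation; the rate source is only a stand-in, and the checkpoint path and query name are hypothetical, with rainfall and session as in the sketches above.

      import org.apache.spark.sql.streaming.{OutputMode, Trigger}

      // Stand-in stream; a real stream would have to match the table's schema.
      val streamDf = session.readStream.format("rate").load()

      val query = rainfall.writeStreamingDataFrame(
        df = streamDf,
        trigger = Trigger.ProcessingTime("1 minute"),
        options = Map(),
        checkpointLocation = "hdfs:///checkpoints/rainfall",
        queryName = "rainfall-stream",
        outputMode = OutputMode.Append
      )
      query.awaitTermination()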
