org.apache.spark.sql

TableValuedFunction

abstract class TableValuedFunction extends AnyRef

Interface for invoking table-valued functions in Spark SQL.
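
A minimal usage sketch (assuming the instance is obtained from an active SparkSession via spark.tvf, the accessor that exposes this interface, and that column helpers come from org.apache.spark.sql.functions):

  import org.apache.spark.sql.functions.{array, lit}

  // Invoke a table-valued function through the session's tvf accessor:
  // one output row per element of the literal array, in a column named col.
  val rows = spark.tvf.explode(array(lit(1), lit(2), lit(3)))
  rows.show()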

Source
TableValuedFunction.scala
Since

4.0.0

Linear Supertypes
AnyRef, Any

Instance Constructors

  1. new TableValuedFunction()

Abstract Value Members

  1. abstract def collations(): Dataset[Row]

    Gets all of the Spark SQL string collations.

    Since

    4.0.0
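
    Example (a minimal sketch, assuming spark.tvf exposes this interface on an active SparkSession):

      // Lists the string collations supported by Spark SQL.
      spark.tvf.collations().show()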

  2. abstract def explode(collection: Column): Dataset[Row]

    Creates a DataFrame containing a new row for each element in the given array or map column. Uses the default column name col for elements in the array and key and value for elements in the map unless specified otherwise.

    Since

    4.0.0
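
    Example (a minimal sketch, assuming spark.tvf and helpers from org.apache.spark.sql.functions):

      import org.apache.spark.sql.functions.{array, lit}
      // Three rows, one per element, in a column named col.
      spark.tvf.explode(array(lit(1), lit(2), lit(3))).show()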

  3. abstract def explode_outer(collection: Column): Dataset[Row]

    Creates a DataFrame containing a new row for each element in the given array or map column. Uses the default column name col for elements in the array and key and value for elements in the map unless specified otherwise. Unlike explode, if the array/map is null or empty then null is produced.

    Since

    4.0.0
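
    Example (a minimal sketch, assuming spark.tvf):

      import org.apache.spark.sql.functions.typedLit
      // The array is empty, so a single row with a null col is produced
      // (explode would produce no rows here).
      spark.tvf.explode_outer(typedLit(Seq.empty[Int])).show()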

  4. abstract def inline(input: Column): Dataset[Row]

    Creates a DataFrame containing a new row for each element in the given array of structs.

    Since

    4.0.0
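
    Example (a minimal sketch, assuming spark.tvf):

      import org.apache.spark.sql.functions.{array, lit, struct}
      // Two rows; the struct fields become the output columns.
      spark.tvf.inline(array(struct(lit(1), lit("a")), struct(lit(2), lit("b")))).show()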

  5. abstract def inline_outer(input: Column): Dataset[Row]

    Creates a DataFrame containing a new row for each element in the given array of structs. Unlike inline, if the array is null or empty then null is produced for each nested column.

    Since

    4.0.0
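
    Example (a minimal sketch, assuming spark.tvf):

      import org.apache.spark.sql.functions.typedLit
      // The array of structs is empty, so one row of nulls is produced
      // (inline would produce no rows here).
      spark.tvf.inline_outer(typedLit(Seq.empty[(Int, String)])).show()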

  6. abstract def json_tuple(input: Column, fields: Column*): Dataset[Row]

    Creates a DataFrame containing a new row for a json column according to the given field names.

    Annotations
    @varargs()
    Since

    4.0.0
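
    Example (a minimal sketch, assuming spark.tvf):

      import org.apache.spark.sql.functions.lit
      // One row holding the values extracted for fields a and b.
      spark.tvf.json_tuple(lit("""{"a": 1, "b": "x"}"""), lit("a"), lit("b")).show()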

  7. abstract def posexplode(collection: Column): Dataset[Row]

    Creates a DataFrame containing a new row for each element with position in the given array or map column. Uses the default column name pos for position, and col for elements in the array and key and value for elements in the map unless specified otherwise.

    Since

    4.0.0
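
    Example (a minimal sketch, assuming spark.tvf):

      import org.apache.spark.sql.functions.{array, lit}
      // Three rows with columns pos (0, 1, 2) and col ("a", "b", "c").
      spark.tvf.posexplode(array(lit("a"), lit("b"), lit("c"))).show()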

  8. abstract def posexplode_outer(collection: Column): Dataset[Row]

    Creates a DataFrame containing a new row for each element with position in the given array or map column. Uses the default column name pos for position, and col for elements in the array and key and value for elements in the map unless specified otherwise. Unlike posexplode, if the array/map is null or empty then the row (null, null) is produced.

    Since

    4.0.0
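
    Example (a minimal sketch, assuming spark.tvf):

      import org.apache.spark.sql.functions.lit
      // A null map still yields one row, with null pos, key and value
      // (posexplode would produce no rows here).
      spark.tvf.posexplode_outer(lit(null).cast("map<string,int>")).show()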

  9. abstract def range(start: Long, end: Long, step: Long, numPartitions: Int): Dataset[Long]

    Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with the given step value, distributed over the specified number of partitions. (A combined usage example for all range overloads appears after the last overload below.)

    Since

    4.0.0

  10. abstract def range(start: Long, end: Long, step: Long): Dataset[Long]

    Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value.

    Since

    4.0.0

  11. abstract def range(start: Long, end: Long): Dataset[Long]

    Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.

    Since

    4.0.0

  12. abstract def range(end: Long): Dataset[Long]

    Creates a Dataset with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.

    Since

    4.0.0
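
    Example covering the range overloads above (a minimal sketch, assuming spark.tvf):

      spark.tvf.range(3).show()           // ids 0, 1, 2
      spark.tvf.range(2, 5).show()        // ids 2, 3, 4
      spark.tvf.range(0, 10, 3).show()    // ids 0, 3, 6, 9
      spark.tvf.range(0, 10, 3, 2).show() // same ids, spread over 2 partitions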

  13. abstract def sql_keywords(): Dataset[Row]

    Gets Spark SQL keywords.

    Since

    4.0.0
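
    Example (a minimal sketch, assuming spark.tvf):

      // Lists the keywords recognized by the Spark SQL parser.
      spark.tvf.sql_keywords().show()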

  14. abstract def stack(n: Column, fields: Column*): Dataset[Row]

    Separates col1, ..., colk into n rows. Uses column names col0, col1, etc. by default unless specified otherwise.

    Annotations
    @varargs()
    Since

    4.0.0
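
    Example (a minimal sketch, assuming spark.tvf):

      import org.apache.spark.sql.functions.lit
      // Splits the four values into n = 2 rows of two columns: (1, 2) and (3, 4).
      spark.tvf.stack(lit(2), lit(1), lit(2), lit(3), lit(4)).show()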

  15. abstract def variant_explode(input: Column): Dataset[Row]

    Separates a variant object/array into multiple rows containing its fields/elements. Its result schema is struct<pos int, key string, value variant>. pos is the position of the field/element in its parent object/array, and value is the field/element value. key is the field name when exploding a variant object, or is NULL when exploding a variant array. It ignores any input that is not a variant array/object, including SQL NULL, variant null, and any other variant values.

    Since

    4.0.0
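
    Example (a minimal sketch, assuming spark.tvf and the parse_json helper from org.apache.spark.sql.functions):

      import org.apache.spark.sql.functions.{lit, parse_json}
      // Two rows: pos 0 and 1, key NULL (the input is an array), and the element values.
      spark.tvf.variant_explode(parse_json(lit("""["x", "y"]"""))).show()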

  16. abstract def variant_explode_outer(input: Column): Dataset[Row]

    Separates a variant object/array into multiple rows containing its fields/elements. Its result schema is struct<pos int, key string, value variant>. pos is the position of the field/element in its parent object/array, and value is the field/element value. key is the field name when exploding a variant object, or is NULL when exploding a variant array. Unlike variant_explode, if the given variant is not a variant array/object, including SQL NULL, variant null, and any other variant values, then NULL is produced.

    Since

    4.0.0
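
    Example (a minimal sketch, assuming spark.tvf and parse_json):

      import org.apache.spark.sql.functions.{lit, parse_json}
      // The input is a variant null rather than an array/object, so a single
      // all-NULL row is produced (variant_explode would produce no rows here).
      spark.tvf.variant_explode_outer(parse_json(lit("null"))).show()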

Concrete Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
  6. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  7. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  8. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  9. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  10. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  11. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  12. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  13. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  14. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  15. def toString(): String
    Definition Classes
    AnyRef → Any
  16. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  17. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
  18. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable]) @Deprecated
    Deprecated

    (Since version 9)
