Class SQLContext

Object
org.apache.spark.sql.SQLContext
All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging

public abstract class SQLContext extends Object implements org.apache.spark.internal.Logging, Serializable
The entry point for working with structured data (rows and columns) in Spark 1.x.

As of Spark 2.0, this is replaced by SparkSession. However, we are keeping the class here for backward compatibility.

Since:
1.0.0
  • Method Details

    • getOrCreate

      public static SQLContext getOrCreate(SparkContext sparkContext)
      Deprecated.
      Use SparkSession.builder instead. Since 2.0.0.
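
      For illustration, a minimal migration sketch (assumes an existing SparkContext named sc; spark and sqlContext below are just local names):

         // Instead of SQLContext.getOrCreate(sc), build or reuse a SparkSession:
         val spark = org.apache.spark.sql.SparkSession.builder().config(sc.getConf).getOrCreate()
         val sqlContext = spark.sqlContext   // a legacy SQLContext, if one is still required
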
    • setActive

      public static void setActive(SQLContextCompanion sqlContext)
    • clearActive

      public static void clearActive()
    • parquetFile

      public Dataset<Row> parquetFile(String... paths)
      Deprecated.
      As of 1.4.0, replaced by read().parquet().
      Loads a Parquet file, returning the result as a DataFrame. This function returns an empty DataFrame if no paths are passed in.
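
      For illustration, the deprecated call and its read().parquet() replacement (the paths are placeholders):

         val before = sqlContext.parquetFile("/data/a.parquet", "/data/b.parquet")
         val after  = sqlContext.read.parquet("/data/a.parquet", "/data/b.parquet")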

      Parameters:
      paths - (undocumented)
      Returns:
      (undocumented)
    • sparkSession

      public SparkSession sparkSession()
    • sparkContext

      public SparkContext sparkContext()
    • newSession

      public abstract SQLContext newSession()
      Returns a SQLContext as a new session, which has separate SQL configurations, temporary tables, and registered functions, but shares the same SparkContext, cached data, and other state.
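
      For illustration, a minimal sketch of session isolation (the configuration key is just an example):

         val session2 = sqlContext.newSession()
         session2.setConf("spark.sql.shuffle.partitions", "4")      // visible only in session2
         assert(session2.sparkContext eq sqlContext.sparkContext)   // the SparkContext is shared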

      Returns:
      (undocumented)
      Since:
      1.6.0
    • listenerManager

      public abstract ExecutionListenerManager listenerManager()
      An interface to register custom QueryExecutionListeners that listen for execution metrics.
      Returns:
      (undocumented)
    • setConf

      public abstract void setConf(Properties props)
      Set Spark SQL configuration properties.

      Parameters:
      props - (undocumented)
      Since:
      1.0.0
    • setConf

      public void setConf(String key, String value)
      Set the given Spark SQL configuration property.
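
      For illustration, a minimal sketch setting a property and reading it back (the key shown is a standard Spark SQL setting):

         sqlContext.setConf("spark.sql.shuffle.partitions", "8")
         val n = sqlContext.getConf("spark.sql.shuffle.partitions", "200")   // returns "8"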

      Parameters:
      key - (undocumented)
      value - (undocumented)
      Since:
      1.0.0
    • getConf

      public String getConf(String key)
      Return the value of Spark SQL configuration property for the given key.

      Parameters:
      key - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.0.0
    • getConf

      public String getConf(String key, String defaultValue)
      Return the value of Spark SQL configuration property for the given key. If the key is not set yet, return defaultValue.

      Parameters:
      key - (undocumented)
      defaultValue - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.0.0
    • getAllConfs

      public scala.collection.immutable.Map<String,String> getAllConfs()
      Return all the configuration properties that have been explicitly set (i.e., excluding defaults). This creates a new copy of the config properties in the form of a Map.

      Returns:
      (undocumented)
      Since:
      1.0.0
    • experimental

      public abstract ExperimentalMethods experimental()
      :: Experimental :: A collection of methods that are considered experimental, but can be used to hook into the query planner for advanced functionality.

      Returns:
      (undocumented)
      Since:
      1.3.0
    • emptyDataFrame

      public Dataset<Row> emptyDataFrame()
      Returns a DataFrame with no rows or columns.

      Returns:
      (undocumented)
      Since:
      1.3.0
    • udf

      public abstract UDFRegistration udf()
      A collection of methods for registering user-defined functions (UDF).

      The following example registers a Scala closure as UDF:

      
         sqlContext.udf.register("myUDF", (arg1: Int, arg2: String) => arg2 + arg1)
       

      The following example registers a UDF in Java:

      
         sqlContext.udf().register("myUDF",
             (Integer arg1, String arg2) -> arg2 + arg1,
             DataTypes.StringType);
       

      Returns:
      (undocumented)
      Since:
      1.3.0
      Note:
      The user-defined functions must be deterministic. Due to optimization, duplicate invocations may be eliminated or the function may even be invoked more times than it is present in the query.

    • implicits

      public abstract SQLImplicits implicits()
      (Scala-specific) Implicit methods available in Scala for converting common Scala objects into DataFrames.

      
         val sqlContext = new SQLContext(sc)
         import sqlContext.implicits._
       

      Returns:
      (undocumented)
      Since:
      1.3.0
    • isCached

      public boolean isCached(String tableName)
      Returns true if the table is currently cached in-memory.
      Parameters:
      tableName - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • cacheTable

      public void cacheTable(String tableName)
      Caches the specified table in-memory.
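
      For illustration, a minimal sketch (assumes a temporary view named "people" has already been registered on this context):

         sqlContext.cacheTable("people")           // materialize the table's data in memory
         assert(sqlContext.isCached("people"))     // isCached now reports true
         sqlContext.uncacheTable("people")         // evict it from the in-memory cache again
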
      Parameters:
      tableName - (undocumented)
      Since:
      1.3.0
    • uncacheTable

      public void uncacheTable(String tableName)
      Removes the specified table from the in-memory cache.
      Parameters:
      tableName - (undocumented)
      Since:
      1.3.0
    • clearCache

      public void clearCache()
      Removes all cached tables from the in-memory cache.
      Since:
      1.3.0
    • createDataFrame

      public <A extends scala.Product> Dataset<Row> createDataFrame(RDD<A> rdd, scala.reflect.api.TypeTags.TypeTag<A> evidence$1)
      Creates a DataFrame from an RDD of Product (e.g. case classes, tuples).
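
      For illustration, a minimal sketch (assumes a SparkContext named sc; the TypeTag evidence parameter is supplied implicitly in Scala):

         case class Person(name: String, age: Int)
         val people = sc.parallelize(Seq(Person("Alice", 29), Person("Bob", 31)))
         val df = sqlContext.createDataFrame(people)
         df.show()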

      Parameters:
      rdd - (undocumented)
      evidence$1 - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createDataFrame

      public <A extends scala.Product> Dataset<Row> createDataFrame(scala.collection.immutable.Seq<A> data, scala.reflect.api.TypeTags.TypeTag<A> evidence$2)
      Creates a DataFrame from a local Seq of Product.

      Parameters:
      data - (undocumented)
      evidence$2 - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • baseRelationToDataFrame

      public Dataset<Row> baseRelationToDataFrame(BaseRelation baseRelation)
      Convert a BaseRelation created for external data sources into a DataFrame.

      Parameters:
      baseRelation - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createDataFrame

      public Dataset<Row> createDataFrame(RDD<Row> rowRDD, StructType schema)
      :: DeveloperApi :: Creates a DataFrame from an RDD containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided RDD matches the provided schema. Otherwise, a runtime exception will be thrown. Example:
      
        import org.apache.spark.sql._
        import org.apache.spark.sql.types._
        val sqlContext = new org.apache.spark.sql.SQLContext(sc)
      
        val schema =
          StructType(
            StructField("name", StringType, false) ::
            StructField("age", IntegerType, true) :: Nil)
      
        val people =
          sc.textFile("examples/src/main/resources/people.txt").map(
            _.split(",")).map(p => Row(p(0), p(1).trim.toInt))
        val dataFrame = sqlContext.createDataFrame(people, schema)
        dataFrame.printSchema
        // root
        // |-- name: string (nullable = false)
        // |-- age: integer (nullable = true)
      
        dataFrame.createOrReplaceTempView("people")
        sqlContext.sql("select name from people").collect.foreach(println)
       

      Parameters:
      rowRDD - (undocumented)
      schema - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createDataset

      public <T> Dataset<T> createDataset(scala.collection.immutable.Seq<T> data, Encoder<T> evidence$3)
      Creates a Dataset from a local Seq of data of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.

      ==Example==

      
      
         import spark.implicits._
         case class Person(name: String, age: Long)
         val data = Seq(Person("Michael", 29), Person("Andy", 30), Person("Justin", 19))
         val ds = spark.createDataset(data)
      
         ds.show()
         // +-------+---+
         // |   name|age|
         // +-------+---+
         // |Michael| 29|
         // |   Andy| 30|
         // | Justin| 19|
         // +-------+---+
       

      Parameters:
      data - (undocumented)
      evidence$3 - (undocumented)
      Returns:
      (undocumented)
      Since:
      2.0.0
    • createDataset

      public <T> Dataset<T> createDataset(RDD<T> data, Encoder<T> evidence$4)
      Creates a Dataset from an RDD of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.
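
      For illustration, a minimal sketch (assumes a SparkContext named sc; the Encoder is obtained from this context's implicits):

         import sqlContext.implicits._
         val ds = sqlContext.createDataset(sc.parallelize(Seq(1L, 2L, 3L)))
         ds.show()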

      Parameters:
      data - (undocumented)
      evidence$4 - (undocumented)
      Returns:
      (undocumented)
      Since:
      2.0.0
    • createDataset

      public <T> Dataset<T> createDataset(List<T> data, Encoder<T> evidence$5)
      Creates a Dataset from a java.util.List of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.

      ==Java Example==

      
           List<String> data = Arrays.asList("hello", "world");
           Dataset<String> ds = spark.createDataset(data, Encoders.STRING());
       

      Parameters:
      data - (undocumented)
      evidence$5 - (undocumented)
      Returns:
      (undocumented)
      Since:
      2.0.0
    • createDataFrame

      public Dataset<Row> createDataFrame(JavaRDD<Row> rowRDD, StructType schema)
      :: DeveloperApi :: Creates a DataFrame from a JavaRDD containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided RDD matches the provided schema. Otherwise, a runtime exception will be thrown.

      Parameters:
      rowRDD - (undocumented)
      schema - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createDataFrame

      public Dataset<Row> createDataFrame(List<Row> rows, StructType schema)
      :: DeveloperApi :: Creates a DataFrame from a java.util.List containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided List matches the provided schema. Otherwise, a runtime exception will be thrown.

      Parameters:
      rows - (undocumented)
      schema - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.6.0
    • createDataFrame

      public Dataset<Row> createDataFrame(RDD<?> rdd, Class<?> beanClass)
      Applies a schema to an RDD of Java Beans.

      WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.

      Parameters:
      rdd - (undocumented)
      beanClass - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createDataFrame

      public Dataset<Row> createDataFrame(JavaRDD<?> rdd, Class<?> beanClass)
      Applies a schema to an RDD of Java Beans.

      WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.

      Parameters:
      rdd - (undocumented)
      beanClass - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createDataFrame

      public Dataset<Row> createDataFrame(List<?> data, Class<?> beanClass)
      Applies a schema to a List of Java Beans.

      WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.

      Parameters:
      data - (undocumented)
      beanClass - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.6.0
    • read

      public abstract DataFrameReader read()
      Returns a DataFrameReader that can be used to read non-streaming data in as a DataFrame.
      
         sqlContext.read.parquet("/path/to/file.parquet")
         sqlContext.read.schema(schema).json("/path/to/file.json")
       

      Returns:
      (undocumented)
      Since:
      1.4.0
    • readStream

      public abstract DataStreamReader readStream()
      Returns a DataStreamReader that can be used to read streaming data in as a DataFrame.
      
         sparkSession.readStream.parquet("/path/to/directory/of/parquet/files")
         sparkSession.readStream.schema(schema).json("/path/to/directory/of/json/files")
       

      Returns:
      (undocumented)
      Since:
      2.0.0
    • createExternalTable

      public Dataset<Row> createExternalTable(String tableName, String path)
      Deprecated.
      Use sparkSession.catalog.createTable instead. Since 2.2.0.
      Creates an external table from the given path and returns the corresponding DataFrame. It will use the default data source configured by spark.sql.sources.default.
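
      For illustration, the deprecated call and its suggested replacement (the table name and path are placeholders; spark denotes a SparkSession):

         val before = sqlContext.createExternalTable("events", "/data/events")
         val after  = spark.catalog.createTable("events", "/data/events")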

      Parameters:
      tableName - (undocumented)
      path - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createExternalTable

      public Dataset<Row> createExternalTable(String tableName, String path, String source)
      Deprecated.
      Use sparkSession.catalog.createTable instead. Since 2.2.0.
      Creates an external table from the given path based on a data source and returns the corresponding DataFrame.

      Parameters:
      tableName - (undocumented)
      path - (undocumented)
      source - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createExternalTable

      public Dataset<Row> createExternalTable(String tableName, String source, Map<String,String> options)
      Deprecated.
      Use sparkSession.catalog.createTable instead. Since 2.2.0.
      Creates an external table from the given path based on a data source and a set of options. Then, returns the corresponding DataFrame.

      Parameters:
      tableName - (undocumented)
      source - (undocumented)
      options - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createExternalTable

      public Dataset<Row> createExternalTable(String tableName, String source, scala.collection.immutable.Map<String,String> options)
      Deprecated.
      Use sparkSession.catalog.createTable instead. Since 2.2.0.
      (Scala-specific) Creates an external table from the given path based on a data source and a set of options. Then, returns the corresponding DataFrame.

      Parameters:
      tableName - (undocumented)
      source - (undocumented)
      options - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createExternalTable

      public Dataset<Row> createExternalTable(String tableName, String source, StructType schema, Map<String,String> options)
      Deprecated.
      Use sparkSession.catalog.createTable instead. Since 2.2.0.
      Creates an external table from the given path based on a data source, a schema and a set of options. Then, returns the corresponding DataFrame.

      Parameters:
      tableName - (undocumented)
      source - (undocumented)
      schema - (undocumented)
      options - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • createExternalTable

      public Dataset<Row> createExternalTable(String tableName, String source, StructType schema, scala.collection.immutable.Map<String,String> options)
      Deprecated.
      Use sparkSession.catalog.createTable instead. Since 2.2.0.
      (Scala-specific) Creates an external table from the given path based on a data source, a schema and a set of options. Then, returns the corresponding DataFrame.

      Parameters:
      tableName - (undocumented)
      source - (undocumented)
      schema - (undocumented)
      options - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • dropTempTable

      public void dropTempTable(String tableName)
      Drops the temporary table with the given table name in the catalog. If the table has been cached/persisted before, it's also unpersisted.
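
      For illustration, a minimal sketch (the view name is a placeholder):

         sqlContext.range(10).createOrReplaceTempView("tmp_ids")   // register a temporary view
         sqlContext.dropTempTable("tmp_ids")                       // unregister it (and uncache it, if cached)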

      Parameters:
      tableName - the name of the table to be unregistered.
      Since:
      1.3.0
    • range

      public Dataset<Row> range(long end)
      Creates a DataFrame with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.
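
      For illustration, a minimal sketch using an existing sqlContext:

         sqlContext.range(3).show()
         // +---+
         // | id|
         // +---+
         // |  0|
         // |  1|
         // |  2|
         // +---+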

      Parameters:
      end - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.4.1
    • range

      public Dataset<Row> range(long start, long end)
      Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.

      Parameters:
      start - (undocumented)
      end - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.4.0
    • range

      public Dataset<Row> range(long start, long end, long step)
      Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value.

      Parameters:
      start - (undocumented)
      end - (undocumented)
      step - (undocumented)
      Returns:
      (undocumented)
      Since:
      2.0.0
    • range

      public Dataset<Row> range(long start, long end, long step, int numPartitions)
      Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value, with the specified number of partitions.

      Parameters:
      start - (undocumented)
      end - (undocumented)
      step - (undocumented)
      numPartitions - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.4.0
    • sql

      public Dataset<Row> sql(String sqlText)
      Executes a SQL query using Spark, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but SELECT queries are evaluated lazily.
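
      For illustration, a minimal sketch (the table name is a placeholder):

         sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING) USING parquet")   // DDL runs eagerly
         val top = sqlContext.sql("SELECT key, value FROM src ORDER BY key LIMIT 10")             // lazy DataFrame
         top.show()                                                                               // triggers execution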

      Parameters:
      sqlText - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • table

      public Dataset<Row> table(String tableName)
      Returns the specified table as a DataFrame.

      Parameters:
      tableName - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • tables

      public Dataset<Row> tables()
      Returns a DataFrame containing names of existing tables in the current database. The returned DataFrame has three columns: database, tableName, and isTemporary (a Boolean indicating whether a table is temporary).
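
      For illustration, a minimal sketch listing only temporary tables:

         sqlContext.tables().filter("isTemporary = true").select("tableName").show()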

      Returns:
      (undocumented)
      Since:
      1.3.0
    • tables

      public Dataset<Row> tables(String databaseName)
      Returns a DataFrame containing names of existing tables in the given database. The returned DataFrame has three columns: database, tableName, and isTemporary (a Boolean indicating whether a table is temporary).

      Parameters:
      databaseName - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • streams

      public abstract StreamingQueryManager streams()
      Returns a StreamingQueryManager that allows managing all the StreamingQueries active on this context.

      Returns:
      (undocumented)
      Since:
      2.0.0
    • tableNames

      public String[] tableNames()
      Returns the names of tables in the current database as an array.

      Returns:
      (undocumented)
      Since:
      1.3.0
    • tableNames

      public String[] tableNames(String databaseName)
      Returns the names of tables in the given database as an array.

      Parameters:
      databaseName - (undocumented)
      Returns:
      (undocumented)
      Since:
      1.3.0
    • applySchema

      public Dataset<Row> applySchema(RDD<Row> rowRDD, StructType schema)
      Deprecated.
      As of 1.3.0, replaced by createDataFrame().
      Parameters:
      rowRDD - (undocumented)
      schema - (undocumented)
      Returns:
      (undocumented)
    • applySchema

      public Dataset<Row> applySchema(JavaRDD<Row> rowRDD, StructType schema)
      Deprecated.
      As of 1.3.0, replaced by createDataFrame().
      Parameters:
      rowRDD - (undocumented)
      schema - (undocumented)
      Returns:
      (undocumented)
    • applySchema

      public Dataset<Row> applySchema(RDD<?> rdd, Class<?> beanClass)
      Deprecated.
      As of 1.3.0, replaced by createDataFrame().
      Parameters:
      rdd - (undocumented)
      beanClass - (undocumented)
      Returns:
      (undocumented)
    • applySchema

      public Dataset<Row> applySchema(JavaRDD<?> rdd, Class<?> beanClass)
      Deprecated.
      As of 1.3.0, replaced by createDataFrame().
      Parameters:
      rdd - (undocumented)
      beanClass - (undocumented)
      Returns:
      (undocumented)
    • parquetFile

      public Dataset<Row> parquetFile(scala.collection.immutable.Seq<String> paths)
      Deprecated.
      As of 1.4.0, replaced by read().parquet().
      Loads a Parquet file, returning the result as a DataFrame. This function returns an empty DataFrame if no paths are passed in.

      Parameters:
      paths - (undocumented)
      Returns:
      (undocumented)
    • jsonFile

      public Dataset<Row> jsonFile(String path)
      Deprecated.
      As of 1.4.0, replaced by read().json().
      Loads a JSON file (one object per line), returning the result as a DataFrame. It goes through the entire dataset once to determine the schema.
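
      For illustration, the deprecated call and its read().json() replacement (the path is a placeholder):

         val before = sqlContext.jsonFile("/data/people.json")
         val after  = sqlContext.read.json("/data/people.json")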

      Parameters:
      path - (undocumented)
      Returns:
      (undocumented)
    • jsonFile

      public Dataset<Row> jsonFile(String path, StructType schema)
      Deprecated.
      As of 1.4.0, replaced by read().json().
      Loads a JSON file (one object per line) and applies the given schema, returning the result as a DataFrame.

      Parameters:
      path - (undocumented)
      schema - (undocumented)
      Returns:
      (undocumented)
    • jsonFile

      public Dataset<Row> jsonFile(String path, double samplingRatio)
      Deprecated.
      As of 1.4.0, replaced by read().json().
      Parameters:
      path - (undocumented)
      samplingRatio - (undocumented)
      Returns:
      (undocumented)
    • jsonRDD

      public Dataset<Row> jsonRDD(RDD<String> json)
      Deprecated.
      As of 1.4.0, replaced by read().json().
      Loads an RDD[String] storing JSON objects (one object per record), returning the result as a DataFrame. It goes through the entire dataset once to determine the schema.

      Parameters:
      json - (undocumented)
      Returns:
      (undocumented)
    • jsonRDD

      public Dataset<Row> jsonRDD(JavaRDD<String> json)
      Deprecated.
      As of 1.4.0, replaced by read().json().
      Loads an RDD[String] storing JSON objects (one object per record), returning the result as a DataFrame. It goes through the entire dataset once to determine the schema.

      Parameters:
      json - (undocumented)
      Returns:
      (undocumented)
    • jsonRDD

      public Dataset<Row> jsonRDD(RDD<String> json, StructType schema)
      Deprecated.
      As of 1.4.0, replaced by read().json().
      Loads an RDD[String] storing JSON objects (one object per record) and applies the given schema, returning the result as a DataFrame.

      Parameters:
      json - (undocumented)
      schema - (undocumented)
      Returns:
      (undocumented)
    • jsonRDD

      public Dataset<Row> jsonRDD(JavaRDD<String> json, StructType schema)
      Deprecated.
      As of 1.4.0, replaced by read().json().
      Loads a JavaRDD[String] storing JSON objects (one object per record) and applies the given schema, returning the result as a DataFrame.

      Parameters:
      json - (undocumented)
      schema - (undocumented)
      Returns:
      (undocumented)
    • jsonRDD

      public Dataset<Row> jsonRDD(RDD<String> json, double samplingRatio)
      Deprecated.
      As of 1.4.0, replaced by read().json().
      Loads an RDD[String] storing JSON objects (one object per record) inferring the schema, returning the result as a DataFrame.

      Parameters:
      json - (undocumented)
      samplingRatio - (undocumented)
      Returns:
      (undocumented)
    • jsonRDD

      public Dataset<Row> jsonRDD(JavaRDD<String> json, double samplingRatio)
      Deprecated.
      As of 1.4.0, replaced by read().json().
      Loads a JavaRDD[String] storing JSON objects (one object per record) inferring the schema, returning the result as a DataFrame.

      Parameters:
      json - (undocumented)
      samplingRatio - (undocumented)
      Returns:
      (undocumented)
    • load

      public Dataset<Row> load(String path)
      Deprecated.
      As of 1.4.0, replaced by read().load(path).
      Returns the dataset stored at path as a DataFrame, using the default data source configured by spark.sql.sources.default.
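
      For illustration, the deprecated call and its read().load() replacement (the path is a placeholder):

         val before = sqlContext.load("/data/events")
         val after  = sqlContext.read.load("/data/events")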

      Parameters:
      path - (undocumented)
      Returns:
      (undocumented)
    • load

      public Dataset<Row> load(String path, String source)
      Deprecated.
      As of 1.4.0, replaced by read().format(source).load(path).
      Returns the dataset stored at path as a DataFrame, using the given data source.

      Parameters:
      path - (undocumented)
      source - (undocumented)
      Returns:
      (undocumented)
    • load

      public Dataset<Row> load(String source, Map<String,String> options)
      Deprecated.
      As of 1.4.0, replaced by read().format(source).options(options).load().
      (Java-specific) Returns the dataset specified by the given data source and a set of options as a DataFrame.

      Parameters:
      source - (undocumented)
      options - (undocumented)
      Returns:
      (undocumented)
    • load

      public Dataset<Row> load(String source, scala.collection.immutable.Map<String,String> options)
      Deprecated.
      As of 1.4.0, replaced by read().format(source).options(options).load().
      (Scala-specific) Returns the dataset specified by the given data source and a set of options as a DataFrame.

      Parameters:
      source - (undocumented)
      options - (undocumented)
      Returns:
      (undocumented)
    • load

      public Dataset<Row> load(String source, StructType schema, Map<String,String> options)
      Deprecated.
      As of 1.4.0, replaced by read().format(source).schema(schema).options(options).load().
      (Java-specific) Returns the dataset specified by the given data source and a set of options as a DataFrame, using the given schema as the schema of the DataFrame.

      Parameters:
      source - (undocumented)
      schema - (undocumented)
      options - (undocumented)
      Returns:
      (undocumented)
    • load

      public Dataset<Row> load(String source, StructType schema, scala.collection.immutable.Map<String,String> options)
      Deprecated.
      As of 1.4.0, replaced by read().format(source).schema(schema).options(options).load().
      (Scala-specific) Returns the dataset specified by the given data source and a set of options as a DataFrame, using the given schema as the schema of the DataFrame.

      Parameters:
      source - (undocumented)
      schema - (undocumented)
      options - (undocumented)
      Returns:
      (undocumented)
    • jdbc

      public Dataset<Row> jdbc(String url, String table)
      Deprecated.
      As of 1.4.0, replaced by read().jdbc().
      Construct a DataFrame representing the database table accessible via JDBC URL url named table.
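
      For illustration, the deprecated call and its read().jdbc() replacement (the connection details are placeholders):

         val url = "jdbc:postgresql://dbhost:5432/mydb?user=test&password=secret"
         val before = sqlContext.jdbc(url, "orders")
         val after  = sqlContext.read.jdbc(url, "orders", new java.util.Properties())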

      Parameters:
      url - (undocumented)
      table - (undocumented)
      Returns:
      (undocumented)
    • jdbc

      public Dataset<Row> jdbc(String url, String table, String columnName, long lowerBound, long upperBound, int numPartitions)
      Deprecated.
      As of 1.4.0, replaced by read().jdbc().
      Construct a DataFrame representing the database table accessible via JDBC URL url named table. Partitions of the table will be retrieved in parallel based on the parameters passed to this function.

      Parameters:
      columnName - the name of a column of integral type that will be used for partitioning.
      lowerBound - the minimum value of columnName used to decide partition stride
      upperBound - the maximum value of columnName used to decide partition stride
      numPartitions - the number of partitions. The range from lowerBound to upperBound will be split evenly into this many partitions.
      url - (undocumented)
      table - (undocumented)
      Returns:
      (undocumented)
    • jdbc

      public Dataset<Row> jdbc(String url, String table, String[] theParts)
      Deprecated.
      As of 1.4.0, replaced by read().jdbc().
      Construct a DataFrame representing the database table accessible via JDBC URL url named table. The theParts parameter gives a list of expressions suitable for inclusion in WHERE clauses; each one defines one partition of the DataFrame.

      Parameters:
      url - (undocumented)
      table - (undocumented)
      theParts - (undocumented)
      Returns:
      (undocumented)