ADB Spark Connector overview

ADB Spark Connector provides high-speed parallel data exchange between Apache Spark and Arenadata DB (ADB).

Architecture

Each Spark application consists of a controlling process (driver) and a number of distributed worker processes (executors). The interaction scheme is provided below.

ADB Spark Connector architecture

Data reading

To load an ADB table into Spark, ADB Spark Connector initializes a driver. The driver establishes a connection to the ADB master via JDBC to obtain the necessary metadata about the table: its structure, column types, and distribution key. The distribution key allows the connector to distribute the table data across the available Spark executors.
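
For illustration, the metadata lookup that the driver performs can be reproduced manually over JDBC. Below is a minimal script-style sketch, assuming the standard PostgreSQL JDBC driver and the Greenplum-style catalogs that ADB inherits; the host, credentials, and the public.demo table are placeholders:

  import java.sql.DriverManager

  // Hypothetical connection settings: replace host, database, and credentials
  val conn = DriverManager.getConnection(
    "jdbc:postgresql://adb-master:5432/adb", "gpadmin", "secret")
  try {
    // Column names and types of the target table from the standard PostgreSQL catalogs
    val cols = conn.createStatement().executeQuery(
      """SELECT a.attname, t.typname
        |FROM pg_attribute a JOIN pg_type t ON t.oid = a.atttypid
        |WHERE a.attrelid = 'public.demo'::regclass
        |  AND a.attnum > 0 AND NOT a.attisdropped""".stripMargin)
    while (cols.next())
      println(s"${cols.getString("attname")}: ${cols.getString("typname")}")

    // Distribution key: gp_distribution_policy is the Greenplum catalog that ADB
    // inherits; the distkey column name follows the Greenplum 6 catalog layout
    val dist = conn.createStatement().executeQuery(
      "SELECT distkey FROM gp_distribution_policy WHERE localoid = 'public.demo'::regclass")
    if (dist.next())
      println(s"distribution key attribute numbers: ${dist.getString("distkey")}")
  } finally conn.close()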

The data is stored on ADB segments. A Spark application that uses the connector can transfer data from each segment into one or more Spark partitions. There are five partition modes (a usage sketch follows the list):

  • According to gp_segment_id. Data from an ADB table is read and distributed across Spark partitions according to the segment it is stored on. The number of Spark partitions corresponds to the number of active ADB segments. This partition mode requires no additional parameters and is used by default.

  • According to the specified column and the number of partitions. Data is divided into Spark partitions by splitting the column's value range into the specified number of ranges. The number of Spark partitions corresponds to the number of ranges. It is necessary to set the appropriate parameters (see Spark connector options). This mode has a limitation: only integer and date/time table fields can be used as the column.

  • According to the specified column. Data is divided into Spark partitions based on the unique values of the specified column. The number of Spark partitions corresponds to the number of unique values of that column. It is necessary to set the appropriate parameters (see Spark connector options). This mode is recommended when the set of values is small and limited.

  • According to the specified number of partitions. Data is divided into the specified number of Spark partitions. It is necessary to set the appropriate parameters (see Spark connector options).

  • According to the specified hash function and the number of partitions. Data is divided into Spark partitions based on the specified hash function and the specified number of partitions. It is necessary to set the appropriate parameters (see Spark connector options). This mode has a limitation: only an expression that returns an integer can be used as the hash function.
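
The sketch below shows how these modes might be selected from Spark. It is illustrative only: the adb format name and the spark.db.* read option keys are assumptions (the spark.db prefix is borrowed from the write options described later on this page); see Spark connector options for the exact keys.

  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder()
    .appName("adb-read-sketch")
    .getOrCreate()

  // Default mode: one Spark partition per active ADB segment (gp_segment_id);
  // no extra partitioning options are required
  val bySegment = spark.read
    .format("adb")                                                   // assumed format name
    .option("spark.db.url", "jdbc:postgresql://adb-master:5432/adb") // hypothetical key
    .option("spark.db.dbtable", "public.demo")                       // hypothetical key
    .load()

  // Column-range mode: split an integer or date/time column into N ranges
  val byRange = spark.read
    .format("adb")
    .option("spark.db.url", "jdbc:postgresql://adb-master:5432/adb") // hypothetical key
    .option("spark.db.dbtable", "public.demo")                       // hypothetical key
    .option("spark.db.partition.column", "id")                       // hypothetical key
    .option("spark.db.partition.count", "8")                         // hypothetical key
    .load()

  // In the default mode the partition count should equal the number of active segments
  println(s"partitions: ${bySegment.rdd.getNumPartitions}")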

Data processing tasks are distributed across all Spark executors, each task handling the partition created at the previous step. The data exchange is performed in parallel via external writable tables, one for every partition.

Data reading algorithm

Data writing

To load a table from Spark into ADB, ADB Spark Connector initializes a driver. The driver establishes a connection to the ADB master via JDBC and prepares ADB for loading. The load method depends on the writing mode.

The following writing modes are supported:

  • overwrite. Depending on spark.db.table.truncate, the target table is either completely deleted or cleared of all data before writing.

  • append. The additional data is appended to the target table.

  • errorIfExists. If the target table already exists, an error occurs.

A number of spark.db.create.table options are also available. They allow you to specify additional settings when the target table is created (see Spark connector options).
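
A hedged write sketch follows. The spark.db.table.truncate option name comes from this section; the adb format name and the url/dbtable keys are assumptions for illustration (see Spark connector options for the exact keys).

  import org.apache.spark.sql.{SaveMode, SparkSession}

  val spark = SparkSession.builder().appName("adb-write-sketch").getOrCreate()
  val df = spark.range(0, 1000).toDF("id")

  // overwrite mode: with spark.db.table.truncate=true the existing data is
  // cleaned up instead of the table being completely deleted before writing
  df.write
    .format("adb")                                                   // assumed format name
    .mode(SaveMode.Overwrite)
    .option("spark.db.url", "jdbc:postgresql://adb-master:5432/adb") // hypothetical key
    .option("spark.db.dbtable", "public.demo_out")                   // hypothetical key
    .option("spark.db.table.truncate", "true")
    .save()

  // append mode: the additional rows are written to the existing target table
  df.write
    .format("adb")
    .mode(SaveMode.Append)
    .option("spark.db.url", "jdbc:postgresql://adb-master:5432/adb") // hypothetical key
    .option("spark.db.dbtable", "public.demo_out")                   // hypothetical key
    .save()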

A data load task is created for each Spark partition. The data exchange is performed in parallel via external readable tables, one for every segment.

Data writing algorithm

Supported data types

The mapping of data types during a transfer is shown in the tables below.

Transfer from ADB to Spark

  ADB          Spark
  bigint       LongType
  bigserial    LongType
  bit          StringType
  bytea        BinaryType
  boolean      BooleanType
  char         StringType
  date         DateType
  decimal      DecimalType
  float4       FloatType
  float8       DoubleType
  int          IntegerType
  interval     CalendarIntervalType
  serial       IntegerType
  smallint     ShortType
  text         StringType
  time         TimestampType
  timestamp    TimestampType
  timestamptz  TimestampType
  timetz       TimestampType
  varchar      StringType

Transfer from Spark to ADB

  Spark                 ADB
  BinaryType            bytea
  BooleanType           boolean
  CalendarIntervalType  interval
  DateType              date
  DecimalType           numeric
  DoubleType            float8
  FloatType             float4
  IntegerType           int
  LongType              bigint
  ShortType             smallint
  StringType            text
  TimestampType         timestamp
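
The mapping can be checked directly on a DataFrame schema. A minimal sketch, assuming a hypothetical ADB table defined as (id bigint, name text, price decimal(10,2), created timestamp):

  import org.apache.spark.sql.types._

  // Spark schema expected for the hypothetical table above,
  // following the ADB-to-Spark mapping table
  val expected = StructType(Seq(
    StructField("id", LongType),              // bigint    -> LongType
    StructField("name", StringType),          // text      -> StringType
    StructField("price", DecimalType(10, 2)), // decimal   -> DecimalType
    StructField("created", TimestampType)     // timestamp -> TimestampType
  ))

  // After loading the table (see the read sketch above),
  // df.schema should match; df.printSchema() prints the same tree
  println(expected.treeString)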
