@Internal public final class StreamTableEnvironmentImpl extends TableEnvironmentImpl implements StreamTableEnvironment

The implementation for a Java StreamTableEnvironment. This enables conversions from/to DataStream. It binds to a given StreamExecutionEnvironment.

Fields inherited from class TableEnvironmentImpl: execEnv, functionCatalog, planner, tableConfig
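A minimal sketch of typical usage. Note that user code normally obtains this class via StreamTableEnvironment.create rather than through the constructor; the element values below are illustrative:

```java
// bind a table environment to a given StreamExecutionEnvironment
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

// DataStream -> Table -> DataStream round trip
Table table = tableEnv.fromDataStream(env.fromElements("a", "b", "c"));
tableEnv.toDataStream(table).print();

env.execute();
```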
| Constructor and Description |
|---|
| StreamTableEnvironmentImpl(CatalogManager catalogManager, ModuleManager moduleManager, FunctionCatalog functionCatalog, TableConfig tableConfig, StreamExecutionEnvironment executionEnvironment, Planner planner, Executor executor, boolean isStreamingMode, ClassLoader userClassLoader) |
| Modifier and Type | Method and Description |
|---|---|
| StreamTableDescriptor | connect(ConnectorDescriptor connectorDescriptor) Creates a temporary table from a descriptor. |
| static StreamTableEnvironment | create(StreamExecutionEnvironment executionEnvironment, EnvironmentSettings settings, TableConfig tableConfig) |
| <T> void | createTemporaryView(String path, DataStream<T> dataStream) Creates a view from the given DataStream in a given path. |
| <T> void | createTemporaryView(String path, DataStream<T> dataStream, Expression... fields) Creates a view from the given DataStream in a given path with specified field names. |
| <T> void | createTemporaryView(String path, DataStream<T> dataStream, Schema schema) Creates a view from the given DataStream in a given path. |
| <T> void | createTemporaryView(String path, DataStream<T> dataStream, String fields) Creates a view from the given DataStream in a given path with specified field names. |
| StreamExecutionEnvironment | execEnv() This is a temporary workaround for Python API. |
| <T> Table | fromDataStream(DataStream<T> dataStream) Converts the given DataStream into a Table. |
| <T> Table | fromDataStream(DataStream<T> dataStream, Expression... fields) Converts the given DataStream into a Table with specified field names. |
| <T> Table | fromDataStream(DataStream<T> dataStream, Schema schema) Converts the given DataStream into a Table. |
| <T> Table | fromDataStream(DataStream<T> dataStream, String fields) Converts the given DataStream into a Table with specified field names. |
| Pipeline | getPipeline(String jobName) This method is used for sql client to submit job. |
| protected QueryOperation | qualifyQueryOperation(ObjectIdentifier identifier, QueryOperation queryOperation) Subclasses can override this method to transform the given QueryOperation to a new one with the qualified object identifier. |
| <T> void | registerDataStream(String name, DataStream<T> dataStream) Creates a view from the given DataStream. |
| <T> void | registerDataStream(String name, DataStream<T> dataStream, String fields) Creates a view from the given DataStream in a given path with specified field names. |
| <T,ACC> void | registerFunction(String name, AggregateFunction<T,ACC> aggregateFunction) Registers an AggregateFunction under a unique name in the TableEnvironment's catalog. |
| <T,ACC> void | registerFunction(String name, TableAggregateFunction<T,ACC> tableAggregateFunction) Registers a TableAggregateFunction under a unique name in the TableEnvironment's catalog. |
| <T> void | registerFunction(String name, TableFunction<T> tableFunction) Registers a TableFunction under a unique name in the TableEnvironment's catalog. |
| <T> DataStream<T> | toAppendStream(Table table, Class<T> clazz) Converts the given Table into an append DataStream of a specified type. |
| <T> DataStream<T> | toAppendStream(Table table, TypeInformation<T> typeInfo) Converts the given Table into an append DataStream of a specified type. |
| DataStream<Row> | toDataStream(Table table) Converts the given Table into a DataStream. |
| <T> DataStream<T> | toDataStream(Table table, AbstractDataType<?> targetDataType) |
| <T> DataStream<T> | toDataStream(Table table, Class<T> targetClass) |
| <T> DataStream<Tuple2<Boolean,T>> | toRetractStream(Table table, Class<T> clazz) Converts the given Table into a DataStream of add and retract messages. |
| <T> DataStream<Tuple2<Boolean,T>> | toRetractStream(Table table, TypeInformation<T> typeInfo) Converts the given Table into a DataStream of add and retract messages. |
| protected void | validateTableSource(TableSource<?> tableSource) Subclasses can override this method to add additional checks. |
Methods inherited from class TableEnvironmentImpl: create, create, createFunction, createFunction, createStatementSet, createTable, createTemporaryFunction, createTemporaryFunction, createTemporarySystemFunction, createTemporarySystemFunction, createTemporaryView, dropFunction, dropTemporaryFunction, dropTemporarySystemFunction, dropTemporaryTable, dropTemporaryView, execute, executeInternal, executeInternal, executeJsonPlan, executeSql, explain, explain, explain, explainInternal, explainJsonPlan, explainSql, from, fromTableSource, fromValues, fromValues, fromValues, fromValues, fromValues, fromValues, getCatalog, getCatalogManager, getCompletionHints, getConfig, getCurrentCatalog, getCurrentDatabase, getExplainDetails, getJsonPlan, getJsonPlan, getOperationTreeBuilder, getParser, getPlanner, insertInto, insertInto, listCatalogs, listDatabases, listFullModules, listFunctions, listModules, listTables, listTemporaryTables, listTemporaryViews, listUserDefinedFunctions, listViews, loadModule, registerCatalog, registerFunction, registerTable, registerTableSinkInternal, registerTableSourceInternal, scan, sqlQuery, sqlUpdate, translateAndClearBuffer, unloadModule, useCatalog, useDatabase, useModules

Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface StreamTableEnvironment: create, create, create, execute

Methods inherited from interface TableEnvironment: create, create, createFunction, createFunction, createStatementSet, createTemporaryFunction, createTemporaryFunction, createTemporarySystemFunction, createTemporarySystemFunction, createTemporaryView, dropFunction, dropTemporaryFunction, dropTemporarySystemFunction, dropTemporaryTable, dropTemporaryView, executeSql, explain, explain, explain, explainSql, from, fromTableSource, fromValues, fromValues, fromValues, fromValues, fromValues, fromValues, getCatalog, getCompletionHints, getConfig, getCurrentCatalog, getCurrentDatabase, insertInto, insertInto, listCatalogs, listDatabases, listFullModules, listFunctions, listModules, listTables, listTemporaryTables, listTemporaryViews, listUserDefinedFunctions, listViews, loadModule, registerCatalog, registerFunction, registerTable, scan, sqlQuery, sqlUpdate, unloadModule, useCatalog, useDatabase, useModules
public StreamTableEnvironmentImpl(CatalogManager catalogManager, ModuleManager moduleManager, FunctionCatalog functionCatalog, TableConfig tableConfig, StreamExecutionEnvironment executionEnvironment, Planner planner, Executor executor, boolean isStreamingMode, ClassLoader userClassLoader)
public static StreamTableEnvironment create(StreamExecutionEnvironment executionEnvironment, EnvironmentSettings settings, TableConfig tableConfig)
public <T> void registerFunction(String name, TableFunction<T> tableFunction)
Description copied from interface: StreamTableEnvironment
Registers a TableFunction under a unique name in the TableEnvironment's catalog. Registered functions can be referenced in Table API and SQL queries.
Specified by: registerFunction in interface StreamTableEnvironment
Type Parameters: T - The type of the output row.
Parameters:
name - The name under which the function is registered.
tableFunction - The TableFunction to register.

public <T,ACC> void registerFunction(String name, AggregateFunction<T,ACC> aggregateFunction)
Description copied from interface: StreamTableEnvironment
Registers an AggregateFunction under a unique name in the TableEnvironment's catalog. Registered functions can be referenced in Table API and SQL queries.
Specified by: registerFunction in interface StreamTableEnvironment
Type Parameters:
T - The type of the output value.
ACC - The type of aggregate accumulator.
Parameters:
name - The name under which the function is registered.
aggregateFunction - The AggregateFunction to register.

public <T,ACC> void registerFunction(String name, TableAggregateFunction<T,ACC> tableAggregateFunction)
Description copied from interface: StreamTableEnvironment
Registers a TableAggregateFunction under a unique name in the TableEnvironment's catalog. Registered functions can only be referenced in Table API.
Specified by: registerFunction in interface StreamTableEnvironment
Type Parameters:
T - The type of the output value.
ACC - The type of aggregate accumulator.
Parameters:
name - The name under which the function is registered.
tableAggregateFunction - The TableAggregateFunction to register.

public <T> Table fromDataStream(DataStream<T> dataStream)
Description copied from interface: StreamTableEnvironment
Converts the given DataStream into a Table.
Column names and types of the Table are automatically derived from the TypeInformation of the DataStream. If the outermost record's TypeInformation is a CompositeType, it will be flattened in the first level. TypeInformation that cannot be represented as one of the listed DataTypes will be treated as a black-box DataTypes.RAW(Class, TypeSerializer) type. Thus, composite nested fields will not be accessible.
Since the DataStream API does not support changelog processing natively, this method assumes append-only/insert-only semantics during the stream-to-table conversion. Records of type Row must describe RowKind.INSERT changes.
By default, the stream record's timestamp and watermarks are not propagated unless explicitly declared via StreamTableEnvironment.fromDataStream(DataStream, Schema).
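For example, a sketch in the style of the other examples on this page (the input stream is hypothetical):

```java
DataStream<Tuple2<String, Long>> stream = ...

// derives the columns (f0 STRING, f1 BIGINT) from the TypeInformation of Tuple2;
// records are treated as insert-only changes
Table table = tableEnv.fromDataStream(stream);
```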
Specified by: fromDataStream in interface StreamTableEnvironment
Type Parameters: T - The external type of the DataStream.
Parameters: dataStream - The DataStream to be converted.
Returns: The converted Table.

public <T> Table fromDataStream(DataStream<T> dataStream, Schema schema)
Description copied from interface: StreamTableEnvironment
Converts the given DataStream into a Table.
Column names and types of the Table are automatically derived from the TypeInformation of the DataStream. If the outermost record's TypeInformation is a CompositeType, it will be flattened in the first level. TypeInformation that cannot be represented as one of the listed DataTypes will be treated as a black-box DataTypes.RAW(Class, TypeSerializer) type. Thus, composite nested fields will not be accessible.
Since the DataStream API does not support changelog processing natively, this method assumes append-only/insert-only semantics during the stream-to-table conversion. Records of class Row must describe RowKind.INSERT changes.
By default, the stream record's timestamp and watermarks are not propagated unless explicitly declared.
This method allows declaring a Schema for the resulting table. The declaration is similar to a CREATE TABLE DDL in SQL and makes it possible to:
- enrich or overwrite automatically derived columns with a custom DataType
- access a stream record's timestamp from the DataStream
- declare watermarks
It is possible to declare a schema without physical/regular columns. In this case, those columns will be automatically derived and implicitly put at the beginning of the schema declaration.
The following examples illustrate common schema declarations and their semantics:

```java
// given a DataStream of Tuple2<String, BigDecimal>

// === EXAMPLE 1 ===
// no physical columns defined, they will be derived automatically,
// e.g. BigDecimal becomes DECIMAL(38, 18)
Schema.newBuilder()
    .columnByExpression("c1", "f1 + 42")
    .columnByExpression("c2", "f1 - 1")
    .build()
// equal to: CREATE TABLE (f0 STRING, f1 DECIMAL(38, 18), c1 AS f1 + 42, c2 AS f1 - 1)

// === EXAMPLE 2 ===
// physical columns defined, input fields and columns will be mapped by name,
// columns are reordered and their data type overwritten,
// all columns must be defined to show up in the final table's schema
Schema.newBuilder()
    .column("f1", "DECIMAL(10, 2)")
    .columnByExpression("c", "f1 - 1")
    .column("f0", "STRING")
    .build()
// equal to: CREATE TABLE (f1 DECIMAL(10, 2), c AS f1 - 1, f0 STRING)

// === EXAMPLE 3 ===
// timestamp and watermarks can be added from the DataStream API,
// physical columns will be derived automatically
Schema.newBuilder()
    .columnByMetadata("rowtime", "TIMESTAMP(3)") // extract timestamp into a table column
    .watermark("rowtime", "SOURCE_WATERMARK()")  // declare watermarks propagation
    .build()
// equal to:
// CREATE TABLE (
//   f0 STRING,
//   f1 DECIMAL(38, 18),
//   rowtime TIMESTAMP(3) METADATA,
//   WATERMARK FOR rowtime AS SOURCE_WATERMARK()
// )
```
Specified by: fromDataStream in interface StreamTableEnvironment
Type Parameters: T - The external type of the DataStream.
Parameters:
dataStream - The DataStream to be converted.
schema - customized schema for the final table.
Returns: The converted Table.

public <T> void createTemporaryView(String path, DataStream<T> dataStream)
Description copied from interface: StreamTableEnvironment
Creates a view from the given DataStream in a given path. Registered views can be referenced in SQL queries.
See StreamTableEnvironment.fromDataStream(DataStream) for more information on how a DataStream is translated into a table.
Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
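A registered view can then be queried with SQL. A brief sketch; the path "InputTable" is illustrative:

```java
tableEnv.createTemporaryView("InputTable", dataStream);

// the view is now visible to SQL in the current catalog and database
Table result = tableEnv.sqlQuery("SELECT UPPER(f0) FROM InputTable");
```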
Specified by: createTemporaryView in interface StreamTableEnvironment
Type Parameters: T - The type of the DataStream.
Parameters:
path - The path under which the DataStream is created. See also the TableEnvironment class description for the format of the path.
dataStream - The DataStream out of which to create the view.

public <T> void createTemporaryView(String path, DataStream<T> dataStream, Schema schema)
Description copied from interface: StreamTableEnvironment
Creates a view from the given DataStream in a given path. Registered views can be referenced in SQL queries.
See StreamTableEnvironment.fromDataStream(DataStream, Schema) for more information on how a DataStream is translated into a table.
Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
Specified by: createTemporaryView in interface StreamTableEnvironment
Type Parameters: T - The type of the DataStream.
Parameters:
path - The path under which the DataStream is created. See also the TableEnvironment class description for the format of the path.
dataStream - The DataStream out of which to create the view.
schema - customized schema for the final table.

public DataStream<Row> toDataStream(Table table)
Description copied from interface: StreamTableEnvironment
Converts the given Table into a DataStream.
Since the DataStream API does not support changelog processing natively, this method assumes append-only/insert-only semantics during the table-to-stream conversion. The records of class Row will always describe RowKind.INSERT changes. Updating tables are not supported by this method and will produce an exception.
If you want to convert the Table to a specific class or data type, use StreamTableEnvironment.toDataStream(Table, Class) or StreamTableEnvironment.toDataStream(Table, AbstractDataType) instead.
Note that the type system of the table ecosystem is richer than the one of the DataStream API. The table runtime will make sure to properly serialize the output records to the first operator of the DataStream API. Afterwards, the Types semantics of the DataStream API need to be considered.
If the input table contains a single rowtime column, it will be propagated into a stream record's timestamp. Watermarks will be propagated as well.
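To make the insert-only contract concrete (a sketch; the view name "InputTable" is illustrative): a plain projection stays insert-only and converts fine, while a non-windowed aggregation produces updates and makes this method throw; such updating tables can be consumed with toRetractStream instead:

```java
// insert-only result: conversion succeeds
DataStream<Row> ok = tableEnv.toDataStream(
    tableEnv.sqlQuery("SELECT f0 FROM InputTable"));

// updating result: toDataStream(...) would throw here; use toRetractStream instead
Table updating = tableEnv.sqlQuery("SELECT f0, COUNT(*) FROM InputTable GROUP BY f0");
DataStream<Tuple2<Boolean, Row>> changes = tableEnv.toRetractStream(updating, Row.class);
```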
Specified by: toDataStream in interface StreamTableEnvironment
Parameters: table - The Table to convert.
Returns: The converted DataStream.
See Also: StreamTableEnvironment.toDataStream(Table, AbstractDataType)

public <T> DataStream<T> toDataStream(Table table, Class<T> targetClass)
Description copied from interface: StreamTableEnvironment
Converts the given Table into a DataStream of the given Class.
See StreamTableEnvironment.toDataStream(Table, AbstractDataType) for more information on how a Table is translated into a DataStream.
This method is a shortcut for:
tableEnv.toDataStream(table, DataTypes.of(class))
Calling this method with a class of Row will redirect to StreamTableEnvironment.toDataStream(Table).
Specified by: toDataStream in interface StreamTableEnvironment
Type Parameters: T - External record.
Parameters:
table - The Table to convert.
targetClass - The Class that decides about the final external representation in DataStream records.
Returns: The converted DataStream.

public <T> DataStream<T> toDataStream(Table table, AbstractDataType<?> targetDataType)
Description copied from interface: StreamTableEnvironment
Converts the given Table into a DataStream of the given DataType.
The given DataType is used to configure the table runtime to convert columns and internal data structures to the desired representation. The following example shows how to convert the table columns into the fields of a POJO type.

```java
// given a Table of (name STRING, age INT)

public static class MyPojo {
    public String name;
    public Integer age;

    // default constructor for DataStream API
    public MyPojo() {}

    // fully assigning constructor for field order in Table API
    public MyPojo(String name, Integer age) {
        this.name = name;
        this.age = age;
    }
}

tableEnv.toDataStream(table, DataTypes.of(MyPojo.class));
```

Since the DataStream API does not support changelog processing natively, this method assumes append-only/insert-only semantics during the table-to-stream conversion. Updating tables are not supported by this method and will produce an exception.
Note that the type system of the table ecosystem is richer than the one of the DataStream API. The table runtime will make sure to properly serialize the output records to the first operator of the DataStream API. Afterwards, the Types semantics of the DataStream API need to be considered.
If the input table contains a single rowtime column, it will be propagated into a stream record's timestamp. Watermarks will be propagated as well.
Specified by: toDataStream in interface StreamTableEnvironment
Type Parameters: T - External record.
Parameters:
table - The Table to convert.
targetDataType - The DataType that decides about the final external representation in DataStream records.
Returns: The converted DataStream.
See Also: StreamTableEnvironment.toDataStream(Table)
public <T> Table fromDataStream(DataStream<T> dataStream, String fields)
Description copied from interface: StreamTableEnvironment
Converts the given DataStream into a Table with specified field names.
There are two modes for mapping original fields to the fields of the Table:
1. Reference input fields by name: All fields in the schema definition are referenced by name (and possibly renamed using an alias as). Moreover, we can define proctime and rowtime attributes at arbitrary positions using arbitrary names (except those that exist in the result schema). In this mode, fields can be reordered and projected out. This mode can be used for any input type, including POJOs.
Example:

```java
DataStream<Tuple2<String, Long>> stream = ...

// reorder the fields, rename the original 'f0' field to 'name' and add event-time
// attribute named 'rowtime'
Table table = tableEnv.fromDataStream(stream, "f1, rowtime.rowtime, f0 as 'name'");
```

2. Reference input fields by position: In this mode, fields are simply renamed. Event-time attributes can replace the field on their position in the input data (if it is of correct type) or be appended at the end. Proctime attributes must be appended at the end. This mode can only be used if the input type has a defined field order (tuple, case class, Row) and none of the fields references a field of the input type.
Example:

```java
DataStream<Tuple2<String, Long>> stream = ...

// rename the original fields to 'a' and 'b' and extract the internally attached
// timestamp into an event-time attribute named 'rowtime'
Table table = tableEnv.fromDataStream(stream, "a, b, rowtime.rowtime");
```

Specified by: fromDataStream in interface StreamTableEnvironment
Type Parameters: T - The type of the DataStream.
Parameters:
dataStream - The DataStream to be converted.
fields - The fields expressions to map original fields of the DataStream to the fields of the Table.
Returns: The converted Table.

public <T> Table fromDataStream(DataStream<T> dataStream, Expression... fields)
Description copied from interface: StreamTableEnvironment
Converts the given DataStream into a Table with specified field names.
There are two modes for mapping original fields to the fields of the Table:
1. Reference input fields by name: All fields in the schema definition are referenced by name (and possibly renamed using an alias as). Moreover, we can define proctime and rowtime attributes at arbitrary positions using arbitrary names (except those that exist in the result schema). In this mode, fields can be reordered and projected out. This mode can be used for any input type, including POJOs.
Example:

```java
DataStream<Tuple2<String, Long>> stream = ...

Table table = tableEnv.fromDataStream(
    stream,
    $("f1"),                // reorder and use the original field
    $("rowtime").rowtime(), // extract the internally attached timestamp
                            // into an event-time attribute named 'rowtime'
    $("f0").as("name")      // reorder and give the original field a better name
);
```

2. Reference input fields by position: In this mode, fields are simply renamed. Event-time attributes can replace the field on their position in the input data (if it is of correct type) or be appended at the end. Proctime attributes must be appended at the end. This mode can only be used if the input type has a defined field order (tuple, case class, Row) and none of the fields references a field of the input type.
Example:

```java
DataStream<Tuple2<String, Long>> stream = ...

Table table = tableEnv.fromDataStream(
    stream,
    $("a"),                 // rename the first field to 'a'
    $("b"),                 // rename the second field to 'b'
    $("rowtime").rowtime()  // extract the internally attached timestamp
                            // into an event-time attribute named 'rowtime'
);
```

Specified by: fromDataStream in interface StreamTableEnvironment
Type Parameters: T - The type of the DataStream.
Parameters:
dataStream - The DataStream to be converted.
fields - The fields expressions to map original fields of the DataStream to the fields of the Table.
Returns: The converted Table.

public <T> void registerDataStream(String name, DataStream<T> dataStream)
Description copied from interface: StreamTableEnvironment
Creates a view from the given DataStream. Registered views can be referenced in SQL queries.
The field names of the Table are automatically derived from the type of the DataStream.
The view is registered in the namespace of the current catalog and database. To register the view in a different catalog use StreamTableEnvironment.createTemporaryView(String, DataStream).
Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
Specified by: registerDataStream in interface StreamTableEnvironment
Type Parameters: T - The type of the DataStream to register.
Parameters:
name - The name under which the DataStream is registered in the catalog.
dataStream - The DataStream to register.

public <T> void registerDataStream(String name, DataStream<T> dataStream, String fields)
Description copied from interface: StreamTableEnvironment
Creates a view from the given DataStream in a given path with specified field names. Registered views can be referenced in SQL queries.
There are two modes for mapping original fields to the fields of the View:
1. Reference input fields by name: All fields in the schema definition are referenced by name (and possibly renamed using an alias as). Moreover, we can define proctime and rowtime attributes at arbitrary positions using arbitrary names (except those that exist in the result schema). In this mode, fields can be reordered and projected out. This mode can be used for any input type, including POJOs.
Example:

```java
DataStream<Tuple2<String, Long>> stream = ...

// reorder the fields, rename the original 'f0' field to 'name' and add event-time
// attribute named 'rowtime'
tableEnv.registerDataStream("myTable", stream, "f1, rowtime.rowtime, f0 as 'name'");
```

2. Reference input fields by position: In this mode, fields are simply renamed. Event-time attributes can replace the field on their position in the input data (if it is of correct type) or be appended at the end. Proctime attributes must be appended at the end. This mode can only be used if the input type has a defined field order (tuple, case class, Row) and none of the fields references a field of the input type.
Example:

```java
DataStream<Tuple2<String, Long>> stream = ...

// rename the original fields to 'a' and 'b' and extract the internally attached
// timestamp into an event-time attribute named 'rowtime'
tableEnv.registerDataStream("myTable", stream, "a, b, rowtime.rowtime");
```

The view is registered in the namespace of the current catalog and database. To register the view in a different catalog use StreamTableEnvironment.createTemporaryView(String, DataStream).
Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
Specified by: registerDataStream in interface StreamTableEnvironment
Type Parameters: T - The type of the DataStream to register.
Parameters:
name - The name under which the DataStream is registered in the catalog.
dataStream - The DataStream to register.
fields - The fields expressions to map original fields of the DataStream to the fields of the View.

public <T> void createTemporaryView(String path, DataStream<T> dataStream, String fields)
Description copied from interface: StreamTableEnvironment
Creates a view from the given DataStream in a given path with specified field names. Registered views can be referenced in SQL queries.
There are two modes for mapping original fields to the fields of the View:
1. Reference input fields by name: All fields in the schema definition are referenced by name (and possibly renamed using an alias as). Moreover, we can define proctime and rowtime attributes at arbitrary positions using arbitrary names (except those that exist in the result schema). In this mode, fields can be reordered and projected out. This mode can be used for any input type, including POJOs.
Example:

```java
DataStream<Tuple2<String, Long>> stream = ...

// reorder the fields, rename the original 'f0' field to 'name' and add event-time
// attribute named 'rowtime'
tableEnv.createTemporaryView("cat.db.myTable", stream, "f1, rowtime.rowtime, f0 as 'name'");
```

2. Reference input fields by position: In this mode, fields are simply renamed. Event-time attributes can replace the field on their position in the input data (if it is of correct type) or be appended at the end. Proctime attributes must be appended at the end. This mode can only be used if the input type has a defined field order (tuple, case class, Row) and none of the fields references a field of the input type.
Example:

```java
DataStream<Tuple2<String, Long>> stream = ...

// rename the original fields to 'a' and 'b' and extract the internally attached
// timestamp into an event-time attribute named 'rowtime'
tableEnv.createTemporaryView("cat.db.myTable", stream, "a, b, rowtime.rowtime");
```

Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
Specified by: createTemporaryView in interface StreamTableEnvironment
Type Parameters: T - The type of the DataStream.
Parameters:
path - The path under which the DataStream is created. See also the TableEnvironment class description for the format of the path.
dataStream - The DataStream out of which to create the view.
fields - The fields expressions to map original fields of the DataStream to the fields of the View.

public <T> void createTemporaryView(String path, DataStream<T> dataStream, Expression... fields)
Description copied from interface: StreamTableEnvironment
Creates a view from the given DataStream in a given path with specified field names. Registered views can be referenced in SQL queries.
There are two modes for mapping original fields to the fields of the View:
1. Reference input fields by name: All fields in the schema definition are referenced by name (and possibly renamed using an alias as). Moreover, we can define proctime and rowtime attributes at arbitrary positions using arbitrary names (except those that exist in the result schema). In this mode, fields can be reordered and projected out. This mode can be used for any input type, including POJOs.
Example:

```java
DataStream<Tuple2<String, Long>> stream = ...

tableEnv.createTemporaryView(
    "cat.db.myTable",
    stream,
    $("f1"),                // reorder and use the original field
    $("rowtime").rowtime(), // extract the internally attached timestamp
                            // into an event-time attribute named 'rowtime'
    $("f0").as("name")      // reorder and give the original field a better name
);
```

2. Reference input fields by position: In this mode, fields are simply renamed. Event-time attributes can replace the field on their position in the input data (if it is of correct type) or be appended at the end. Proctime attributes must be appended at the end. This mode can only be used if the input type has a defined field order (tuple, case class, Row) and none of the fields references a field of the input type.
Example:

```java
DataStream<Tuple2<String, Long>> stream = ...

tableEnv.createTemporaryView(
    "cat.db.myTable",
    stream,
    $("a"),                 // rename the first field to 'a'
    $("b"),                 // rename the second field to 'b'
    $("rowtime").rowtime()  // adds an event-time attribute named 'rowtime'
);
```

Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
Specified by: createTemporaryView in interface StreamTableEnvironment
Type Parameters: T - The type of the DataStream.
Parameters:
path - The path under which the DataStream is created. See also the TableEnvironment class description for the format of the path.
dataStream - The DataStream out of which to create the view.
fields - The fields expressions to map original fields of the DataStream to the fields of the View.

protected QueryOperation qualifyQueryOperation(ObjectIdentifier identifier, QueryOperation queryOperation)
Description copied from class: TableEnvironmentImpl
Subclasses can override this method to transform the given QueryOperation to a new one with the qualified object identifier. The identifier may not be known when the operation is created, e.g. via fromDataStream(DataStream). But the identifier is required when converting this QueryOperation to RelNode.
Overrides: qualifyQueryOperation in class TableEnvironmentImpl
public <T> DataStream<T> toAppendStream(Table table, Class<T> clazz)
Description copied from interface: StreamTableEnvironment
Converts the given Table into an append DataStream of a specified type.
The Table must only have insert (append) changes. If the Table is also modified by update or delete changes, the conversion will fail.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position, field types must match.
- POJO DataStream types: Fields are mapped by field name, field types must match.
Specified by: toAppendStream in interface StreamTableEnvironment
Type Parameters: T - The type of the resulting DataStream.
Parameters:
table - The Table to convert.
clazz - The class of the type of the resulting DataStream.
Returns: The converted DataStream.

public <T> DataStream<T> toAppendStream(Table table, TypeInformation<T> typeInfo)
Description copied from interface: StreamTableEnvironment
Converts the given Table into an append DataStream of a specified type.
The Table must only have insert (append) changes. If the Table is also modified by update or delete changes, the conversion will fail.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position, field types must match.
- POJO DataStream types: Fields are mapped by field name, field types must match.
Specified by: toAppendStream in interface StreamTableEnvironment
Type Parameters: T - The type of the resulting DataStream.
Parameters:
table - The Table to convert.
typeInfo - The TypeInformation that specifies the type of the DataStream.
Returns: The converted DataStream.

public <T> DataStream<Tuple2<Boolean,T>> toRetractStream(Table table, Class<T> clazz)
Description copied from interface: StreamTableEnvironment
Converts the given Table into a DataStream of add and retract messages. The message will be encoded as Tuple2. The first field is a Boolean flag, the second field holds the record of the specified type T.
A true Boolean flag indicates an add message, a false flag indicates a retract message.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position, field types must match.
- POJO DataStream types: Fields are mapped by field name, field types must match.
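The add/retract encoding can be consumed by keeping a per-record count. The following plain-Java sketch models the Tuple2<Boolean, T> messages with java.util's Map.Entry standing in for Flink's Tuple2, so the bookkeeping is runnable without a Flink cluster (class and record values are illustrative):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class RetractDemo {
    // applies a stream of (isAdd, record) messages to a per-record count
    static Map<String, Integer> apply(List<Map.Entry<Boolean, String>> messages) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<Boolean, String> msg : messages) {
            // true flag = add message, false flag = retract message
            int delta = msg.getKey() ? 1 : -1;
            counts.merge(msg.getValue(), delta, Integer::sum);
            if (counts.get(msg.getValue()) == 0) {
                counts.remove(msg.getValue()); // fully retracted
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<Map.Entry<Boolean, String>> messages = List.of(
                new SimpleEntry<>(true, "alice"),  // add
                new SimpleEntry<>(true, "bob"),    // add
                new SimpleEntry<>(false, "alice"), // retract the earlier add
                new SimpleEntry<>(true, "alice")); // re-add (e.g. an updated row)
        System.out.println(apply(messages)); // prints {alice=1, bob=1}
    }
}
```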
Specified by: toRetractStream in interface StreamTableEnvironment
Type Parameters: T - The type of the requested record type.
Parameters:
table - The Table to convert.
clazz - The class of the requested record type.
Returns: The converted DataStream.

public <T> DataStream<Tuple2<Boolean,T>> toRetractStream(Table table, TypeInformation<T> typeInfo)
Description copied from interface: StreamTableEnvironment
Converts the given Table into a DataStream of add and retract messages. The message will be encoded as Tuple2. The first field is a Boolean flag, the second field holds the record of the specified type T.
A true Boolean flag indicates an add message, a false flag indicates a retract message.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position, field types must match.
- POJO DataStream types: Fields are mapped by field name, field types must match.
Specified by: toRetractStream in interface StreamTableEnvironment
Type Parameters: T - The type of the requested record type.
Parameters:
table - The Table to convert.
typeInfo - The TypeInformation of the requested record type.
Returns: The converted DataStream.

public StreamTableDescriptor connect(ConnectorDescriptor connectorDescriptor)
Description copied from interface: TableEnvironment
Creates a temporary table from a descriptor.
Descriptors allow for declaring the communication to external systems in an implementation-agnostic way. The classpath is scanned for suitable table factories that match the desired configuration.
The following example shows how to read from a connector using a JSON format and register a temporary table as "MyTable":

```java
tableEnv
    .connect(
        new ExternalSystemXYZ()
            .version("0.11"))
    .withFormat(
        new Json()
            .jsonSchema("{...}")
            .failOnMissingField(false))
    .withSchema(
        new Schema()
            .field("user-name", "VARCHAR").from("u_name")
            .field("count", "DECIMAL"))
    .createTemporaryTable("MyTable");
```

Specified by: connect in interface StreamTableEnvironment
Specified by: connect in interface TableEnvironment
Overrides: connect in class TableEnvironmentImpl
Parameters: connectorDescriptor - connector descriptor describing the external system

@Internal public StreamExecutionEnvironment execEnv()
public Pipeline getPipeline(String jobName)

protected void validateTableSource(TableSource<?> tableSource)
Description copied from class: TableEnvironmentImpl
Subclasses can override this method to add additional checks.
Overrides: validateTableSource in class TableEnvironmentImpl
Parameters: tableSource - tableSource to validate

Copyright © 2014–2021 The Apache Software Foundation. All rights reserved.