Modifier and Type | Method and Description
---|---
`TableSchema` | `HBaseUpsertTableSink.getTableSchema()`
`TableSchema` | `HBaseTableSource.getTableSchema()`

Modifier and Type | Method and Description
---|---
`TableSchema` | `JDBCTableSource.getTableSchema()`
Modifier and Type | Method and Description
---|---
`JDBCTableSource.Builder` | `JDBCTableSource.Builder.setSchema(TableSchema schema)` Required. Sets the table schema of this table source.
`JDBCUpsertTableSink.Builder` | `JDBCUpsertTableSink.Builder.setTableSchema(TableSchema schema)` Required. Sets the table schema of this table sink.
Modifier and Type | Method and Description
---|---
`TableSchema` | `HiveTableSource.getTableSchema()`
`TableSchema` | `HiveTableSink.getTableSchema()`

Modifier and Type | Method and Description
---|---
`TableSchema` | `ParquetTableSource.getTableSchema()`

Modifier and Type | Method and Description
---|---
`TableSchema` | `OrcTableSource.getTableSchema()`

Modifier and Type | Method and Description
---|---
`TableSchema` | `StreamSQLTestProgram.GeneratorTableSource.getTableSchema()`
`TableSchema` | `BatchSQLTestProgram.GeneratorTableSource.getTableSchema()`
Modifier and Type | Method and Description
---|---
`protected abstract ElasticsearchUpsertTableSinkBase` | `ElasticsearchUpsertTableSinkBase.copy(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions, ElasticsearchUpsertTableSinkBase.RequestFactory requestFactory)`
`protected abstract ElasticsearchUpsertTableSinkBase` | `ElasticsearchUpsertTableSinkFactoryBase.createElasticsearchUpsertTableSink(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions)`

Constructor and Description
---
`ElasticsearchUpsertTableSinkBase(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions, ElasticsearchUpsertTableSinkBase.RequestFactory requestFactory)`

Modifier and Type | Method and Description
---|---
`protected ElasticsearchUpsertTableSinkBase` | `Elasticsearch6UpsertTableSink.copy(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions, ElasticsearchUpsertTableSinkBase.RequestFactory requestFactory)`
`protected ElasticsearchUpsertTableSinkBase` | `Elasticsearch6UpsertTableSinkFactory.createElasticsearchUpsertTableSink(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions)`

Constructor and Description
---
`Elasticsearch6UpsertTableSink(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions)`
Modifier and Type | Method and Description
---|---
`TableSchema` | `KafkaTableSourceBase.getTableSchema()`

Modifier and Type | Method and Description
---|---
`protected KafkaTableSinkBase` | `Kafka08TableSourceSinkFactory.createKafkaTableSink(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema)`
`protected KafkaTableSinkBase` | `KafkaTableSourceSinkFactory.createKafkaTableSink(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema)`
`protected KafkaTableSinkBase` | `Kafka011TableSourceSinkFactory.createKafkaTableSink(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema)`
`protected KafkaTableSinkBase` | `Kafka010TableSourceSinkFactory.createKafkaTableSink(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema)`
`protected KafkaTableSinkBase` | `Kafka09TableSourceSinkFactory.createKafkaTableSink(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema)`
`protected abstract KafkaTableSinkBase` | `KafkaTableSourceSinkFactoryBase.createKafkaTableSink(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema)` Constructs the version-specific Kafka table sink.
`protected KafkaTableSourceBase` | `Kafka08TableSourceSinkFactory.createKafkaTableSource(TableSchema schema, Optional<String> proctimeAttribute, List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors, Map<String,String> fieldMapping, String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets)`
`protected KafkaTableSourceBase` | `KafkaTableSourceSinkFactory.createKafkaTableSource(TableSchema schema, Optional<String> proctimeAttribute, List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors, Map<String,String> fieldMapping, String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets)`
`protected KafkaTableSourceBase` | `Kafka011TableSourceSinkFactory.createKafkaTableSource(TableSchema schema, Optional<String> proctimeAttribute, List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors, Map<String,String> fieldMapping, String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets)`
`protected KafkaTableSourceBase` | `Kafka010TableSourceSinkFactory.createKafkaTableSource(TableSchema schema, Optional<String> proctimeAttribute, List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors, Map<String,String> fieldMapping, String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets)`
`protected KafkaTableSourceBase` | `Kafka09TableSourceSinkFactory.createKafkaTableSource(TableSchema schema, Optional<String> proctimeAttribute, List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors, Map<String,String> fieldMapping, String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets)`
`protected abstract KafkaTableSourceBase` | `KafkaTableSourceSinkFactoryBase.createKafkaTableSource(TableSchema schema, Optional<String> proctimeAttribute, List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors, Map<String,String> fieldMapping, String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets)` Constructs the version-specific Kafka table source.
Modifier and Type | Method and Description
---|---
`TableSchema` | `TableSchema.Builder.build()` Returns a `TableSchema` instance.
`TableSchema` | `TableSchema.copy()` Returns a deep copy of the table schema.
`static TableSchema` | `TableSchema.fromTypeInfo(TypeInformation<?> typeInfo)` Deprecated. This method will be removed soon. Use `DataTypes` to declare types.
`TableSchema` | `Table.getSchema()` Returns the schema of this table.

Modifier and Type | Method and Description
---|---
`TableSchema` | `TableImpl.getSchema()`
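The builder and `copy()` calls listed above can be sketched as follows. This is a minimal, illustrative example: the field names are invented, and the schema is declared with `DataTypes` as recommended by the deprecation note on `fromTypeInfo(...)`.

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;

public class TableSchemaExample {
    public static void main(String[] args) {
        // Build a schema field by field; build() returns the
        // immutable TableSchema instance.
        TableSchema schema = TableSchema.builder()
            .field("user_id", DataTypes.BIGINT())
            .field("user_name", DataTypes.STRING())
            .build();

        // copy() returns a deep copy, i.e. a distinct instance
        // describing the same fields.
        TableSchema copy = schema.copy();
        System.out.println(copy.getFieldNames().length); // 2
    }
}
```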
Modifier and Type | Method and Description
---|---
`TableSchema` | `AbstractCatalogTable.getSchema()`
`TableSchema` | `AbstractCatalogView.getSchema()`
`TableSchema` | `CatalogBaseTable.getSchema()` Get the schema of the table.
`TableSchema` | `CatalogManager.ResolvedTable.getTableSchema()`

Constructor and Description
---
`AbstractCatalogTable(TableSchema tableSchema, List<String> partitionKeys, Map<String,String> properties, String comment)`
`AbstractCatalogTable(TableSchema tableSchema, Map<String,String> properties, String comment)`
`AbstractCatalogView(String originalQuery, String expandedQuery, TableSchema schema, Map<String,String> properties, String comment)`
`CatalogTableBuilder(ConnectorDescriptor connectorDescriptor, TableSchema tableSchema)`
`CatalogTableImpl(TableSchema tableSchema, List<String> partitionKeys, Map<String,String> properties, String comment)`
`CatalogTableImpl(TableSchema tableSchema, Map<String,String> properties, String comment)`
`CatalogViewImpl(String originalQuery, String expandedQuery, TableSchema schema, Map<String,String> properties, String comment)`
`ConnectorCatalogTable(TableSource<T1> tableSource, TableSink<T2> tableSink, TableSchema tableSchema, boolean isBatch)`
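A sketch of the `CatalogTableImpl(TableSchema, Map, String)` constructor listed above. The property key and comment are illustrative values, not taken from this listing.

```java
import java.util.Collections;

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.catalog.CatalogTableImpl;

public class CatalogTableExample {
    public static void main(String[] args) {
        TableSchema schema = TableSchema.builder()
            .field("id", DataTypes.INT())
            .build();

        // Wrap the schema, properties, and a comment into a catalog
        // table; getSchema() (from CatalogBaseTable) returns the
        // schema back.
        CatalogTableImpl table = new CatalogTableImpl(
            schema,
            Collections.singletonMap("connector.type", "filesystem"),
            "an example table");

        System.out.println(table.getSchema().getFieldCount()); // 1
    }
}
```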
Modifier and Type | Method and Description
---|---
`static TableSchema` | `HiveTableUtil.createTableSchema(List<org.apache.hadoop.hive.metastore.api.FieldSchema> cols, List<org.apache.hadoop.hive.metastore.api.FieldSchema> partitionKeys)` Creates a Flink `TableSchema` from a Hive table's columns and partition keys.

Modifier and Type | Method and Description
---|---
`static List<org.apache.hadoop.hive.metastore.api.FieldSchema>` | `HiveTableUtil.createHiveColumns(TableSchema schema)` Creates Hive columns from a Flink `TableSchema`.
Modifier and Type | Method and Description
---|---
`TableSchema` | `ResultDescriptor.getResultSchema()`
`TableSchema` | `Executor.getTableSchema(SessionContext session, String name)` Returns the schema of a table.

Constructor and Description
---
`ResultDescriptor(String resultId, TableSchema resultSchema, boolean isMaterialized)`
Modifier and Type | Method and Description
---|---
`TableSchema` | `CollectBatchTableSink.getTableSchema()`
`TableSchema` | `CollectStreamTableSink.getTableSchema()`
`TableSchema` | `LocalExecutor.getTableSchema(SessionContext session, String name)`

Modifier and Type | Method and Description
---|---
`<T> DynamicResult<T>` | `ResultStore.createResult(Environment env, TableSchema schema, ExecutionConfig config)` Creates a result.

Constructor and Description
---
`CollectBatchTableSink(String accumulatorName, TypeSerializer<Row> serializer, TableSchema tableSchema)`
`CollectStreamTableSink(InetAddress targetAddress, int targetPort, TypeSerializer<Tuple2<Boolean,Row>> serializer, TableSchema tableSchema)`
Constructor and Description
---
`ChangelogCollectStreamResult(RowTypeInfo outputType, TableSchema tableSchema, ExecutionConfig config, InetAddress gatewayAddress, int gatewayPort)`
`CollectStreamResult(RowTypeInfo outputType, TableSchema tableSchema, ExecutionConfig config, InetAddress gatewayAddress, int gatewayPort)`
`MaterializedCollectBatchResult(TableSchema tableSchema, RowTypeInfo outputType, ExecutionConfig config)`
`MaterializedCollectStreamResult(RowTypeInfo outputType, TableSchema tableSchema, ExecutionConfig config, InetAddress gatewayAddress, int gatewayPort, int maxRowCount)`
`MaterializedCollectStreamResult(RowTypeInfo outputType, TableSchema tableSchema, ExecutionConfig config, InetAddress gatewayAddress, int gatewayPort, int maxRowCount, int overcommitThreshold)`
Modifier and Type | Method and Description
---|---
`static TableSchema` | `SchemaValidator.deriveTableSinkSchema(DescriptorProperties properties)` Deprecated. This method combines two separate concepts of table schema and field mapping. This should be split into two methods once we have support for the corresponding interfaces (see FLINK-9870).
`TableSchema` | `DescriptorProperties.getTableSchema(String key)` Returns a table schema under the given existing key.

Modifier and Type | Method and Description
---|---
`Optional<TableSchema>` | `DescriptorProperties.getOptionalTableSchema(String key)` Returns a table schema under the given key if it exists.
Modifier and Type | Method and Description
---|---
`void` | `DescriptorProperties.putTableSchema(String key, TableSchema schema)` Adds a table schema under the given key.
`OldCsv` | `OldCsv.schema(TableSchema schema)` Deprecated. Sets the format schema with field names and types.
`Schema` | `Schema.schema(TableSchema schema)` Sets the schema with field names and types.
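The `putTableSchema`/`getTableSchema` pair above round-trips a schema through string properties. A minimal sketch, assuming the key `"schema"` and the field names are illustrative choices:

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.descriptors.DescriptorProperties;

public class SchemaPropertiesExample {
    public static void main(String[] args) {
        TableSchema schema = TableSchema.builder()
            .field("id", DataTypes.BIGINT())
            .field("name", DataTypes.STRING())
            .build();

        // Serialize the schema into string properties under the
        // given key, then read it back from the same key.
        DescriptorProperties properties = new DescriptorProperties();
        properties.putTableSchema("schema", schema);
        TableSchema restored = properties.getTableSchema("schema");

        System.out.println(String.join(",", restored.getFieldNames())); // id,name
    }
}
```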
Modifier and Type | Method and Description
---|---
`static TableSchema` | `TableFormatFactoryBase.deriveSchema(Map<String,String> properties)` Finds the table schema that can be used for a format schema (without time attributes).
Modifier and Type | Method and Description
---|---
`TableSchema` | `ScalaDataStreamQueryOperation.getTableSchema()`
`TableSchema` | `JavaDataStreamQueryOperation.getTableSchema()`
`TableSchema` | `DataSetQueryOperation.getTableSchema()`
`TableSchema` | `DistinctQueryOperation.getTableSchema()`
`TableSchema` | `FilterQueryOperation.getTableSchema()`
`TableSchema` | `SortQueryOperation.getTableSchema()`
`TableSchema` | `TableSourceQueryOperation.getTableSchema()`
`TableSchema` | `JoinQueryOperation.getTableSchema()`
`TableSchema` | `CatalogQueryOperation.getTableSchema()`
`TableSchema` | `WindowAggregateQueryOperation.getTableSchema()`
`TableSchema` | `ProjectQueryOperation.getTableSchema()`
`TableSchema` | `AggregateQueryOperation.getTableSchema()`
`TableSchema` | `SetQueryOperation.getTableSchema()`
`TableSchema` | `QueryOperation.getTableSchema()` Resolved schema of this operation.
`TableSchema` | `CalculatedQueryOperation.getTableSchema()`

Modifier and Type | Method and Description
---|---
`TableSchema` | `DataStreamQueryOperation.getTableSchema()`
`TableSchema` | `PlannerQueryOperation.getTableSchema()`
Constructor and Description
---
`DataStreamQueryOperation(DataStream<E> dataStream, int[] fieldIndices, TableSchema tableSchema, boolean[] fieldNullables, boolean producesUpdates, boolean isAccRetract, org.apache.flink.table.planner.plan.stats.FlinkStatistic statistic)`
`DataStreamQueryOperation(DataStream<E> dataStream, int[] fieldIndices, TableSchema tableSchema, boolean[] fieldNullables, org.apache.flink.table.planner.plan.stats.FlinkStatistic statistic)`
Modifier and Type | Method and Description
---|---
`default TableSchema` | `TableSink.getTableSchema()` Returns the schema of the consumed table.

Modifier and Type | Method and Description
---|---
`TableSchema` | `CsvTableSource.getTableSchema()`
`TableSchema` | `TableSource.getTableSchema()` Returns the schema of the produced table.

Modifier and Type | Method and Description
---|---
`TableSchema` | `FieldInfoUtils.TypeInfoSchema.toTableSchema()`
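The `TableSource.getTableSchema()` contract above can be exercised with a concrete source such as `CsvTableSource`. A minimal sketch: the file path and field names are placeholder values, and the file does not need to exist just to build the source and inspect its schema.

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.sources.CsvTableSource;

public class SourceSchemaExample {
    public static void main(String[] args) {
        // Declare the CSV layout field by field via the builder.
        CsvTableSource source = CsvTableSource.builder()
            .path("/tmp/users.csv")
            .field("id", Types.LONG)
            .field("name", Types.STRING)
            .build();

        // getTableSchema() returns the schema of the produced table,
        // derived from the declared fields.
        TableSchema schema = source.getTableSchema();
        System.out.println(schema.getFieldCount()); // 2
    }
}
```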
Copyright © 2014–2020 The Apache Software Foundation. All rights reserved.