@Internal
public abstract class KafkaDynamicSourceBase
extends Object
implements ScanTableSource

A version-agnostic Kafka ScanTableSource.

The version-specific Kafka consumers need to extend this class and override createKafkaConsumer(String, Properties, DeserializationSchema).
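The extension contract above follows the template-method pattern: the base class owns the start-position configuration, and subclasses only decide which version-specific consumer to instantiate. A minimal, self-contained sketch of that contract, using illustrative stand-in types (`Consumer`, `SourceBase`, `StartupMode`) rather than the real Flink classes:

```java
import java.util.Properties;

public class TemplateSketch {

    // Stand-in for org.apache.flink.streaming.connectors.kafka.config.StartupMode.
    enum StartupMode { GROUP_OFFSETS, EARLIEST, LATEST }

    // Stand-in for FlinkKafkaConsumerBase<RowData>.
    static class Consumer {
        final String topic;
        StartupMode startMode = StartupMode.GROUP_OFFSETS;
        Consumer(String topic) { this.topic = topic; }
    }

    // Mirrors KafkaDynamicSourceBase: the base class applies the configured
    // startup mode; subclasses only choose the consumer version.
    static abstract class SourceBase {
        protected final StartupMode startupMode;
        SourceBase(StartupMode startupMode) { this.startupMode = startupMode; }

        // Corresponds to the abstract createKafkaConsumer(...).
        protected abstract Consumer createConsumer(String topic, Properties props);

        // Corresponds to getKafkaConsumer(...): create, then configure start position.
        protected Consumer getConsumer(String topic, Properties props) {
            Consumer c = createConsumer(topic, props);
            c.startMode = startupMode; // base class configures the start position
            return c;
        }
    }

    // A "version-specific" subclass only overrides the factory method.
    static class VersionSpecificSource extends SourceBase {
        VersionSpecificSource(StartupMode m) { super(m); }
        @Override
        protected Consumer createConsumer(String topic, Properties props) {
            return new Consumer(topic);
        }
    }
}
```

In the real connector the subclass would return a concrete `FlinkKafkaConsumerBase<RowData>` for its Kafka client version; this sketch only demonstrates the division of responsibility.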
Nested classes/interfaces inherited from interface org.apache.flink.table.connector.source.ScanTableSource: ScanTableSource.ScanContext, ScanTableSource.ScanRuntimeProvider

Nested classes/interfaces inherited from interface org.apache.flink.table.connector.source.DynamicTableSource: DynamicTableSource.Context, DynamicTableSource.DataStructureConverter
Modifier and Type | Field and Description
---|---
protected DecodingFormat<DeserializationSchema<RowData>> | decodingFormat: Scan format for decoding records from Kafka.
protected DataType | outputDataType
protected Properties | properties: Properties for the Kafka consumer.
protected Map<KafkaTopicPartition,Long> | specificStartupOffsets: Specific startup offsets; only relevant when startup mode is StartupMode.SPECIFIC_OFFSETS.
protected StartupMode | startupMode: The startup mode for the contained consumer (default is StartupMode.GROUP_OFFSETS).
protected long | startupTimestampMillis: The start timestamp to locate partition offsets; only relevant when startup mode is StartupMode.TIMESTAMP.
protected String | topic: The Kafka topic to consume.
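When the startup mode is StartupMode.SPECIFIC_OFFSETS, the specificStartupOffsets map carries one starting offset per partition. The sketch below illustrates building such a map; `TopicPartition` is an illustrative stand-in value class, not Flink's KafkaTopicPartition, though it likewise identifies a partition by topic name and partition id:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class OffsetsSketch {

    // Stand-in for org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition.
    static final class TopicPartition {
        final String topic;
        final int partition;
        TopicPartition(String topic, int partition) {
            this.topic = topic;
            this.partition = partition;
        }
        // Value semantics so the map keys compare by topic + partition.
        @Override public boolean equals(Object o) {
            if (!(o instanceof TopicPartition)) return false;
            TopicPartition other = (TopicPartition) o;
            return partition == other.partition && topic.equals(other.topic);
        }
        @Override public int hashCode() { return Objects.hash(topic, partition); }
    }

    // One entry per partition: the offset at which consumption should start.
    // Only consulted when the startup mode is SPECIFIC_OFFSETS.
    static Map<TopicPartition, Long> exampleOffsets() {
        Map<TopicPartition, Long> offsets = new HashMap<>();
        offsets.put(new TopicPartition("orders", 0), 42L);
        offsets.put(new TopicPartition("orders", 1), 1337L);
        return offsets;
    }
}
```

Partitions absent from the map fall back to the consumer's default behavior; the exact fallback is version-specific.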
Modifier | Constructor and Description
---|---
protected | KafkaDynamicSourceBase(DataType outputDataType, String topic, Properties properties, DecodingFormat<DeserializationSchema<RowData>> decodingFormat, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets, long startupTimestampMillis): Creates a generic Kafka StreamTableSource.
Modifier and Type | Method and Description
---|---
protected abstract FlinkKafkaConsumerBase<RowData> | createKafkaConsumer(String topic, Properties properties, DeserializationSchema<RowData> deserializationSchema): Creates a version-specific Kafka consumer.
boolean | equals(Object o)
ChangelogMode | getChangelogMode(): Returns the set of changes that the planner can expect during runtime.
protected FlinkKafkaConsumerBase<RowData> | getKafkaConsumer(String topic, Properties properties, DeserializationSchema<RowData> deserializationSchema): Returns a version-specific Kafka consumer with the start position configured.
ScanTableSource.ScanRuntimeProvider | getScanRuntimeProvider(ScanTableSource.ScanContext runtimeProviderContext): Returns a provider of runtime implementation for reading the data.
int | hashCode()
Methods inherited from class java.lang.Object: clone, finalize, getClass, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.apache.flink.table.connector.source.DynamicTableSource: asSummaryString, copy
protected final DataType outputDataType

protected final DecodingFormat<DeserializationSchema<RowData>> decodingFormat
Scan format for decoding records from Kafka.

protected final String topic
The Kafka topic to consume.

protected final Properties properties
Properties for the Kafka consumer.

protected final StartupMode startupMode
The startup mode for the contained consumer (default is StartupMode.GROUP_OFFSETS).

protected final Map<KafkaTopicPartition,Long> specificStartupOffsets
Specific startup offsets; only relevant when startup mode is StartupMode.SPECIFIC_OFFSETS.

protected final long startupTimestampMillis
The start timestamp to locate partition offsets; only relevant when startup mode is StartupMode.TIMESTAMP.
protected KafkaDynamicSourceBase(DataType outputDataType, String topic, Properties properties, DecodingFormat<DeserializationSchema<RowData>> decodingFormat, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets, long startupTimestampMillis)
Creates a generic Kafka StreamTableSource.
Parameters:
outputDataType - Source produced data type
topic - Kafka topic to consume.
properties - Properties for the Kafka consumer.
decodingFormat - Decoding format for decoding records from Kafka.
startupMode - Startup mode for the contained consumer.
specificStartupOffsets - Specific startup offsets; only relevant when startup mode is StartupMode.SPECIFIC_OFFSETS.
startupTimestampMillis - Startup timestamp for offsets; only relevant when startup mode is StartupMode.TIMESTAMP.

public ChangelogMode getChangelogMode()
Description copied from interface: ScanTableSource
Returns the set of changes that the planner can expect during runtime.
Specified by:
getChangelogMode in interface ScanTableSource
See Also:
RowKind
public ScanTableSource.ScanRuntimeProvider getScanRuntimeProvider(ScanTableSource.ScanContext runtimeProviderContext)
Description copied from interface: ScanTableSource
Returns a provider of runtime implementation for reading the data.

There might exist different interfaces for runtime implementation which is why ScanTableSource.ScanRuntimeProvider serves as the base interface. Concrete ScanTableSource.ScanRuntimeProvider interfaces might be located in other Flink modules.

Independent of the provider interface, the table runtime expects that a source implementation emits internal data structures (see RowData for more information).

The given ScanTableSource.ScanContext offers utilities by the planner for creating runtime implementation with minimal dependencies to internal data structures.

See org.apache.flink.table.connector.source.SourceFunctionProvider in flink-table-api-java-bridge.
Specified by:
getScanRuntimeProvider in interface ScanTableSource
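The provider handshake described above can be modeled in a self-contained way: the planner calls getScanRuntimeProvider(...) and receives an object that can hand out the runtime reader and report boundedness. The types below are illustrative stand-ins, not the real Flink interfaces (in the real API the reader would be a SourceFunction<RowData> wrapped via SourceFunctionProvider):

```java
public class ProviderSketch {

    // Stand-in for ScanTableSource.ScanRuntimeProvider: the planner asks it
    // for the runtime implementation and whether the scan is bounded.
    interface ScanRuntimeProvider {
        Runnable createReader();
        boolean isBounded();
    }

    // Stand-in for a SourceFunctionProvider-style factory method: wraps a
    // concrete reader together with its boundedness flag.
    static ScanRuntimeProvider of(Runnable reader, boolean bounded) {
        return new ScanRuntimeProvider() {
            public Runnable createReader() { return reader; }
            public boolean isBounded() { return bounded; }
        };
    }
}
```

A Kafka scan is unbounded, so a real implementation would report `isBounded() == false`; the boundedness flag is how the planner distinguishes streaming from batch reads.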
protected abstract FlinkKafkaConsumerBase<RowData> createKafkaConsumer(String topic, Properties properties, DeserializationSchema<RowData> deserializationSchema)
Creates a version-specific Kafka consumer.
Parameters:
topic - Kafka topic to consume.
properties - Properties for the Kafka consumer.
deserializationSchema - Deserialization schema to use for Kafka records.

protected FlinkKafkaConsumerBase<RowData> getKafkaConsumer(String topic, Properties properties, DeserializationSchema<RowData> deserializationSchema)
Returns a version-specific Kafka consumer with the start position configured.
Parameters:
topic - Kafka topic to consume.
properties - Properties for the Kafka consumer.
deserializationSchema - Deserialization schema to use for Kafka records.

Copyright © 2014–2021 The Apache Software Foundation. All rights reserved.