@Internal public abstract class KafkaDynamicSinkBase extends Object implements DynamicTableSink
A version-agnostic Kafka DynamicTableSink.

The version-specific Kafka producers need to extend this class and override createKafkaProducer(String, Properties, SerializationSchema, Optional).
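For orientation, here is a minimal sketch of such a subclass. The class name KafkaDynamicSink, the delegation to FlinkKafkaProducer, and the summary string are illustrative assumptions modeled on Flink's universal Kafka connector, not guarantees of this API:

```java
import java.util.Optional;
import java.util.Properties;

import org.apache.flink.api.common.serialization.SerializationSchema;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;
import org.apache.flink.table.connector.format.EncodingFormat;
import org.apache.flink.table.connector.sink.DynamicTableSink;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.types.DataType;

/** Hypothetical version-specific sink built on KafkaDynamicSinkBase. */
public class KafkaDynamicSink extends KafkaDynamicSinkBase {

    public KafkaDynamicSink(
            DataType consumedDataType,
            String topic,
            Properties properties,
            Optional<FlinkKafkaPartitioner<RowData>> partitioner,
            EncodingFormat<SerializationSchema<RowData>> encodingFormat) {
        super(consumedDataType, topic, properties, partitioner, encodingFormat);
    }

    @Override
    protected SinkFunction<RowData> createKafkaProducer(
            String topic,
            Properties properties,
            SerializationSchema<RowData> serializationSchema,
            Optional<FlinkKafkaPartitioner<RowData>> partitioner) {
        // Delegate to the version-specific producer of the universal connector.
        return new FlinkKafkaProducer<>(topic, serializationSchema, properties, partitioner);
    }

    @Override
    public DynamicTableSink copy() {
        // copy() and asSummaryString() come from DynamicTableSink and are not
        // implemented by the base class (see the inherited-methods list below).
        return new KafkaDynamicSink(
                consumedDataType, topic, properties, partitioner, encodingFormat);
    }

    @Override
    public String asSummaryString() {
        return "Kafka table sink (sketch)";
    }
}
```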
Nested classes/interfaces inherited from interface org.apache.flink.table.connector.sink.DynamicTableSink: DynamicTableSink.Context, DynamicTableSink.DataStructureConverter, DynamicTableSink.SinkRuntimeProvider
Modifier and Type | Field and Description |
---|---|
protected DataType | consumedDataType - Consumed data type of the table. |
protected EncodingFormat<SerializationSchema<RowData>> | encodingFormat - Sink format for encoding records to Kafka. |
protected Optional<FlinkKafkaPartitioner<RowData>> | partitioner - Partitioner to select the Kafka partition for each item. |
protected Properties | properties - Properties for the Kafka producer. |
protected String | topic - The Kafka topic to write to. |
Modifier | Constructor and Description |
---|---|
protected | KafkaDynamicSinkBase(DataType consumedDataType, String topic, Properties properties, Optional<FlinkKafkaPartitioner<RowData>> partitioner, EncodingFormat<SerializationSchema<RowData>> encodingFormat) |
Modifier and Type | Method and Description |
---|---|
protected abstract SinkFunction<RowData> | createKafkaProducer(String topic, Properties properties, SerializationSchema<RowData> serializationSchema, Optional<FlinkKafkaPartitioner<RowData>> partitioner) - Returns the version-specific Kafka producer. |
boolean | equals(Object o) |
ChangelogMode | getChangelogMode(ChangelogMode requestedMode) - Returns the set of changes that the sink accepts during runtime. |
DynamicTableSink.SinkRuntimeProvider | getSinkRuntimeProvider(DynamicTableSink.Context context) - Returns a provider of runtime implementation for writing the data. |
int | hashCode() |
Methods inherited from class java.lang.Object: clone, finalize, getClass, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.flink.table.connector.sink.DynamicTableSink: asSummaryString, copy
protected final DataType consumedDataType
protected final String topic
protected final Properties properties
protected final EncodingFormat<SerializationSchema<RowData>> encodingFormat
protected final Optional<FlinkKafkaPartitioner<RowData>> partitioner
protected KafkaDynamicSinkBase(DataType consumedDataType, String topic, Properties properties, Optional<FlinkKafkaPartitioner<RowData>> partitioner, EncodingFormat<SerializationSchema<RowData>> encodingFormat)
public ChangelogMode getChangelogMode(ChangelogMode requestedMode)
Description copied from interface: DynamicTableSink
Returns the set of changes that the sink accepts during runtime. The planner can make suggestions, but the sink has the final decision on what it requires. If the planner does not support this mode, it will throw an error. For example, the sink can return that it only supports ChangelogMode.insertOnly().
Specified by:
getChangelogMode in interface DynamicTableSink
Parameters:
requestedMode - expected set of changes by the current plan
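As an illustration, one plausible implementation simply defers to the configured encoding format; this wiring is an assumption consistent with the encodingFormat field above, not a documented contract:

```java
// Sketch, inside KafkaDynamicSinkBase or a subclass:
@Override
public ChangelogMode getChangelogMode(ChangelogMode requestedMode) {
    // Ignore the planner's suggestion and report whatever change set the
    // sink format can actually serialize, e.g. ChangelogMode.insertOnly()
    // for append-only formats.
    return this.encodingFormat.getChangelogMode();
}
```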
public DynamicTableSink.SinkRuntimeProvider getSinkRuntimeProvider(DynamicTableSink.Context context)
Description copied from interface: DynamicTableSink
Returns a provider of runtime implementation for writing the data.

There might exist different interfaces for runtime implementation, which is why DynamicTableSink.SinkRuntimeProvider serves as the base interface. Concrete DynamicTableSink.SinkRuntimeProvider interfaces might be located in other Flink modules.

Independent of the provider interface, the table runtime expects that a sink implementation accepts internal data structures (see RowData for more information).

The given DynamicTableSink.Context offers utilities provided by the planner for creating the runtime implementation with minimal dependencies on internal data structures.

See org.apache.flink.table.connector.sink.SinkFunctionProvider in flink-table-api-java-bridge.
Specified by:
getSinkRuntimeProvider in interface DynamicTableSink
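To make the flow concrete, here is a sketch of how this method could assemble the runtime implementation with the SinkFunctionProvider mentioned above; the exact wiring is an assumption based on the fields and createKafkaProducer, not a statement of the actual implementation:

```java
// Sketch, inside KafkaDynamicSinkBase:
@Override
public DynamicTableSink.SinkRuntimeProvider getSinkRuntimeProvider(
        DynamicTableSink.Context context) {
    // Build the runtime encoder; the context bridges internal RowData
    // structures to the format with minimal dependencies.
    final SerializationSchema<RowData> serializationSchema =
            this.encodingFormat.createRuntimeEncoder(context, this.consumedDataType);

    // Ask the version-specific subclass for the actual producer and wrap it
    // in the SinkFunction-based provider from flink-table-api-java-bridge.
    final SinkFunction<RowData> kafkaProducer =
            createKafkaProducer(this.topic, this.properties, serializationSchema, this.partitioner);
    return SinkFunctionProvider.of(kafkaProducer);
}
```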
protected abstract SinkFunction<RowData> createKafkaProducer(String topic, Properties properties, SerializationSchema<RowData> serializationSchema, Optional<FlinkKafkaPartitioner<RowData>> partitioner)
Returns the version-specific Kafka producer.
Parameters:
topic - Kafka topic to produce to.
properties - Properties for the Kafka producer.
serializationSchema - Serialization schema to use to create Kafka records.
partitioner - Partitioner to select the Kafka partition.
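Finally, a hedged usage sketch showing how the hypothetical KafkaDynamicSink from the first example could be constructed; physicalDataType and jsonEncodingFormat are placeholders that a DynamicTableSinkFactory would normally supply:

```java
Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker address

DynamicTableSink sink = new KafkaDynamicSink(
        physicalDataType,    // DataType of the table's physical schema (placeholder)
        "orders",            // target topic (placeholder)
        props,
        Optional.empty(),    // no custom partitioner: use the default partitioning
        jsonEncodingFormat); // EncodingFormat<SerializationSchema<RowData>> (placeholder)
```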