@Internal
public abstract class KafkaTableSink
extends Object
implements org.apache.flink.table.sinks.AppendStreamTableSink<Row>

Version-specific Kafka producers need to extend this class and override createKafkaProducer(String, Properties, SerializationSchema, FlinkKafkaPartitioner).
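The class follows a template-method design: the base class holds the common state (topic, properties, partitioner) and declares factory methods that each version-specific subclass fills in. A minimal sketch of that pattern, using hypothetical stand-in types (SinkSketch, RowSerializer, AbstractSink, and Kafka09Sink are simplified illustrations, not the real Flink API):

```java
import java.util.Properties;

// Sketch of the extension pattern. SinkSketch, RowSerializer, and
// Kafka09Sink are hypothetical stand-in types, not the real Flink API.
public class SinkSketch {

    // Stand-in for SerializationSchema<Row>.
    interface RowSerializer {
        byte[] serialize(String[] row);
    }

    // Stand-in for the abstract KafkaTableSink: holds common state and
    // declares the factory methods that version-specific subclasses supply.
    abstract static class AbstractSink {
        protected final String topic;
        protected final Properties properties;

        AbstractSink(String topic, Properties properties) {
            this.topic = topic;
            this.properties = properties;
        }

        protected abstract String createKafkaProducer(
                String topic, Properties props, RowSerializer schema);

        protected abstract RowSerializer createSerializationSchema();
    }

    // A hypothetical version-specific sink overriding the factory methods.
    static class Kafka09Sink extends AbstractSink {
        Kafka09Sink(String topic, Properties properties) {
            super(topic, properties);
        }

        @Override
        protected String createKafkaProducer(
                String topic, Properties props, RowSerializer schema) {
            // The real method would return a FlinkKafkaProducerBase<Row>.
            return "FlinkKafkaProducer09(" + topic + ")";
        }

        @Override
        protected RowSerializer createSerializationSchema() {
            return row -> String.join(",", row).getBytes();
        }
    }

    public static void main(String[] args) {
        AbstractSink sink = new Kafka09Sink("clicks", new Properties());
        System.out.println(sink.createKafkaProducer(
                sink.topic, sink.properties, sink.createSerializationSchema()));
        // prints "FlinkKafkaProducer09(clicks)"
    }
}
```

The base class never names a concrete producer version; it only calls the abstract factory methods, so adding support for a new Kafka version means adding one subclass.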
Modifier and Type | Field and Description
---|---
protected String[] | fieldNames
protected TypeInformation[] | fieldTypes
protected FlinkKafkaPartitioner<Row> | partitioner
protected Properties | properties
protected SerializationSchema<Row> | serializationSchema
protected String | topic
Constructor and Description
---
KafkaTableSink(String topic, Properties properties, FlinkKafkaPartitioner<Row> partitioner)
Creates KafkaTableSink.
Modifier and Type | Method and Description
---|---
KafkaTableSink | configure(String[] fieldNames, TypeInformation<?>[] fieldTypes)
protected abstract KafkaTableSink | createCopy() Create a deep copy of this sink.
protected abstract FlinkKafkaProducerBase<Row> | createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, FlinkKafkaPartitioner<Row> partitioner) Returns the version-specific Kafka producer.
protected abstract SerializationSchema<Row> | createSerializationSchema(RowTypeInfo rowSchema) Create serialization schema for converting table rows into bytes.
void | emitDataStream(DataStream<Row> dataStream)
String[] | getFieldNames()
TypeInformation<?>[] | getFieldTypes()
TypeInformation<Row> | getOutputType()
protected final String topic
protected final Properties properties
protected SerializationSchema<Row> serializationSchema
protected final FlinkKafkaPartitioner<Row> partitioner
protected String[] fieldNames
protected TypeInformation[] fieldTypes
public KafkaTableSink(String topic, Properties properties, FlinkKafkaPartitioner<Row> partitioner)

Creates KafkaTableSink.

Parameters:
topic - Kafka topic to write to.
properties - Properties for the Kafka producer.
partitioner - Partitioner to select the Kafka partition for each item.

protected abstract FlinkKafkaProducerBase<Row> createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, FlinkKafkaPartitioner<Row> partitioner)

Returns the version-specific Kafka producer.

Parameters:
topic - Kafka topic to produce to.
properties - Properties for the Kafka producer.
serializationSchema - Serialization schema to use to create Kafka records.
partitioner - Partitioner to select the Kafka partition.

protected abstract SerializationSchema<Row> createSerializationSchema(RowTypeInfo rowSchema)

Create serialization schema for converting table rows into bytes.

Parameters:
rowSchema - the schema of the row to serialize.

protected abstract KafkaTableSink createCopy()

Create a deep copy of this sink.
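createSerializationSchema(RowTypeInfo) bridges the table schema and the byte-oriented Kafka record format: each row must become the byte array that the producer writes as the record value. A minimal sketch of such a row-to-bytes schema, using String[] as a stand-in for Flink's Row type (CsvRowSerializer is hypothetical, not part of the Flink API):

```java
import java.nio.charset.StandardCharsets;

// Hypothetical stand-in for SerializationSchema<Row>: joins row fields
// into a CSV line and encodes it as UTF-8 bytes for the Kafka record value.
public class CsvRowSerializer {
    public byte[] serialize(String[] row) {
        return String.join(",", row).getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] bytes = new CsvRowSerializer().serialize(new String[]{"42", "alice"});
        System.out.println(new String(bytes, StandardCharsets.UTF_8)); // prints "42,alice"
    }
}
```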
public void emitDataStream(DataStream<Row> dataStream)

Specified by:
emitDataStream in interface org.apache.flink.table.sinks.AppendStreamTableSink<Row>

public TypeInformation<Row> getOutputType()

Specified by:
getOutputType in interface org.apache.flink.table.sinks.TableSink<Row>

public String[] getFieldNames()

Specified by:
getFieldNames in interface org.apache.flink.table.sinks.TableSink<Row>

public TypeInformation<?>[] getFieldTypes()

Specified by:
getFieldTypes in interface org.apache.flink.table.sinks.TableSink<Row>

public KafkaTableSink configure(String[] fieldNames, TypeInformation<?>[] fieldTypes)

Specified by:
configure in interface org.apache.flink.table.sinks.TableSink<Row>
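configure(...) works together with the abstract createCopy(): rather than mutating the sink in place, it returns a configured copy, which is why subclasses must supply a deep-copy method. A minimal sketch of that contract with a simplified stand-in class (SketchKafkaSink is hypothetical and models field names only, not types or the serialization schema):

```java
import java.util.Arrays;

// Hypothetical simplified model of the configure()/createCopy() contract:
// configure returns a configured deep copy and leaves the original unchanged.
public class SketchKafkaSink {
    final String topic;
    String[] fieldNames;

    SketchKafkaSink(String topic) {
        this.topic = topic;
    }

    // Corresponds to the abstract createCopy(): a fresh, unconfigured copy.
    SketchKafkaSink createCopy() {
        return new SketchKafkaSink(topic);
    }

    // Corresponds to configure(...): configures the copy, not this instance.
    SketchKafkaSink configure(String[] fieldNames) {
        SketchKafkaSink copy = createCopy();
        copy.fieldNames = fieldNames.clone();
        return copy;
    }

    public static void main(String[] args) {
        SketchKafkaSink sink = new SketchKafkaSink("events");
        SketchKafkaSink configured = sink.configure(new String[]{"id", "name"});
        System.out.println(configured != sink);                      // prints "true"
        System.out.println(Arrays.toString(configured.fieldNames));  // prints "[id, name]"
        System.out.println(sink.fieldNames == null);                 // prints "true"
    }
}
```

Returning a copy keeps the unconfigured sink reusable: the same instance can be registered against several table schemas without them interfering with each other.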
Copyright © 2014–2019 The Apache Software Foundation. All rights reserved.