@PublicEvolving public class FlinkKafkaProducer010<T> extends FlinkKafkaProducer09<T>
Modifier and Type | Class and Description |
---|---|
static class | FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> Deprecated. This class is deprecated since the factory methods writeToKafkaWithTimestamps for the producer are also deprecated. |

Nested classes/interfaces inherited from interface SinkFunction: SinkFunction.Context<T>

Fields inherited from class FlinkKafkaProducerBase: asyncException, callback, defaultTopicId, flinkKafkaPartitioner, flushOnCheckpoint, KEY_DISABLE_METRICS, logFailuresOnly, pendingRecords, pendingRecordsLock, producer, producerConfig, schema, topicPartitionsMap
Constructor and Description |
---|
FlinkKafkaProducer010(String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig) Creates a FlinkKafkaProducer for a given topic. |
FlinkKafkaProducer010(String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, FlinkKafkaPartitioner<T> customPartitioner) Creates a FlinkKafkaProducer for a given topic. |
FlinkKafkaProducer010(String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, KafkaPartitioner<T> customPartitioner) Deprecated. This is a deprecated constructor that does not correctly handle partitioning when producing to multiple topics. Use FlinkKafkaProducer010(String, KeyedSerializationSchema, Properties, FlinkKafkaPartitioner) instead. |
FlinkKafkaProducer010(String topicId, SerializationSchema<T> serializationSchema, Properties producerConfig) Creates a FlinkKafkaProducer for a given topic. |
FlinkKafkaProducer010(String topicId, SerializationSchema<T> serializationSchema, Properties producerConfig, FlinkKafkaPartitioner<T> customPartitioner) Creates a FlinkKafkaProducer for a given topic. |
FlinkKafkaProducer010(String topicId, SerializationSchema<T> serializationSchema, Properties producerConfig, KafkaPartitioner<T> customPartitioner) Deprecated. This constructor is deprecated since it does not correctly handle partitioning when producing to multiple topics. Use FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead. |
FlinkKafkaProducer010(String brokerList, String topicId, KeyedSerializationSchema<T> serializationSchema) Creates a FlinkKafkaProducer for a given topic. |
FlinkKafkaProducer010(String brokerList, String topicId, SerializationSchema<T> serializationSchema) Creates a FlinkKafkaProducer for a given topic. |
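As the constructor summaries above show, most overloads take a producerConfig of type java.util.Properties; per the parameter docs below, 'bootstrap.servers' is the only required setting. A minimal, self-contained sketch of building one (the broker address, topic name, and class name here are illustrative placeholders, not part of the API):

```java
import java.util.Properties;

public class ProducerConfigSketch {
    // Builds a minimal producer configuration. 'bootstrap.servers' is the
    // only required entry; any other entries are passed through to the
    // underlying KafkaProducer unchanged.
    public static Properties minimalConfig(String brokerList) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", brokerList); // required
        props.setProperty("retries", "3");                  // optional pass-through
        return props;
    }

    public static void main(String[] args) {
        Properties props = minimalConfig("localhost:9092");
        // This would then be passed as the producerConfig argument, e.g.
        // new FlinkKafkaProducer010<>("my-topic", schema, props);
        System.out.println(props.getProperty("bootstrap.servers"));
    }
}
```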
Modifier and Type | Method and Description |
---|---|
void | invoke(T value, SinkFunction.Context context) Called when new data arrives to the sink, and forwards it to Kafka. |
void | setWriteTimestampToKafka(boolean writeTimestampToKafka) If set to true, Flink will write the (event time) timestamp attached to each record into Kafka. |
static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> | writeToKafkaWithTimestamps(DataStream<T> inStream, String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig) Deprecated. |
static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> | writeToKafkaWithTimestamps(DataStream<T> inStream, String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, FlinkKafkaPartitioner<T> customPartitioner) Deprecated. |
static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> | writeToKafkaWithTimestamps(DataStream<T> inStream, String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, KafkaPartitioner<T> customPartitioner) Deprecated. This method is deprecated since it does not correctly handle partitioning when producing to multiple topics. Use FlinkKafkaProducer010(String, KeyedSerializationSchema, Properties, FlinkKafkaPartitioner) instead. |
static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> | writeToKafkaWithTimestamps(DataStream<T> inStream, String topicId, SerializationSchema<T> serializationSchema, Properties producerConfig) Deprecated. |
Methods inherited from class FlinkKafkaProducer09: flush
Methods inherited from class FlinkKafkaProducerBase: checkErroneous, close, getKafkaProducer, getPartitionsByTopic, getPropertiesFromBrokerList, initializeState, numPendingRecords, open, setFlushOnCheckpoint, setLogFailuresOnly, snapshotState
Methods inherited from class AbstractRichFunction: getIterationRuntimeContext, getRuntimeContext, setRuntimeContext
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface SinkFunction: invoke
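Several of the constructors detailed below accept a FlinkKafkaPartitioner<T>, whose central method in Flink 1.x has roughly the shape int partition(T record, byte[] key, byte[] value, String targetTopic, int[] partitions). The following is a hedged, self-contained sketch of a key-hash partitioner written as plain Java; it deliberately does not extend the real Flink class, and the class name, fallback behavior, and hashing choice are illustrative assumptions:

```java
import java.util.Arrays;

public class KeyHashPartitionerSketch {
    // Mirrors the shape of FlinkKafkaPartitioner.partition(record, key, value,
    // targetTopic, partitions): return one entry of the partitions array.
    public static int partition(byte[] key, int[] partitions) {
        if (key == null) {
            // The docs say keyless records are spread round-robin; this sketch
            // simply falls back to the first partition for determinism.
            return partitions[0];
        }
        // Stable hash of the serialized key, mapped into the partition array.
        return partitions[Math.floorMod(Arrays.hashCode(key), partitions.length)];
    }

    public static void main(String[] args) {
        int[] partitions = {0, 1, 2, 3};
        byte[] key = "user-42".getBytes();
        // The same key always maps to the same partition.
        System.out.println(partition(key, partitions) == partition(key, partitions));
    }
}
```

A real implementation would extend FlinkKafkaPartitioner<T> and be passed as the customPartitioner argument; keeping it a pure function of the key guarantees repartition-stable routing.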
public FlinkKafkaProducer010(String brokerList, String topicId, SerializationSchema<T> serializationSchema)
Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).
To use a custom partitioner, please use FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead.
Parameters:
brokerList - Comma separated addresses of the brokers
topicId - ID of the Kafka topic.
serializationSchema - User defined key-less serialization schema.
public FlinkKafkaProducer010(String topicId, SerializationSchema<T> serializationSchema, Properties producerConfig)
Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).
To use a custom partitioner, please use FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead.
Parameters:
topicId - ID of the Kafka topic.
serializationSchema - User defined key-less serialization schema.
producerConfig - Properties with the producer configuration.
public FlinkKafkaProducer010(String topicId, SerializationSchema<T> serializationSchema, Properties producerConfig, @Nullable FlinkKafkaPartitioner<T> customPartitioner)
Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a key-less SerializationSchema and possibly a custom FlinkKafkaPartitioner.
Since a key-less SerializationSchema is used, all records sent to Kafka will not have an attached key. Therefore, if a partitioner is also not provided, records will be distributed to Kafka partitions in a round-robin fashion.
Parameters:
topicId - The topic to write data to
serializationSchema - A key-less serializable serialization schema for turning user objects into a kafka-consumable byte[]
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If set to null, records will be distributed to Kafka partitions in a round-robin fashion.
public FlinkKafkaProducer010(String brokerList, String topicId, KeyedSerializationSchema<T> serializationSchema)
Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).
To use a custom partitioner, please use FlinkKafkaProducer010(String, KeyedSerializationSchema, Properties, FlinkKafkaPartitioner) instead.
Parameters:
brokerList - Comma separated addresses of the brokers
topicId - ID of the Kafka topic.
serializationSchema - User defined serialization schema supporting key/value messages
public FlinkKafkaProducer010(String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig)
Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).
To use a custom partitioner, please use FlinkKafkaProducer010(String, KeyedSerializationSchema, Properties, FlinkKafkaPartitioner) instead.
Parameters:
topicId - ID of the Kafka topic.
serializationSchema - User defined serialization schema supporting key/value messages
producerConfig - Properties with the producer configuration.
public FlinkKafkaProducer010(String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, @Nullable FlinkKafkaPartitioner<T> customPartitioner)
Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a KeyedSerializationSchema and possibly a custom FlinkKafkaPartitioner.
If a partitioner is not provided, written records will be partitioned by the attached key of each record (as determined by KeyedSerializationSchema.serializeKey(Object)). If written records do not have a key (i.e., KeyedSerializationSchema.serializeKey(Object) returns null), they will be distributed to Kafka partitions in a round-robin fashion.
Parameters:
topicId - The topic to write data to
serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If set to null, records will be partitioned by the key of each record (determined by KeyedSerializationSchema.serializeKey(Object)). If the keys are null, then records will be distributed to Kafka partitions in a round-robin fashion.
@Deprecated public FlinkKafkaProducer010(String topicId, SerializationSchema<T> serializationSchema, Properties producerConfig, KafkaPartitioner<T> customPartitioner)
Deprecated. This constructor does not correctly handle partitioning when producing to multiple topics. Use FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead.
Parameters:
topicId - The topic to write data to
serializationSchema - A (keyless) serializable serialization schema for turning user objects into a kafka-consumable byte[]
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions (when passing null, we'll use Kafka's partitioner)
@Deprecated public FlinkKafkaProducer010(String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, KafkaPartitioner<T> customPartitioner)
Deprecated. This constructor does not correctly handle partitioning when producing to multiple topics. Use FlinkKafkaProducer010(String, KeyedSerializationSchema, Properties, FlinkKafkaPartitioner) instead.
This constructor does not allow writing timestamps to Kafka; it follows approach (a) (see above).
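The constructor details above repeatedly note that the default FlinkFixedPartitioner maps each sink subtask to a single Kafka partition. A minimal self-contained sketch of that mapping in plain Java (not the actual Flink class; the method name and the modulo scheme are assumptions inferred from the description above):

```java
public class FixedPartitioningDemo {
    // Sketch of the default partitioning described above: every record from
    // sink subtask i goes to the same partition, chosen by the subtask index
    // modulo the number of available partitions.
    public static int targetPartition(int subtaskIndex, int[] partitions) {
        return partitions[subtaskIndex % partitions.length];
    }

    public static void main(String[] args) {
        int[] partitions = {0, 1, 2};
        // Subtask 0 always writes to partition 0, subtask 4 to partition 1, etc.
        System.out.println(targetPartition(0, partitions));
        System.out.println(targetPartition(4, partitions));
    }
}
```

This is why, with the default partitioner, a sink with fewer subtasks than topic partitions leaves some partitions unused; pass a custom FlinkKafkaPartitioner (or null, for key/round-robin behavior) to change this.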
public void setWriteTimestampToKafka(boolean writeTimestampToKafka)
Parameters:
writeTimestampToKafka - Flag indicating if Flink's internal timestamps are written to Kafka.
@Deprecated public static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> writeToKafkaWithTimestamps(DataStream<T> inStream, String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig)
Deprecated. Use FlinkKafkaProducer010(String, KeyedSerializationSchema, Properties) and call setWriteTimestampToKafka(boolean) instead.
This method allows writing timestamps to Kafka; it follows approach (b) (see above).
Parameters:
inStream - The stream to write to Kafka
topicId - ID of the Kafka topic.
serializationSchema - User defined serialization schema supporting key/value messages
producerConfig - Properties with the producer configuration.
@Deprecated public static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> writeToKafkaWithTimestamps(DataStream<T> inStream, String topicId, SerializationSchema<T> serializationSchema, Properties producerConfig)
Deprecated. Use FlinkKafkaProducer010(String, SerializationSchema, Properties) and call setWriteTimestampToKafka(boolean) instead.
This method allows writing timestamps to Kafka; it follows approach (b) (see above).
Parameters:
inStream - The stream to write to Kafka
topicId - ID of the Kafka topic.
serializationSchema - User defined (keyless) serialization schema.
producerConfig - Properties with the producer configuration.
@Deprecated public static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> writeToKafkaWithTimestamps(DataStream<T> inStream, String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, FlinkKafkaPartitioner<T> customPartitioner)
Deprecated. Use FlinkKafkaProducer010(String, KeyedSerializationSchema, Properties, FlinkKafkaPartitioner) and call setWriteTimestampToKafka(boolean) instead.
This method allows writing timestamps to Kafka; it follows approach (b) (see above).
Parameters:
inStream - The stream to write to Kafka
topicId - The name of the target topic
serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions.
@Deprecated public static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> writeToKafkaWithTimestamps(DataStream<T> inStream, String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, KafkaPartitioner<T> customPartitioner)
Deprecated. This method does not correctly handle partitioning when producing to multiple topics. Use FlinkKafkaProducer010(String, KeyedSerializationSchema, Properties, FlinkKafkaPartitioner) instead.
This method allows writing timestamps to Kafka; it follows approach (b) (see above).
Parameters:
inStream - The stream to write to Kafka
topicId - The name of the target topic
serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions.
public void invoke(T value, SinkFunction.Context context) throws Exception
Description copied from class: FlinkKafkaProducerBase
Called when new data arrives to the sink, and forwards it to Kafka.
Specified by: invoke in interface SinkFunction<T>
Overrides: invoke in class FlinkKafkaProducerBase<T>
Parameters:
value - The incoming data
context - Additional context about the input record.
Throws:
Exception - This method may throw exceptions. Throwing an exception will cause the operation to fail and may trigger recovery.

Copyright © 2014–2019 The Apache Software Foundation. All rights reserved.