
org.apache.flink.streaming.connectors.kafka

Class FlinkKafkaConsumer08<T>

Offset handling

Offsets whose records have been read and checkpointed are committed back to ZooKeeper by the offset handler. When the streaming job starts, the offset handler also determines the point in the stream from which the source initially begins reading.

Please note that Flink snapshots the offsets internally as part of its distributed checkpoints. The offsets committed to Kafka / ZooKeeper only bring the outside view of progress in sync with Flink's view of progress. That way, monitoring and other jobs can see how far the Flink Kafka consumer has consumed a topic.

If checkpointing is disabled, the consumer will periodically commit the current offset to ZooKeeper.
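As a sketch of the setup described above (the topic name, group id, and connection addresses below are illustrative, not part of this documentation), a consumer might be wired up with checkpointing enabled so that offsets are committed on completed checkpoints:

```java
import java.util.Properties;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class KafkaConsumerSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Enable checkpointing: offsets are snapshotted with each checkpoint and
        // committed back to ZooKeeper when the checkpoint completes.
        env.enableCheckpointing(5000); // checkpoint every 5 seconds

        Properties props = new Properties();
        // Kafka 0.8 reads partition metadata and stores offsets via ZooKeeper.
        props.setProperty("zookeeper.connect", "localhost:2181"); // illustrative address
        props.setProperty("bootstrap.servers", "localhost:9092"); // illustrative address
        props.setProperty("group.id", "my-flink-group");          // illustrative group id

        DataStream<String> stream = env.addSource(
                new FlinkKafkaConsumer08<>("my-topic", new SimpleStringSchema(), props));

        stream.print();
        env.execute("Kafka 0.8 consumer sketch");
    }
}
```

With checkpointing off, the same consumer would instead commit offsets to ZooKeeper periodically, on the auto-commit interval configured in the properties.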

When using a Kafka topic to send data between Flink jobs, we recommend using the TypeInformationSerializationSchema (or the TypeInformationKeyValueSerializationSchema for keyed records).
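A minimal sketch of the consuming side of such a job-to-job handover (the topic name, group id, and ZooKeeper address are illustrative assumptions); the producing job would construct the same schema so both sides agree on the wire format:

```java
import java.util.Properties;

import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08;
import org.apache.flink.streaming.util.serialization.TypeInformationSerializationSchema;

public class TypeInfoSchemaSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Schema derived from Flink's own type information: records are
        // (de)serialized with Flink's internal serializers, so no external
        // schema registry or hand-written format is needed between Flink jobs.
        TypeInformationSerializationSchema<Tuple2<String, Long>> schema =
                new TypeInformationSerializationSchema<>(
                        TypeInformation.of(new TypeHint<Tuple2<String, Long>>() {}),
                        env.getConfig());

        Properties props = new Properties();
        props.setProperty("zookeeper.connect", "localhost:2181"); // illustrative address
        props.setProperty("group.id", "downstream-job");          // illustrative group id

        DataStream<Tuple2<String, Long>> stream = env.addSource(
                new FlinkKafkaConsumer08<>("handover-topic", schema, props));

        stream.print();
        env.execute("TypeInformation schema sketch");
    }
}
```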

NOTE: The implementation currently accesses partition metadata when the consumer is constructed. That means that the client that submits the program needs to be able to reach the Kafka brokers or ZooKeeper.

See Also:
Serialized Form

Copyright © 2014–2017 The Apache Software Foundation. All rights reserved.