public class FlinkKafkaConsumer08<T> extends FlinkKafkaConsumerBase<T>
The Flink Kafka Consumer participates in checkpointing and guarantees that no data is lost during a failure, and that the computation processes elements "exactly once". (Note: These guarantees naturally assume that Kafka itself does not lose any data.)
Flink's Kafka Consumer is designed to be compatible with Kafka's High-Level Consumer API (0.8.x). Most of Kafka's configuration variables can be used with this consumer as well.
Offsets whose records have been read and are checkpointed will be committed back to ZooKeeper by the offset handler. In addition, the offset handler finds the point where the source initially starts reading from the stream, when the streaming job is started.
Please note that Flink snapshots the offsets internally as part of its distributed checkpoints. The offsets committed to Kafka / ZooKeeper are only to bring the outside view of progress in sync with Flink's view of the progress. That way, monitoring and other jobs can get a view of how far the Flink Kafka consumer has consumed a topic.
If checkpointing is disabled, the consumer will periodically commit the current offset to ZooKeeper.
When using a Kafka topic to send data between Flink jobs, we recommend using the TypeInformationSerializationSchema and the TypeInformationKeyValueSerializationSchema.
NOTE: The implementation currently accesses partition metadata when the consumer is constructed. That means that the client that submits the program needs to be able to reach the Kafka brokers or ZooKeeper.
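Since the consumer is configured through standard Kafka 0.8 high-level consumer properties, a minimal configuration might be sketched as follows. The host addresses and the group id are placeholders for illustration, not values from this page:

```java
import java.util.Properties;

public class KafkaConsumerProps {
    // Builds the minimal set of properties the 0.8 consumer needs:
    // the ZooKeeper quorum, the broker list, and a consumer group id.
    static Properties buildProps() {
        Properties props = new Properties();
        props.setProperty("zookeeper.connect", "localhost:2181"); // placeholder address
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder address
        props.setProperty("group.id", "flink-example-group");     // placeholder group id
        return props;
    }

    public static void main(String[] args) {
        System.out.println(buildProps().getProperty("group.id"));
    }
}
```

These `Properties` would then be passed to one of the `FlinkKafkaConsumer08` constructors listed below.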
Modifier and Type | Field and Description
---|---
static int | DEFAULT_GET_PARTITIONS_RETRIES: Default number of retries for getting the partition info.
static String | GET_PARTITIONS_RETRIES_KEY: Configuration key for the number of retries for getting the partition info.
static long | OFFSET_NOT_SET: Magic number to define an unset offset.
Fields inherited from class FlinkKafkaConsumerBase: deserializer, MAX_NUM_PENDING_CHECKPOINTS, offsetsState, pendingCheckpoints, restoreToOffset, running
Constructor and Description
---
FlinkKafkaConsumer08(List<String> topics, DeserializationSchema<T> deserializer, Properties props): Creates a new Kafka streaming source consumer for Kafka 0.8.x. This constructor allows passing multiple topics to the consumer.
FlinkKafkaConsumer08(List<String> topics, KeyedDeserializationSchema<T> deserializer, Properties props): Creates a new Kafka streaming source consumer for Kafka 0.8.x. This constructor allows passing multiple topics and a key/value deserialization schema.
FlinkKafkaConsumer08(String topic, DeserializationSchema<T> valueDeserializer, Properties props): Creates a new Kafka streaming source consumer for Kafka 0.8.x.
FlinkKafkaConsumer08(String topic, KeyedDeserializationSchema<T> deserializer, Properties props): Creates a new Kafka streaming source consumer for Kafka 0.8.x. This constructor allows passing a KeyedDeserializationSchema for reading key/value pairs, offsets, and topic names from Kafka.
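As a sketch of how these constructors are typically used, the fragment below wires a consumer into a Flink streaming program. It assumes a program with the Kafka 0.8 connector on the classpath; the topic name, addresses, and group id are placeholders, and SimpleStringSchema is one common choice of DeserializationSchema:

```java
// Sketch only: requires flink-streaming-java and flink-connector-kafka-0.8
// on the classpath; this fragment belongs inside a streaming program's main().
Properties props = new Properties();
props.setProperty("zookeeper.connect", "localhost:2181"); // placeholder address
props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder address
props.setProperty("group.id", "example-group");           // placeholder group id

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// With checkpointing enabled, Flink snapshots the offsets internally and the
// offset handler commits them back to ZooKeeper on checkpoint completion.
env.enableCheckpointing(5000);

DataStream<String> stream = env.addSource(
        new FlinkKafkaConsumer08<>("example-topic", new SimpleStringSchema(), props));
```

Without the `enableCheckpointing` call, the consumer instead falls back to periodically committing the current offset to ZooKeeper, as described above.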
Modifier and Type | Method and Description
---|---
void | cancel(): Cancels the source.
void | close(): Tear-down method for the user code.
protected void | commitOffsets(HashMap<KafkaTopicPartition,Long> toCommit): Utility method to commit offsets.
static List<KafkaTopicPartitionLeader> | getPartitionsForTopic(List<String> topics, Properties properties): Send request to Kafka to get partitions for topic.
void | open(Configuration parameters): Initialization method for the function.
void | run(SourceFunction.SourceContext<T> sourceContext): Starts the source.
protected static void | validateZooKeeperConfig(Properties props): Validate the ZK configuration, checking for required parameters.
Methods inherited from class FlinkKafkaConsumerBase: assignPartitions, getProducedType, logPartitionInfo, notifyCheckpointComplete, restoreState, snapshotState
Methods inherited from class AbstractRichFunction: getIterationRuntimeContext, getRuntimeContext, setRuntimeContext
public static final long OFFSET_NOT_SET
public static final String GET_PARTITIONS_RETRIES_KEY
public static final int DEFAULT_GET_PARTITIONS_RETRIES
public FlinkKafkaConsumer08(String topic, DeserializationSchema<T> valueDeserializer, Properties props)
Parameters:
topic - The name of the topic that should be consumed.
valueDeserializer - The de-/serializer used to convert between Kafka's byte messages and Flink's objects.
props - The properties used to configure the Kafka consumer client, and the ZooKeeper client.

public FlinkKafkaConsumer08(String topic, KeyedDeserializationSchema<T> deserializer, Properties props)
This constructor allows passing a KeyedDeserializationSchema for reading key/value pairs, offsets, and topic names from Kafka.
Parameters:
topic - The name of the topic that should be consumed.
deserializer - The keyed de-/serializer used to convert between Kafka's byte messages and Flink's objects.
props - The properties used to configure the Kafka consumer client, and the ZooKeeper client.

public FlinkKafkaConsumer08(List<String> topics, DeserializationSchema<T> deserializer, Properties props)
Parameters:
topics - The Kafka topics to read from.
deserializer - The de-/serializer used to convert between Kafka's byte messages and Flink's objects.
props - The properties that are used to configure both the fetcher and the offset handler.

public FlinkKafkaConsumer08(List<String> topics, KeyedDeserializationSchema<T> deserializer, Properties props)
Parameters:
topics - The Kafka topics to read from.
deserializer - The keyed de-/serializer used to convert between Kafka's byte messages and Flink's objects.
props - The properties that are used to configure both the fetcher and the offset handler.

public void open(Configuration parameters) throws Exception
Description copied from interface: RichFunction
Initialization method for the function. The configuration object passed to the function can be used for configuration and initialization. The configuration contains all parameters that were configured on the function in the program composition.
public class MyFilter extends RichFilterFunction<String> {
    private String searchString;

    public void open(Configuration parameters) {
        this.searchString = parameters.getString("foo", null);
    }

    public boolean filter(String value) {
        return value.equals(searchString);
    }
}
By default, this method does nothing.

Specified by:
open in interface RichFunction
Overrides:
open in class AbstractRichFunction
Parameters:
parameters - The configuration containing the parameters attached to the contract.
Throws:
Exception - Implementations may forward exceptions, which are caught by the runtime. When the runtime catches an exception, it aborts the task and lets the fail-over logic decide whether to retry the task execution.
See Also:
Configuration
public void run(SourceFunction.SourceContext<T> sourceContext) throws Exception

Description copied from interface: SourceFunction
Starts the source. Implementations use the SourceFunction.SourceContext to emit elements.

Sources that implement Checkpointed must lock on the checkpoint lock (using a synchronized block) before updating internal state and emitting elements, to make both an atomic operation:
public class ExampleSource implements SourceFunction<Long>, Checkpointed<Long> {
    private long count = 0L;
    private volatile boolean isRunning = true;

    public void run(SourceContext<Long> ctx) throws Exception {
        while (isRunning && count < 1000) {
            // hold the checkpoint lock so that updating the state and
            // emitting the element happen atomically with respect to checkpoints
            synchronized (ctx.getCheckpointLock()) {
                ctx.collect(count);
                count++;
            }
        }
    }

    public void cancel() {
        isRunning = false;
    }

    public Long snapshotState(long checkpointId, long checkpointTimestamp) { return count; }

    public void restoreState(Long state) { this.count = state; }
}
Parameters:
sourceContext - The context to emit elements to and for accessing locks.
Throws:
Exception
public void cancel()

Description copied from interface: SourceFunction
Cancels the source. Most sources will have a loop inside the SourceFunction.run(SourceContext) method. The implementation needs to ensure that the source will break out of that loop after this method is called. A typical pattern is to have a "volatile boolean isRunning" flag that is set to false in this method. That flag is checked in the loop condition.
When a source is canceled, the executing thread will also be interrupted
(via Thread.interrupt()
). The interruption happens strictly after this
method has been called, so any interruption handler can rely on the fact that
this method has completed. It is good practice to make any flags altered by
this method "volatile", in order to guarantee the visibility of the effects of
this method to any interruption handler.
public void close() throws Exception

Description copied from interface: RichFunction
Tear-down method for the user code. This method can be used for clean up work.
Specified by:
close in interface RichFunction
Overrides:
close in class AbstractRichFunction
Throws:
Exception - Implementations may forward exceptions, which are caught by the runtime. When the runtime catches an exception, it aborts the task and lets the fail-over logic decide whether to retry the task execution.

protected void commitOffsets(HashMap<KafkaTopicPartition,Long> toCommit) throws Exception

Utility method to commit offsets.

Specified by:
commitOffsets in class FlinkKafkaConsumerBase<T>
Parameters:
toCommit - the offsets to commit
Throws:
Exception
public static List<KafkaTopicPartitionLeader> getPartitionsForTopic(List<String> topics, Properties properties)

Send request to Kafka to get partitions for topic.

Parameters:
topics - The name of the topics.
properties - The properties for the Kafka Consumer that is used to query the partitions for the topic.

protected static void validateZooKeeperConfig(Properties props)

Validate the ZK configuration, checking for required parameters.

Parameters:
props - Properties to check

Copyright © 2014–2017 The Apache Software Foundation. All rights reserved.