public class FlinkKafkaConsumer09<T> extends FlinkKafkaConsumerBase<T>
The Flink Kafka Consumer participates in checkpointing and guarantees that no data is lost during a failure, and that the computation processes elements "exactly once". (Note: These guarantees naturally assume that Kafka itself does not lose any data.)
Please note that Flink snapshots the offsets internally as part of its distributed checkpoints. The offsets committed to Kafka / ZooKeeper are only to bring the outside view of progress in sync with Flink's view of the progress. That way, monitoring and other jobs can get a view of how far the Flink Kafka consumer has consumed a topic.
Please refer to Kafka's documentation for the available configuration properties: http://kafka.apache.org/documentation.html#newconsumerconfigs
NOTE: The implementation currently accesses partition metadata when the consumer is constructed. That means that the client that submits the program needs to be able to reach the Kafka brokers or ZooKeeper.
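As a minimal usage sketch, the consumer is configured through a Properties object and passed to one of the constructors below. The broker address, group id, and topic name here are illustrative assumptions, and the commented line only indicates where the consumer itself would be constructed inside a Flink job:

```java
import java.util.Properties;

public class ConsumerConfigExample {

    // Builds the properties a FlinkKafkaConsumer09 expects; the values are
    // placeholders, not a real broker address or consumer group.
    static Properties buildProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // illustrative broker
        props.setProperty("group.id", "flink-example-group");     // illustrative group
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildProps();
        // In a Flink job, the properties would then be passed to the consumer, e.g.:
        // env.addSource(new FlinkKafkaConsumer09<>("my-topic", new SimpleStringSchema(), props));
        System.out.println(props.getProperty("group.id"));
    }
}
```

As noted above, the client that submits such a program must be able to reach the Kafka brokers, since partition metadata is fetched at construction time.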
Nested classes/interfaces inherited from interface SourceFunction: SourceFunction.SourceContext<T>
Modifier and Type | Field and Description
---|---
static long | DEFAULT_POLL_TIMEOUT. From Kafka's Javadoc: The time, in milliseconds, spent waiting in poll if data is not available.
static String | KEY_DISABLE_METRICS. Boolean configuration key to disable metrics tracking.
static String | KEY_POLL_TIMEOUT. Configuration key to change the polling timeout.
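The polling timeout can be tuned through the consumer properties. A hedged sketch follows; the literal key string used here is an assumption made only so the example is self-contained, and real code should reference the KEY_POLL_TIMEOUT constant on the consumer class instead:

```java
import java.util.Properties;

public class PollTimeoutExample {
    // Stand-in for the consumer's KEY_POLL_TIMEOUT constant; the value
    // "flink.poll-timeout" is assumed for illustration. In real code, use
    // FlinkKafkaConsumer09.KEY_POLL_TIMEOUT rather than a hard-coded string.
    static final String KEY_POLL_TIMEOUT = "flink.poll-timeout";

    // Returns properties that wait at most timeoutMillis in poll()
    // when no data is available.
    static Properties withPollTimeout(long timeoutMillis) {
        Properties props = new Properties();
        props.setProperty(KEY_POLL_TIMEOUT, Long.toString(timeoutMillis));
        return props;
    }

    public static void main(String[] args) {
        System.out.println(withPollTimeout(50).getProperty(KEY_POLL_TIMEOUT));
    }
}
```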
Fields inherited from class FlinkKafkaConsumerBase: deserializer, MAX_NUM_PENDING_CHECKPOINTS, offsetsState, pendingCheckpoints, restoreToOffset, running
Constructor and Description
---
FlinkKafkaConsumer09(List<String> topics, DeserializationSchema<T> deserializer, Properties props)
Creates a new Kafka streaming source consumer for Kafka 0.9.x. This constructor allows passing multiple topics to the consumer.

FlinkKafkaConsumer09(List<String> topics, KeyedDeserializationSchema<T> deserializer, Properties props)
Creates a new Kafka streaming source consumer for Kafka 0.9.x. This constructor allows passing multiple topics and a key/value deserialization schema.

FlinkKafkaConsumer09(String topic, DeserializationSchema<T> valueDeserializer, Properties props)
Creates a new Kafka streaming source consumer for Kafka 0.9.x.

FlinkKafkaConsumer09(String topic, KeyedDeserializationSchema<T> deserializer, Properties props)
Creates a new Kafka streaming source consumer for Kafka 0.9.x. This constructor allows passing a KeyedDeserializationSchema for reading key/value pairs, offsets, and topic names from Kafka.
Modifier and Type | Method and Description
---|---
void | cancel(). Cancels the source.
void | close(). Tear-down method for the user code.
protected void | commitOffsets(HashMap<KafkaTopicPartition,Long> checkpointOffsets)
static Map<org.apache.kafka.common.TopicPartition,org.apache.kafka.clients.consumer.OffsetAndMetadata> | convertToCommitMap(HashMap<KafkaTopicPartition,Long> checkpointOffsets)
static List<KafkaTopicPartition> | convertToFlinkKafkaTopicPartition(List<org.apache.kafka.common.PartitionInfo> partitions). Converts a list of Kafka PartitionInfos to Flink's KafkaTopicPartition (which are serializable).
static List<org.apache.kafka.common.TopicPartition> | convertToKafkaTopicPartition(List<KafkaTopicPartition> partitions)
void | open(Configuration parameters). Initialization method for the function.
void | run(SourceFunction.SourceContext<T> sourceContext). Starts the source.
protected static void | setDeserializer(Properties props)
Methods inherited from class FlinkKafkaConsumerBase: assignPartitions, getProducedType, logPartitionInfo, notifyCheckpointComplete, restoreState, snapshotState
Methods inherited from class AbstractRichFunction: getIterationRuntimeContext, getRuntimeContext, setRuntimeContext
public static final String KEY_POLL_TIMEOUT
public static final String KEY_DISABLE_METRICS
public static final long DEFAULT_POLL_TIMEOUT
public FlinkKafkaConsumer09(String topic, DeserializationSchema<T> valueDeserializer, Properties props)
Creates a new Kafka streaming source consumer for Kafka 0.9.x.
Parameters:
topic - The name of the topic that should be consumed.
valueDeserializer - The de-/serializer used to convert between Kafka's byte messages and Flink's objects.
props - The properties used to configure the Kafka consumer client, and the ZooKeeper client.

public FlinkKafkaConsumer09(String topic, KeyedDeserializationSchema<T> deserializer, Properties props)
Creates a new Kafka streaming source consumer for Kafka 0.9.x. This constructor allows passing a KeyedDeserializationSchema for reading key/value pairs, offsets, and topic names from Kafka.
Parameters:
topic - The name of the topic that should be consumed.
deserializer - The keyed de-/serializer used to convert between Kafka's byte messages and Flink's objects.
props - The properties used to configure the Kafka consumer client, and the ZooKeeper client.

public FlinkKafkaConsumer09(List<String> topics, DeserializationSchema<T> deserializer, Properties props)
Creates a new Kafka streaming source consumer for Kafka 0.9.x. This constructor allows passing multiple topics to the consumer.
Parameters:
topics - The Kafka topics to read from.
deserializer - The de-/serializer used to convert between Kafka's byte messages and Flink's objects.
props - The properties that are used to configure both the fetcher and the offset handler.

public FlinkKafkaConsumer09(List<String> topics, KeyedDeserializationSchema<T> deserializer, Properties props)
Creates a new Kafka streaming source consumer for Kafka 0.9.x. This constructor allows passing multiple topics and a key/value deserialization schema.
Parameters:
topics - The Kafka topics to read from.
deserializer - The keyed de-/serializer used to convert between Kafka's byte messages and Flink's objects.
props - The properties that are used to configure both the fetcher and the offset handler.

public static List<KafkaTopicPartition> convertToFlinkKafkaTopicPartition(List<org.apache.kafka.common.PartitionInfo> partitions)
Converts a list of Kafka PartitionInfos to Flink's KafkaTopicPartition (which are serializable).
Parameters:
partitions - A list of Kafka PartitionInfos.

public static List<org.apache.kafka.common.TopicPartition> convertToKafkaTopicPartition(List<KafkaTopicPartition> partitions)
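The conversion helpers above essentially copy the topic name and partition id between representations. A simplified, self-contained sketch of that shape follows; PartitionInfoStub and KafkaTopicPartitionStub are illustrative stand-ins, not the real Kafka or Flink types:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for Kafka's PartitionInfo, reduced to the two fields
// the conversion actually reads: topic name and partition id.
class PartitionInfoStub {
    final String topic;
    final int partition;
    PartitionInfoStub(String topic, int partition) {
        this.topic = topic;
        this.partition = partition;
    }
}

// Stand-in for Flink's serializable KafkaTopicPartition.
class KafkaTopicPartitionStub {
    final String topic;
    final int partition;
    KafkaTopicPartitionStub(String topic, int partition) {
        this.topic = topic;
        this.partition = partition;
    }
}

public class ConversionSketch {
    // Mirrors the shape of convertToFlinkKafkaTopicPartition: copy each
    // (topic, partition) pair into the serializable representation.
    static List<KafkaTopicPartitionStub> convert(List<PartitionInfoStub> partitions) {
        List<KafkaTopicPartitionStub> result = new ArrayList<>(partitions.size());
        for (PartitionInfoStub p : partitions) {
            result.add(new KafkaTopicPartitionStub(p.topic, p.partition));
        }
        return result;
    }
}
```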
public void open(Configuration parameters) throws Exception
Description copied from interface: RichFunction
Initialization method for the function.
The configuration object passed to the function can be used for configuration and initialization. The configuration contains all parameters that were configured on the function in the program composition.
public class MyFilter extends RichFilterFunction<String> {
    private String searchString;

    public void open(Configuration parameters) {
        this.searchString = parameters.getString("foo", null);
    }

    public boolean filter(String value) {
        return value.equals(searchString);
    }
}
By default, this method does nothing.
Specified by:
open in interface RichFunction
Overrides:
open in class AbstractRichFunction
Parameters:
parameters - The configuration containing the parameters attached to the contract.
Throws:
Exception - Implementations may forward exceptions, which are caught by the runtime. When the runtime catches an exception, it aborts the task and lets the fail-over logic decide whether to retry the task execution.
See Also:
Configuration
public void run(SourceFunction.SourceContext<T> sourceContext) throws Exception
Description copied from interface: SourceFunction
Starts the source. Implementations can use the SourceFunction.SourceContext to emit elements.
Sources that implement Checkpointed must lock on the checkpoint lock (using a synchronized block) before updating internal state and emitting elements, to make both an atomic operation:
public class ExampleSource<T> implements SourceFunction<T>, Checkpointed<Long> {
private long count = 0L;
private volatile boolean isRunning = true;
public void run(SourceContext<T> ctx) {
while (isRunning && count < 1000) {
synchronized (ctx.getCheckpointLock()) {
ctx.collect(count);
count++;
}
}
}
public void cancel() {
isRunning = false;
}
public Long snapshotState(long checkpointId, long checkpointTimestamp) { return count; }
public void restoreState(Long state) { this.count = state; }
}
Parameters:
sourceContext - The context to emit elements to and for accessing locks.
Throws:
Exception
public void cancel()
Description copied from interface: SourceFunction
Cancels the source. Most sources will have a while loop inside the SourceFunction.run(SourceContext) method. The implementation needs to ensure that the source will break out of that loop after this method is called.
A typical pattern is to have a "volatile boolean isRunning" flag that is set to false in this method. That flag is checked in the loop condition.
When a source is canceled, the executing thread will also be interrupted
(via Thread.interrupt()
). The interruption happens strictly after this
method has been called, so any interruption handler can rely on the fact that
this method has completed. It is good practice to make any flags altered by
this method "volatile", in order to guarantee the visibility of the effects of
this method to any interruption handler.
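The flag-plus-interrupt pattern described above can be sketched in plain Java. This is an illustrative, stand-alone runnable, not Flink's actual source implementation; the interrupt() call stands in for what the runtime does after cancel() completes:

```java
public class CancellableLoop implements Runnable {
    // volatile guarantees the loop thread sees the flag change made by cancel()
    private volatile boolean isRunning = true;
    private long count = 0L;

    @Override
    public void run() {
        // The flag is checked in the loop condition, as described above
        while (isRunning) {
            count++;
        }
    }

    public void cancel() {
        isRunning = false; // breaks the loop in run()
    }

    public static void main(String[] args) throws InterruptedException {
        CancellableLoop loop = new CancellableLoop();
        Thread worker = new Thread(loop);
        worker.start();
        Thread.sleep(10);     // let the loop spin briefly
        loop.cancel();        // flip the volatile flag
        worker.interrupt();   // the runtime also interrupts the source thread after cancel()
        worker.join(1000);
        System.out.println(worker.isAlive() ? "still running" : "stopped");
    }
}
```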
public void close() throws Exception
Description copied from interface: RichFunction
Tear-down method for the user code.
This method can be used for clean up work.
Specified by:
close in interface RichFunction
Overrides:
close in class AbstractRichFunction
Throws:
Exception - Implementations may forward exceptions, which are caught by the runtime. When the runtime catches an exception, it aborts the task and lets the fail-over logic decide whether to retry the task execution.

protected void commitOffsets(HashMap<KafkaTopicPartition,Long> checkpointOffsets)
Specified by:
commitOffsets in class FlinkKafkaConsumerBase<T>
public static Map<org.apache.kafka.common.TopicPartition,org.apache.kafka.clients.consumer.OffsetAndMetadata> convertToCommitMap(HashMap<KafkaTopicPartition,Long> checkpointOffsets)
protected static void setDeserializer(Properties props)
Copyright © 2014–2017 The Apache Software Foundation. All rights reserved.