Modifier and Type | Method and Description |
---|---|
protected Map<KafkaTopicPartition,Long> | FlinkKafkaConsumer.fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition> partitions, long timestamp) Deprecated. |
protected abstract Map<KafkaTopicPartition,Long> | FlinkKafkaConsumerBase.fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition> partitions, long timestamp) |
Modifier and Type | Method and Description |
---|---|
protected AbstractFetcher<T,?> | FlinkKafkaConsumer.createFetcher(SourceFunction.SourceContext<T> sourceContext, Map<KafkaTopicPartition,Long> assignedPartitionsWithInitialOffsets, SerializedValue<WatermarkStrategy<T>> watermarkStrategy, StreamingRuntimeContext runtimeContext, OffsetCommitMode offsetCommitMode, MetricGroup consumerMetricGroup, boolean useMetrics) Deprecated. |
protected abstract AbstractFetcher<T,?> | FlinkKafkaConsumerBase.createFetcher(SourceFunction.SourceContext<T> sourceContext, Map<KafkaTopicPartition,Long> subscribedPartitionsToStartOffsets, SerializedValue<WatermarkStrategy<T>> watermarkStrategy, StreamingRuntimeContext runtimeContext, OffsetCommitMode offsetCommitMode, MetricGroup kafkaMetricGroup, boolean useMetrics) Creates the fetcher that connects to the Kafka brokers, pulls data, deserializes the data, and emits it into the data streams. |
protected Map<KafkaTopicPartition,Long> | FlinkKafkaConsumer.fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition> partitions, long timestamp) Deprecated. |
protected abstract Map<KafkaTopicPartition,Long> | FlinkKafkaConsumerBase.fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition> partitions, long timestamp) |
FlinkKafkaConsumerBase<T> | FlinkKafkaConsumerBase.setStartFromSpecificOffsets(Map<KafkaTopicPartition,Long> specificStartupOffsets) Configures the consumer to start reading partitions from specific offsets, set independently for each partition. |
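The map passed to setStartFromSpecificOffsets is keyed by (topic, partition) pairs, with the offset to start from as the value. A minimal, self-contained sketch of building such a map follows; the KafkaTopicPartition class here is a simplified stand-in for Flink's real internals class (same idea: a topic/partition pair with value-based equals/hashCode so it works as a map key), and the topic name and offsets are made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Simplified stand-in for org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition,
// shown only so this sketch compiles on its own.
final class KafkaTopicPartition {
    private final String topic;
    private final int partition;

    KafkaTopicPartition(String topic, int partition) {
        this.topic = topic;
        this.partition = partition;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof KafkaTopicPartition)) {
            return false;
        }
        KafkaTopicPartition other = (KafkaTopicPartition) o;
        return partition == other.partition && topic.equals(other.topic);
    }

    @Override
    public int hashCode() {
        return Objects.hash(topic, partition);
    }
}

public class SpecificOffsetsExample {
    public static Map<KafkaTopicPartition, Long> buildSpecificOffsets() {
        Map<KafkaTopicPartition, Long> offsets = new HashMap<>();
        offsets.put(new KafkaTopicPartition("events", 0), 23L); // partition 0: start at offset 23
        offsets.put(new KafkaTopicPartition("events", 1), 31L); // partition 1: start at offset 31
        offsets.put(new KafkaTopicPartition("events", 2), 43L); // partition 2: start at offset 43
        return offsets;
    }

    public static void main(String[] args) {
        Map<KafkaTopicPartition, Long> offsets = buildSpecificOffsets();
        // With a real consumer instance, this map would be passed as:
        //   consumer.setStartFromSpecificOffsets(offsets);
        System.out.println(offsets.size());
    }
}
```

Partitions of the subscribed topics that are missing from the map fall back to the consumer's default startup behavior.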
Modifier and Type | Method and Description |
---|---|
KafkaTopicPartition | KafkaTopicPartitionState.getKafkaTopicPartition() Gets Flink's descriptor for the Kafka partition. |
KafkaTopicPartition | KafkaTopicPartitionLeader.getTopicPartition() |
Modifier and Type | Method and Description |
---|---|
List<KafkaTopicPartition> | AbstractPartitionDiscoverer.discoverPartitions() Executes a partition discovery attempt for this subtask. |
static List<KafkaTopicPartition> | KafkaTopicPartition.dropLeaderData(List<KafkaTopicPartitionLeader> partitionInfos) |
protected List<KafkaTopicPartition> | KafkaPartitionDiscoverer.getAllPartitionsForTopics(List<String> topics) |
protected abstract List<KafkaTopicPartition> | AbstractPartitionDiscoverer.getAllPartitionsForTopics(List<String> topics) Fetches the list of all partitions for a given list of topics from Kafka. |
HashMap<KafkaTopicPartition,Long> | AbstractFetcher.snapshotCurrentState() Takes a snapshot of the partition offsets. |
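Conceptually, snapshotCurrentState copies the current offset of every tracked partition into a fresh HashMap, so the checkpoint holds a stable view even while the fetch loop keeps advancing offsets. The sketch below illustrates that copy-on-snapshot idea with a simplified stand-in for the partition state class (it is not Flink's KafkaTopicPartitionState, and keys are flattened to strings for brevity):

```java
import java.util.HashMap;
import java.util.List;

public class SnapshotSketch {
    // Simplified stand-in for a per-partition state object held by the fetcher.
    public static final class PartitionState {
        final String topicAndPartition;
        volatile long offset; // advanced by the fetch thread as records are emitted

        public PartitionState(String topicAndPartition, long offset) {
            this.topicAndPartition = topicAndPartition;
            this.offset = offset;
        }
    }

    // Copy every partition's current offset into a new map; later progress on the
    // live state must not mutate the snapshot the checkpoint has taken.
    public static HashMap<String, Long> snapshotCurrentState(List<PartitionState> states) {
        HashMap<String, Long> snapshot = new HashMap<>(states.size());
        for (PartitionState s : states) {
            snapshot.put(s.topicAndPartition, s.offset);
        }
        return snapshot;
    }
}
```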
Modifier and Type | Method and Description |
---|---|
static int | KafkaTopicPartitionAssigner.assign(KafkaTopicPartition partition, int numParallelSubtasks) Returns the index of the target subtask that a specific Kafka partition should be assigned to. |
int | KafkaTopicPartition.Comparator.compare(KafkaTopicPartition p1, KafkaTopicPartition p2) |
org.apache.kafka.common.TopicPartition | KafkaFetcher.createKafkaPartitionHandle(KafkaTopicPartition partition) |
protected abstract KPH | AbstractFetcher.createKafkaPartitionHandle(KafkaTopicPartition partition) Creates the Kafka-version-specific representation of the given topic partition. |
boolean | AbstractPartitionDiscoverer.setAndCheckDiscoveredPartition(KafkaTopicPartition partition) Sets a partition as discovered. |
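The assigner and the discoverer cooperate: assign(...) maps each partition deterministically to exactly one subtask, and setAndCheckDiscoveredPartition(...) lets a subtask accept a partition only the first time it is seen, and only if that subtask owns it. The sketch below shows that interplay; it is an illustrative re-implementation under simplified assumptions (string topic names instead of KafkaTopicPartition, and a topic-hash start index followed by round-robin, which mirrors the spirit of the assignment but is not guaranteed to be Flink's exact formula):

```java
import java.util.HashSet;
import java.util.Set;

public class AssignmentSketch {
    // Deterministic partition -> subtask mapping: round-robin over subtasks,
    // starting from an index derived from the topic name so that different
    // topics do not all pile onto subtask 0.
    public static int assign(String topic, int partition, int numParallelSubtasks) {
        int startIndex = ((topic.hashCode() * 31) & 0x7FFFFFFF) % numParallelSubtasks;
        return (startIndex + partition) % numParallelSubtasks;
    }

    private final Set<String> discovered = new HashSet<>();
    private final int subtaskIndex;
    private final int numParallelSubtasks;

    public AssignmentSketch(int subtaskIndex, int numParallelSubtasks) {
        this.subtaskIndex = subtaskIndex;
        this.numParallelSubtasks = numParallelSubtasks;
    }

    // Returns true only the first time this subtask sees a partition that it owns,
    // mirroring the contract of setAndCheckDiscoveredPartition(...).
    public boolean setAndCheckDiscoveredPartition(String topic, int partition) {
        String key = topic + "-" + partition;
        if (!discovered.add(key)) {
            return false; // already handled by an earlier discovery attempt
        }
        return assign(topic, partition, numParallelSubtasks) == subtaskIndex;
    }
}
```

Because the mapping is a pure function of topic, partition, and parallelism, every subtask can run discovery independently and still arrive at a consistent, non-overlapping ownership of partitions.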
Modifier and Type | Method and Description |
---|---|
void | AbstractFetcher.addDiscoveredPartitions(List<KafkaTopicPartition> newPartitions) Adds a list of newly discovered partitions to the fetcher for consuming. |
void | AbstractFetcher.commitInternalOffsetsToKafka(Map<KafkaTopicPartition,Long> offsets, KafkaCommitCallback commitCallback) Commits the given partition offsets to the Kafka brokers (or to ZooKeeper for older Kafka versions). |
protected void | KafkaFetcher.doCommitInternalOffsetsToKafka(Map<KafkaTopicPartition,Long> offsets, KafkaCommitCallback commitCallback) |
protected abstract void | AbstractFetcher.doCommitInternalOffsetsToKafka(Map<KafkaTopicPartition,Long> offsets, KafkaCommitCallback commitCallback) |
static String | KafkaTopicPartition.toString(List<KafkaTopicPartition> partitions) |
static String | KafkaTopicPartition.toString(Map<KafkaTopicPartition,Long> map) |
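The offset commit is asynchronous: the caller hands over a map of partition-to-offset and is notified of the outcome later through the callback. The sketch below illustrates that callback pattern; the CommitCallback interface and the in-memory "broker" are simplified stand-ins invented for this example, not Flink's KafkaCommitCallback or the real Kafka client:

```java
import java.util.HashMap;
import java.util.Map;

public class CommitSketch {
    // Simplified stand-in for a success/failure commit callback.
    public interface CommitCallback {
        void onSuccess();
        void onException(Throwable cause);
    }

    // Pretend "broker": remembers the last committed offset per partition key.
    private final Map<String, Long> committed = new HashMap<>();

    public void commitOffsets(Map<String, Long> offsets, CommitCallback callback) {
        try {
            committed.putAll(offsets); // a real client would send a commit request here
            callback.onSuccess();
        } catch (RuntimeException e) {
            callback.onException(e);
        }
    }

    public Long committedOffset(String partitionKey) {
        return committed.get(partitionKey);
    }

    public static void main(String[] args) {
        CommitSketch sketch = new CommitSketch();
        Map<String, Long> offsets = new HashMap<>();
        offsets.put("events-0", 100L);
        sketch.commitOffsets(offsets, new CommitCallback() {
            public void onSuccess() { System.out.println("committed"); }
            public void onException(Throwable cause) { cause.printStackTrace(); }
        });
    }
}
```

In the real consumer, these commits only report progress back to Kafka (for monitoring and group tooling); Flink's exactly-once guarantees come from the checkpointed offsets, not from the committed ones.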
Constructor and Description |
---|
KafkaTopicPartitionLeader(KafkaTopicPartition topicPartition, org.apache.kafka.common.Node leader) |
KafkaTopicPartitionState(KafkaTopicPartition partition, KPH kafkaPartitionHandle) |
KafkaTopicPartitionStateWithWatermarkGenerator(KafkaTopicPartition partition, KPH kafkaPartitionHandle, TimestampAssigner<T> timestampAssigner, WatermarkGenerator<T> watermarkGenerator, WatermarkOutput immediateOutput, WatermarkOutput deferredOutput) |
Constructor and Description |
---|
AbstractFetcher(SourceFunction.SourceContext<T> sourceContext, Map<KafkaTopicPartition,Long> seedPartitionsWithInitialOffsets, SerializedValue<WatermarkStrategy<T>> watermarkStrategy, ProcessingTimeService processingTimeProvider, long autoWatermarkInterval, ClassLoader userCodeClassLoader, MetricGroup consumerMetricGroup, boolean useMetrics) |
KafkaFetcher(SourceFunction.SourceContext<T> sourceContext, Map<KafkaTopicPartition,Long> assignedPartitionsWithInitialOffsets, SerializedValue<WatermarkStrategy<T>> watermarkStrategy, ProcessingTimeService processingTimeProvider, long autoWatermarkInterval, ClassLoader userCodeClassLoader, String taskNameWithSubtasks, KafkaDeserializationSchema<T> deserializer, Properties kafkaProperties, long pollTimeout, MetricGroup subtaskMetricGroup, MetricGroup consumerMetricGroup, boolean useMetrics) |
KafkaShuffleFetcher(SourceFunction.SourceContext<T> sourceContext, Map<KafkaTopicPartition,Long> assignedPartitionsWithInitialOffsets, SerializedValue<WatermarkStrategy<T>> watermarkStrategy, ProcessingTimeService processingTimeProvider, long autoWatermarkInterval, ClassLoader userCodeClassLoader, String taskNameWithSubtasks, KafkaDeserializationSchema<T> deserializer, Properties kafkaProperties, long pollTimeout, MetricGroup subtaskMetricGroup, MetricGroup consumerMetricGroup, boolean useMetrics, TypeSerializer<T> typeSerializer, int producerParallelism) |
Modifier and Type | Method and Description |
---|---|
protected AbstractFetcher<T,?> | FlinkKafkaShuffleConsumer.createFetcher(SourceFunction.SourceContext<T> sourceContext, Map<KafkaTopicPartition,Long> assignedPartitionsWithInitialOffsets, SerializedValue<WatermarkStrategy<T>> watermarkStrategy, StreamingRuntimeContext runtimeContext, OffsetCommitMode offsetCommitMode, MetricGroup consumerMetricGroup, boolean useMetrics) |
Modifier and Type | Field and Description |
---|---|
protected Map<KafkaTopicPartition,Long> | KafkaDynamicSource.specificStartupOffsets Specific startup offsets; only relevant when the startup mode is StartupMode.SPECIFIC_OFFSETS. |
Modifier and Type | Method and Description |
---|---|
protected KafkaDynamicSource | KafkaDynamicTableFactory.createKafkaTableSource(DataType physicalDataType, DecodingFormat<DeserializationSchema<RowData>> keyDecodingFormat, DecodingFormat<DeserializationSchema<RowData>> valueDecodingFormat, int[] keyProjection, int[] valueProjection, String keyPrefix, List<String> topics, Pattern topicPattern, Properties properties, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets, long startupTimestampMillis, String tableIdentifier) |
Constructor and Description |
---|
KafkaDynamicSource(DataType physicalDataType, DecodingFormat<DeserializationSchema<RowData>> keyDecodingFormat, DecodingFormat<DeserializationSchema<RowData>> valueDecodingFormat, int[] keyProjection, int[] valueProjection, String keyPrefix, List<String> topics, Pattern topicPattern, Properties properties, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets, long startupTimestampMillis, boolean upsertMode, String tableIdentifier) |
Copyright © 2014–2023 The Apache Software Foundation. All rights reserved.