Modifier and Type | Method and Description
---|---
EncodingFormat<SerializationSchema<RowData>> | AsyncDynamicTableSinkFactory.AsyncDynamicSinkContext.getEncodingFormat()

Modifier and Type | Method and Description
---|---
DataGeneratorSource<RowData> | DataGenTableSource.createSource()

Modifier and Type | Method and Description
---|---
RowData | RowDataGenerator.next()

Modifier and Type | Class and Description
---|---
class | EnrichedRowData

Modifier and Type | Method and Description
---|---
RowData | EnrichedRowData.getRow(int pos, int numFields)
RowData | RowDataPartitionComputer.projectColumnsToWrite(RowData in)

Modifier and Type | Method and Description
---|---
BulkWriter<RowData> | FileSystemTableSink.ProjectionBulkFactory.create(FSDataOutputStream out)
TypeInformation<RowData> | DeserializationSchemaAdapter.getProducedType()
RecordAndPosition<RowData> | ColumnarRowIterator.next()
Modifier and Type | Method and Description
---|---
void | SerializationSchemaAdapter.encode(RowData element, OutputStream stream)
static EnrichedRowData | EnrichedRowData.from(RowData fixedRow, List<String> producedRowFields, List<String> mutableRowFields, List<String> fixedRowFields). Creates a new EnrichedRowData with the provided fixedRow as the immutable static row, and uses the producedRowFields, fixedRowFields and mutableRowFields arguments to compute the index mapping.
LinkedHashMap<String,String> | RowDataPartitionComputer.generatePartValues(RowData in)
String | FileSystemTableSink.TableBucketAssigner.getBucketId(RowData element, BucketAssigner.Context context)
RowData | RowDataPartitionComputer.projectColumnsToWrite(RowData in)
EnrichedRowData | EnrichedRowData.replaceMutableRow(RowData mutableRow). Replaces the mutable RowData backing this EnrichedRowData.
boolean | FileSystemTableSink.TableRollingPolicy.shouldRollOnEvent(PartFileInfo<String> partFileState, RowData element)

Constructor and Description
---
EnrichedRowData(RowData fixedRow, int[] indexMapping)
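The EnrichedRowData.from and replaceMutableRow entries above can be illustrated with a short sketch. This is an illustration only, not part of the listing: the field lists are hypothetical, and fixedRow and record are assumed to come from the surrounding code; the fragment is not compilable on its own without Flink on the classpath.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical field lists: the produced row exposes "a" and "b" from the
// per-record (mutable) row and "p" from the fixed (e.g. partition) row.
List<String> producedRowFields = Arrays.asList("a", "b", "p");
List<String> mutableRowFields = Arrays.asList("a", "b");
List<String> fixedRowFields = Arrays.asList("p");

EnrichedRowData enriched =
    EnrichedRowData.from(fixedRow, producedRowFields, mutableRowFields, fixedRowFields);

// For each incoming record, swap in the mutable part; reads on `enriched`
// are routed to the fixed or mutable row via the computed index mapping.
enriched.replaceMutableRow(record);
```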
Modifier and Type | Method and Description
---|---
BulkDecodingFormat<RowData> | BulkReaderFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions). Creates a BulkDecodingFormat from the given context and format options.

Modifier and Type | Method and Description
---|---
KinesisFirehoseDynamicSink.KinesisFirehoseDynamicSinkBuilder | KinesisFirehoseDynamicSink.KinesisFirehoseDynamicSinkBuilder.setEncodingFormat(EncodingFormat<SerializationSchema<RowData>> encodingFormat)

Constructor and Description
---
KinesisFirehoseDynamicSink(Integer maxBatchSize, Integer maxInFlightRequests, Integer maxBufferedRequests, Long maxBufferSizeInBytes, Long maxTimeInBufferMS, Boolean failOnError, DataType consumedDataType, String deliveryStream, Properties firehoseClientProperties, EncodingFormat<SerializationSchema<RowData>> encodingFormat)

Modifier and Type | Method and Description
---|---
org.apache.hadoop.hbase.client.Mutation | RowDataToMutationConverter.convertToMutation(RowData record)

Modifier and Type | Method and Description
---|---
protected abstract InputFormat<RowData,?> | AbstractHBaseDynamicTableSource.getInputFormat()
Collection<RowData> | HBaseRowDataLookupFunction.lookup(RowData keyRow). The entry point of the lookup function.

Modifier and Type | Method and Description
---|---
Collection<RowData> | HBaseRowDataLookupFunction.lookup(RowData keyRow). The entry point of the lookup function.

Modifier and Type | Method and Description
---|---
RowData | HBaseSerde.convertToNewRow(org.apache.hadoop.hbase.client.Result result). Converts an HBase Result into a new RowData instance.
RowData | HBaseSerde.convertToReusedRow(org.apache.hadoop.hbase.client.Result result). Converts an HBase Result into a reused RowData instance.
RowData | HBaseSerde.convertToRow(org.apache.hadoop.hbase.client.Result result). Deprecated. Use HBaseSerde.convertToReusedRow(Result) instead.
Modifier and Type | Method and Description
---|---
org.apache.hadoop.hbase.client.Delete | HBaseSerde.createDeleteMutation(RowData row). Returns an instance of Delete that removes a record from the HBase table.
org.apache.hadoop.hbase.client.Put | HBaseSerde.createPutMutation(RowData row). Returns an instance of Put that writes a record to the HBase table.

Modifier and Type | Method and Description
---|---
protected RowData | HBaseRowDataInputFormat.mapResultToOutType(org.apache.hadoop.hbase.client.Result res)

Modifier and Type | Method and Description
---|---
InputFormat<RowData,?> | HBaseDynamicTableSource.getInputFormat()

Modifier and Type | Method and Description
---|---
protected RowData | HBaseRowDataInputFormat.mapResultToOutType(org.apache.hadoop.hbase.client.Result res)

Modifier and Type | Method and Description
---|---
CompletableFuture<Collection<RowData>> | HBaseRowDataAsyncLookupFunction.asyncLookup(RowData keyRow). The entry point of the lookup function.
protected InputFormat<RowData,?> | HBaseDynamicTableSource.getInputFormat()

Modifier and Type | Method and Description
---|---
CompletableFuture<Collection<RowData>> | HBaseRowDataAsyncLookupFunction.asyncLookup(RowData keyRow). The entry point of the lookup function.

Modifier and Type | Method and Description
---|---
RowData | AbstractJdbcRowConverter.toInternal(ResultSet resultSet)
RowData | JdbcRowConverter.toInternal(ResultSet resultSet)

Modifier and Type | Method and Description
---|---
void | AbstractJdbcRowConverter.JdbcSerializationConverter.serialize(RowData rowData, int index, FieldNamedPreparedStatement statement)
FieldNamedPreparedStatement | AbstractJdbcRowConverter.toExternal(RowData rowData, FieldNamedPreparedStatement statement)
FieldNamedPreparedStatement | JdbcRowConverter.toExternal(RowData rowData, FieldNamedPreparedStatement statement). Converts data from Flink's internal RowData into JDBC objects.
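The converter methods listed above pair up in practice: toExternal fills a prepared statement from internal RowData when writing, and toInternal maps a ResultSet row back to RowData when reading. A hedged fragment follows; the converter, rowData, statement, and resultSet variables are assumed to come from the surrounding JDBC sink/source setup and are not part of this listing.

```java
// Writing: convert Flink-internal RowData into JDBC statement parameters.
converter.toExternal(rowData, statement);
statement.addBatch();

// Reading: convert each ResultSet row back into Flink-internal RowData.
while (resultSet.next()) {
    RowData row = converter.toInternal(resultSet);
    // ... emit `row` downstream
}
```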
Modifier and Type | Method and Description
---|---
void | TableBufferedStatementExecutor.addToBatch(RowData record)
void | TableSimpleStatementExecutor.addToBatch(RowData record)
void | TableInsertOrUpdateStatementExecutor.addToBatch(RowData record)
void | TableBufferReducedStatementExecutor.addToBatch(RowData record)

Modifier and Type | Method and Description
---|---
RowData | JdbcRowDataInputFormat.nextRecord(RowData reuse). Stores the next resultSet row in a tuple.

Modifier and Type | Method and Description
---|---
JdbcOutputFormat<RowData,?,?> | JdbcOutputFormatBuilder.build()
TypeInformation<RowData> | JdbcRowDataInputFormat.getProducedType()
Collection<RowData> | JdbcRowDataLookupFunction.lookup(RowData keyRow). A lookup method called by the Flink framework at runtime.

Modifier and Type | Method and Description
---|---
Collection<RowData> | JdbcRowDataLookupFunction.lookup(RowData keyRow). A lookup method called by the Flink framework at runtime.
RowData | JdbcRowDataInputFormat.nextRecord(RowData reuse). Stores the next resultSet row in a tuple.

Modifier and Type | Method and Description
---|---
JdbcOutputFormatBuilder | JdbcOutputFormatBuilder.setRowDataTypeInfo(TypeInformation<RowData> rowDataTypeInfo)
JdbcRowDataInputFormat.Builder | JdbcRowDataInputFormat.Builder.setRowDataTypeInfo(TypeInformation<RowData> rowDataTypeInfo)

Modifier and Type | Method and Description
---|---
static PartitionKeyGenerator<RowData> | KinesisPartitionKeyGeneratorFactory.getKinesisPartitioner(ReadableConfig tableOptions, RowType physicalType, List<String> partitionKeys, ClassLoader classLoader). Constructs the Kinesis partitioner for a targetTable based on the currently set tableOptions.
Modifier and Type | Method and Description
---|---
String | RowDataFieldsKinesisPartitionKeyGenerator.apply(RowData element)

Modifier and Type | Method and Description
---|---
KinesisDynamicSink.KinesisDynamicTableSinkBuilder | KinesisDynamicSink.KinesisDynamicTableSinkBuilder.setEncodingFormat(EncodingFormat<SerializationSchema<RowData>> encodingFormat)
KinesisDynamicSink.KinesisDynamicTableSinkBuilder | KinesisDynamicSink.KinesisDynamicTableSinkBuilder.setPartitioner(PartitionKeyGenerator<RowData> partitioner)

Constructor and Description
---
KinesisDynamicSink(Integer maxBatchSize, Integer maxInFlightRequests, Integer maxBufferedRequests, Long maxBufferSizeInBytes, Long maxTimeInBufferMS, Boolean failOnError, DataType consumedDataType, String stream, Properties kinesisClientProperties, EncodingFormat<SerializationSchema<RowData>> encodingFormat, PartitionKeyGenerator<RowData> partitioner)
Modifier and Type | Method and Description
---|---
ExternalSystemDataReader<RowData> | TableSinkExternalContext.createSinkRowDataReader(TestingSinkSettings sinkOptions, DataType dataType). Creates a reader for consuming RowData written to the external system.

Modifier and Type | Method and Description
---|---
ExternalSystemSplitDataWriter<RowData> | TableSourceExternalContext.createSplitRowDataWriter(TestingSourceSettings sourceOptions, DataType dataType). Creates a new split in the external system and returns a data writer for writing RowData corresponding to the new split.

Modifier and Type | Method and Description
---|---
HiveSource<RowData> | HiveSourceBuilder.buildWithDefaultBulkFormat(). Builds a HiveSource with the default built-in BulkFormat that returns records of type RowData.
protected BulkFormat<RowData,HiveSourceSplit> | HiveSourceBuilder.createDefaultBulkFormat()
protected DataStream<RowData> | HiveTableSource.getDataStream(ProviderContext providerContext, StreamExecutionEnvironment execEnv)
PartitionReader<P,RowData> | FileSystemLookupFunction.getPartitionReader()
TypeInformation<RowData> | FileSystemLookupFunction.getResultType()

Modifier and Type | Method and Description
---|---
LinkedHashMap<String,String> | HiveRowDataPartitionComputer.generatePartValues(RowData in)

Constructor and Description
---
FileSystemLookupFunction(PartitionFetcher<P> partitionFetcher, PartitionFetcher.Context<P> fetcherContext, PartitionReader<P,RowData> partitionReader, RowType rowType, int[] lookupKeys, java.time.Duration reloadInterval)
Modifier and Type | Method and Description
---|---
RowData | HiveTableInputFormat.nextRecord(RowData reuse)
RowData | HiveTableFileInputFormat.nextRecord(RowData reuse)
RowData | SplitReader.nextRecord(RowData reuse). Reads the next record from the input.
RowData | HiveVectorizedParquetSplitReader.nextRecord(RowData reuse)
RowData | HiveVectorizedOrcSplitReader.nextRecord(RowData reuse)
RowData | HiveMapredSplitReader.nextRecord(RowData reuse)
RowData | HiveInputFormatPartitionReader.read(RowData reuse)

Modifier and Type | Method and Description
---|---
CompactReader<RowData> | HiveCompactReaderFactory.create(CompactContext context)
BulkFormat.Reader<RowData> | HiveInputFormat.createReader(Configuration config, HiveSourceSplit split)
TypeInformation<RowData> | HiveInputFormat.getProducedType()
BulkFormat.Reader<RowData> | HiveInputFormat.restoreReader(Configuration config, HiveSourceSplit split)

Modifier and Type | Method and Description
---|---
RowData | HiveTableInputFormat.nextRecord(RowData reuse)
RowData | HiveTableFileInputFormat.nextRecord(RowData reuse)
RowData | SplitReader.nextRecord(RowData reuse). Reads the next record from the input.
RowData | HiveVectorizedParquetSplitReader.nextRecord(RowData reuse)
RowData | HiveVectorizedOrcSplitReader.nextRecord(RowData reuse)
RowData | HiveMapredSplitReader.nextRecord(RowData reuse)
RowData | HiveInputFormatPartitionReader.read(RowData reuse)
default void | SplitReader.seekToRow(long rowCount, RowData reuse). Seeks to a particular row number.
void | HiveVectorizedParquetSplitReader.seekToRow(long rowCount, RowData reuse)
void | HiveVectorizedOrcSplitReader.seekToRow(long rowCount, RowData reuse)

Modifier and Type | Method and Description
---|---
HadoopPathBasedBulkWriter<RowData> | HiveBulkWriterFactory.create(org.apache.hadoop.fs.Path targetPath, org.apache.hadoop.fs.Path inProgressPath)
java.util.function.Function<RowData,org.apache.hadoop.io.Writable> | HiveWriterFactory.createRowDataConverter()
Modifier and Type | Method and Description
---|---
RowData | AvroRowDataDeserializationSchema.deserialize(byte[] message)

Modifier and Type | Method and Description
---|---
BulkDecodingFormat<RowData> | AvroFileFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
DecodingFormat<DeserializationSchema<RowData>> | AvroFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
EncodingFormat<BulkWriter.Factory<RowData>> | AvroFileFormatFactory.createEncodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
EncodingFormat<SerializationSchema<RowData>> | AvroFormatFactory.createEncodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
TypeInformation<RowData> | AvroRowDataDeserializationSchema.getProducedType()

Modifier and Type | Method and Description
---|---
boolean | AvroRowDataDeserializationSchema.isEndOfStream(RowData nextElement)
byte[] | AvroRowDataSerializationSchema.serialize(RowData row)

Constructor and Description
---
AvroRowDataDeserializationSchema(DeserializationSchema<org.apache.avro.generic.GenericRecord> nestedSchema, AvroToRowDataConverters.AvroToRowDataConverter runtimeConverter, TypeInformation<RowData> typeInfo). Creates an Avro deserialization schema for the given logical type.
AvroRowDataDeserializationSchema(RowType rowType, TypeInformation<RowData> typeInfo). Creates an Avro deserialization schema for the given logical type.

Modifier and Type | Method and Description
---|---
DecodingFormat<DeserializationSchema<RowData>> | RegistryAvroFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
EncodingFormat<SerializationSchema<RowData>> | RegistryAvroFormatFactory.createEncodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)

Modifier and Type | Method and Description
---|---
RowData | DebeziumAvroDeserializationSchema.deserialize(byte[] message)

Modifier and Type | Method and Description
---|---
DecodingFormat<DeserializationSchema<RowData>> | DebeziumAvroFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
EncodingFormat<SerializationSchema<RowData>> | DebeziumAvroFormatFactory.createEncodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
TypeInformation<RowData> | DebeziumAvroDeserializationSchema.getProducedType()

Modifier and Type | Method and Description
---|---
boolean | DebeziumAvroDeserializationSchema.isEndOfStream(RowData nextElement)
byte[] | DebeziumAvroSerializationSchema.serialize(RowData rowData)

Modifier and Type | Method and Description
---|---
void | DebeziumAvroDeserializationSchema.deserialize(byte[] message, Collector<RowData> out)

Constructor and Description
---
DebeziumAvroDeserializationSchema(RowType rowType, TypeInformation<RowData> producedTypeInfo, String schemaRegistryUrl, Map<String,?> registryConfigs)
Modifier and Type | Method and Description
---|---
RowData | CsvRowDataDeserializationSchema.deserialize(byte[] message)

Modifier and Type | Method and Description
---|---
static BulkWriter.Factory<RowData> | PythonCsvUtils.createCsvBulkWriterFactory(org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema schema, DataType physicalDataType). Util for creating a BulkWriter.Factory that wraps CsvBulkWriter.forSchema(CsvMapper, CsvSchema, Converter<T,R,C>, C, FSDataOutputStream).
BulkDecodingFormat<RowData> | CsvFileFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
DecodingFormat<DeserializationSchema<RowData>> | CsvFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
EncodingFormat<BulkWriter.Factory<RowData>> | CsvFileFormatFactory.createEncodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
EncodingFormat<SerializationSchema<RowData>> | CsvFormatFactory.createEncodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
BulkFormat<RowData,FileSourceSplit> | CsvFileFormatFactory.CsvBulkDecodingFormat.createRuntimeDecoder(DynamicTableSource.Context context, DataType physicalDataType, int[][] projections)
TypeInformation<RowData> | CsvRowDataDeserializationSchema.getProducedType()

Modifier and Type | Method and Description
---|---
boolean | CsvRowDataDeserializationSchema.isEndOfStream(RowData nextElement)
byte[] | CsvRowDataSerializationSchema.serialize(RowData row)

Constructor and Description
---
Builder(RowType rowReadType, RowType rowResultType, TypeInformation<RowData> resultTypeInfo). Creates a CSV deserialization schema for the given TypeInformation with optional parameters.
Builder(RowType rowType, TypeInformation<RowData> resultTypeInfo). Creates a CSV deserialization schema for the given TypeInformation with optional parameters.
Modifier and Type | Method and Description
---|---
RowData | JsonRowDataDeserializationSchema.convertToRowData(org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode message)
RowData | JsonRowDataDeserializationSchema.deserialize(byte[] message)

Modifier and Type | Method and Description
---|---
DecodingFormat<DeserializationSchema<RowData>> | JsonFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
EncodingFormat<SerializationSchema<RowData>> | JsonFormatFactory.createEncodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
TypeInformation<RowData> | JsonRowDataDeserializationSchema.getProducedType()

Modifier and Type | Method and Description
---|---
boolean | JsonRowDataDeserializationSchema.isEndOfStream(RowData nextElement)
byte[] | JsonRowDataSerializationSchema.serialize(RowData row)

Constructor and Description
---
JsonRowDataDeserializationSchema(RowType rowType, TypeInformation<RowData> resultTypeInfo, boolean failOnMissingField, boolean ignoreParseErrors, TimestampFormat timestampFormat)
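As a usage sketch for the constructor above: the way rowType, typeInfo, and jsonBytes are obtained here is an assumption about a typical setup, not part of this listing, and the fragment requires Flink on the classpath.

```java
RowType rowType = (RowType) physicalDataType.getLogicalType();
TypeInformation<RowData> typeInfo = InternalTypeInfo.of(rowType);

JsonRowDataDeserializationSchema schema =
    new JsonRowDataDeserializationSchema(
        rowType,
        typeInfo,
        false,                      // failOnMissingField
        true,                       // ignoreParseErrors
        TimestampFormat.ISO_8601);  // timestampFormat

// Deserialize one JSON message into Flink's internal row representation.
RowData row = schema.deserialize(jsonBytes);
```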
Modifier and Type | Method and Description
---|---
RowData | CanalJsonDeserializationSchema.deserialize(byte[] message)

Modifier and Type | Method and Description
---|---
DecodingFormat<DeserializationSchema<RowData>> | CanalJsonFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
EncodingFormat<SerializationSchema<RowData>> | CanalJsonFormatFactory.createEncodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
DeserializationSchema<RowData> | CanalJsonDecodingFormat.createRuntimeDecoder(DynamicTableSource.Context context, DataType physicalDataType, int[][] projections)
TypeInformation<RowData> | CanalJsonDeserializationSchema.getProducedType()

Modifier and Type | Method and Description
---|---
boolean | CanalJsonDeserializationSchema.isEndOfStream(RowData nextElement)
byte[] | CanalJsonSerializationSchema.serialize(RowData row)

Modifier and Type | Method and Description
---|---
static CanalJsonDeserializationSchema.Builder | CanalJsonDeserializationSchema.builder(DataType physicalDataType, List<org.apache.flink.formats.json.canal.CanalJsonDecodingFormat.ReadableMetadata> requestedMetadata, TypeInformation<RowData> producedTypeInfo). Creates a builder for building a CanalJsonDeserializationSchema.
void | CanalJsonDeserializationSchema.deserialize(byte[] message, Collector<RowData> out)
Modifier and Type | Method and Description
---|---
RowData | DebeziumJsonDeserializationSchema.deserialize(byte[] message)

Modifier and Type | Method and Description
---|---
DecodingFormat<DeserializationSchema<RowData>> | DebeziumJsonFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
EncodingFormat<SerializationSchema<RowData>> | DebeziumJsonFormatFactory.createEncodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
DeserializationSchema<RowData> | DebeziumJsonDecodingFormat.createRuntimeDecoder(DynamicTableSource.Context context, DataType physicalDataType, int[][] projections)
TypeInformation<RowData> | DebeziumJsonDeserializationSchema.getProducedType()

Modifier and Type | Method and Description
---|---
boolean | DebeziumJsonDeserializationSchema.isEndOfStream(RowData nextElement)
byte[] | DebeziumJsonSerializationSchema.serialize(RowData rowData)

Modifier and Type | Method and Description
---|---
void | DebeziumJsonDeserializationSchema.deserialize(byte[] message, Collector<RowData> out)

Constructor and Description
---
DebeziumJsonDeserializationSchema(DataType physicalDataType, List<org.apache.flink.formats.json.debezium.DebeziumJsonDecodingFormat.ReadableMetadata> requestedMetadata, TypeInformation<RowData> producedTypeInfo, boolean schemaInclude, boolean ignoreParseErrors, TimestampFormat timestampFormat)
Modifier and Type | Method and Description
---|---
RowData | MaxwellJsonDeserializationSchema.deserialize(byte[] message)

Modifier and Type | Method and Description
---|---
DecodingFormat<DeserializationSchema<RowData>> | MaxwellJsonFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
EncodingFormat<SerializationSchema<RowData>> | MaxwellJsonFormatFactory.createEncodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
DeserializationSchema<RowData> | MaxwellJsonDecodingFormat.createRuntimeDecoder(DynamicTableSource.Context context, DataType physicalDataType, int[][] projections)
TypeInformation<RowData> | MaxwellJsonDeserializationSchema.getProducedType()

Modifier and Type | Method and Description
---|---
boolean | MaxwellJsonDeserializationSchema.isEndOfStream(RowData nextElement)
byte[] | MaxwellJsonSerializationSchema.serialize(RowData element)

Modifier and Type | Method and Description
---|---
void | MaxwellJsonDeserializationSchema.deserialize(byte[] message, Collector<RowData> out)

Constructor and Description
---
MaxwellJsonDeserializationSchema(DataType physicalDataType, List<org.apache.flink.formats.json.maxwell.MaxwellJsonDecodingFormat.ReadableMetadata> requestedMetadata, TypeInformation<RowData> producedTypeInfo, boolean ignoreParseErrors, TimestampFormat timestampFormat)
Modifier and Type | Method and Description
---|---
RowData | OggJsonDeserializationSchema.deserialize(byte[] message)

Modifier and Type | Method and Description
---|---
DecodingFormat<DeserializationSchema<RowData>> | OggJsonFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
EncodingFormat<SerializationSchema<RowData>> | OggJsonFormatFactory.createEncodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
DeserializationSchema<RowData> | OggJsonDecodingFormat.createRuntimeDecoder(DynamicTableSource.Context context, DataType physicalDataType)
TypeInformation<RowData> | OggJsonDeserializationSchema.getProducedType()

Modifier and Type | Method and Description
---|---
boolean | OggJsonDeserializationSchema.isEndOfStream(RowData nextElement)
byte[] | OggJsonSerializationSchema.serialize(RowData rowData)

Modifier and Type | Method and Description
---|---
void | OggJsonDeserializationSchema.deserialize(byte[] message, Collector<RowData> out)

Constructor and Description
---
OggJsonDeserializationSchema(DataType physicalDataType, List<org.apache.flink.formats.json.ogg.OggJsonDecodingFormat.ReadableMetadata> requestedMetadata, TypeInformation<RowData> producedTypeInfo, boolean ignoreParseErrors, TimestampFormat timestampFormat)
Modifier and Type | Method and Description
---|---
BulkDecodingFormat<RowData> | ParquetFileFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
EncodingFormat<BulkWriter.Factory<RowData>> | ParquetFileFormatFactory.createEncodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
protected ParquetVectorizedInputFormat.ParquetReaderBatch<RowData> | ParquetColumnarRowInputFormat.createReaderBatch(WritableColumnVector[] writableVectors, VectorizedColumnBatch columnarBatch, Pool.Recycler<ParquetVectorizedInputFormat.ParquetReaderBatch<RowData>> recycler)
BulkFormat<RowData,FileSourceSplit> | ParquetFileFormatFactory.ParquetBulkDecodingFormat.createRuntimeDecoder(DynamicTableSource.Context sourceContext, DataType producedDataType, int[][] projections)
TypeInformation<RowData> | ParquetColumnarRowInputFormat.getProducedType()

Modifier and Type | Method and Description
---|---
static <SplitT extends FileSourceSplit> | ParquetColumnarRowInputFormat.createPartitionedFormat(Configuration hadoopConfig, RowType producedRowType, TypeInformation<RowData> producedTypeInfo, List<String> partitionKeys, PartitionFieldExtractor<SplitT> extractor, int batchSize, boolean isUtcTimestamp, boolean isCaseSensitive). Creates a partitioned ParquetColumnarRowInputFormat; the partition columns can be generated from the Path.
protected ParquetVectorizedInputFormat.ParquetReaderBatch<RowData> | ParquetColumnarRowInputFormat.createReaderBatch(WritableColumnVector[] writableVectors, VectorizedColumnBatch columnarBatch, Pool.Recycler<ParquetVectorizedInputFormat.ParquetReaderBatch<RowData>> recycler)

Constructor and Description
---
ParquetColumnarRowInputFormat(Configuration hadoopConfig, RowType projectedType, TypeInformation<RowData> producedTypeInfo, int batchSize, boolean isUtcTimestamp, boolean isCaseSensitive). Creates a Parquet format without extra fields.

Modifier and Type | Method and Description
---|---
org.apache.parquet.hadoop.ParquetWriter<RowData> | ParquetRowDataBuilder.FlinkParquetBuilder.createWriter(org.apache.parquet.io.OutputFile out)
static ParquetWriterFactory<RowData> | ParquetRowDataBuilder.createWriterFactory(RowType rowType, Configuration conf, boolean utcTimestamp). Creates a Parquet BulkWriter.Factory.
protected org.apache.parquet.hadoop.api.WriteSupport<RowData> | ParquetRowDataBuilder.getWriteSupport(Configuration conf)
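The createWriterFactory entry above is typically handed to a file sink. A hedged sketch follows; the FileSink wiring, rowType, hadoopConfiguration, and outputPath are assumptions about common usage, not part of this listing, and the fragment is not compilable without Flink and Hadoop on the classpath.

```java
ParquetWriterFactory<RowData> factory =
    ParquetRowDataBuilder.createWriterFactory(
        rowType,             // schema of the written rows
        hadoopConfiguration, // Hadoop Configuration used by Parquet
        true);               // write timestamps as UTC

// Hand the bulk-writer factory to a streaming file sink.
FileSink<RowData> sink =
    FileSink.forBulkFormat(outputPath, factory).build();
```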
Modifier and Type | Method and Description
---|---
void | ParquetRowDataWriter.write(RowData record). Writes a record to Parquet.

Modifier and Type | Method and Description
---|---
DecodingFormat<DeserializationSchema<RowData>> | PbFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
EncodingFormat<SerializationSchema<RowData>> | PbFormatFactory.createEncodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
DeserializationSchema<RowData> | PbDecodingFormat.createRuntimeDecoder(DynamicTableSource.Context context, DataType producedDataType)
SerializationSchema<RowData> | PbEncodingFormat.createRuntimeEncoder(DynamicTableSink.Context context, DataType consumedDataType)

Modifier and Type | Method and Description
---|---
RowData | ProtoToRowConverter.convertProtoBinaryToRow(byte[] data)
RowData | PbRowDataDeserializationSchema.deserialize(byte[] message)

Modifier and Type | Method and Description
---|---
TypeInformation<RowData> | PbRowDataDeserializationSchema.getProducedType()

Modifier and Type | Method and Description
---|---
boolean | PbRowDataDeserializationSchema.isEndOfStream(RowData nextElement)

Constructor and Description
---
PbRowDataDeserializationSchema(RowType rowType, TypeInformation<RowData> resultTypeInfo, PbFormatConfig formatConfig)

Modifier and Type | Method and Description
---|---
byte[] | RowToProtoConverter.convertRowToProtoBinary(RowData rowData)
byte[] | PbRowDataSerializationSchema.serialize(RowData element)

Modifier and Type | Method and Description
---|---
RowData | RawFormatDeserializationSchema.deserialize(byte[] message)

Modifier and Type | Method and Description
---|---
DecodingFormat<DeserializationSchema<RowData>> | RawFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
EncodingFormat<SerializationSchema<RowData>> | RawFormatFactory.createEncodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
TypeInformation<RowData> | RawFormatDeserializationSchema.getProducedType()

Modifier and Type | Method and Description
---|---
boolean | RawFormatDeserializationSchema.isEndOfStream(RowData nextElement)
byte[] | RawFormatSerializationSchema.serialize(RowData row)

Constructor and Description
---
RawFormatDeserializationSchema(LogicalType deserializedType, TypeInformation<RowData> producedTypeInfo, String charsetName, boolean isBigEndian)
Modifier and Type | Method and Description
---|---
RowData | OrcColumnarRowSplitReader.nextRecord(RowData reuse)

Modifier and Type | Method and Description
---|---
BulkDecodingFormat<RowData> | OrcFileFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
EncodingFormat<BulkWriter.Factory<RowData>> | OrcFileFormatFactory.createEncodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
AbstractOrcFileInputFormat.OrcReaderBatch<RowData,BatchT> | OrcColumnarRowInputFormat.createReaderBatch(SplitT split, OrcVectorizedBatchWrapper<BatchT> orcBatch, Pool.Recycler<AbstractOrcFileInputFormat.OrcReaderBatch<RowData,BatchT>> recycler, int batchSize)
BulkFormat<RowData,FileSourceSplit> | OrcFileFormatFactory.OrcBulkDecodingFormat.createRuntimeDecoder(DynamicTableSource.Context sourceContext, DataType producedDataType, int[][] projections)
TypeInformation<RowData> | OrcColumnarRowInputFormat.getProducedType()

Modifier and Type | Method and Description
---|---
RowData | OrcColumnarRowSplitReader.nextRecord(RowData reuse)

Modifier and Type | Method and Description
---|---
static <SplitT extends FileSourceSplit> | OrcColumnarRowInputFormat.createPartitionedFormat(OrcShim<org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch> shim, Configuration hadoopConfig, RowType tableType, List<String> partitionKeys, PartitionFieldExtractor<SplitT> extractor, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, int batchSize, java.util.function.Function<RowType,TypeInformation<RowData>> rowTypeInfoFactory). Creates a partitioned OrcColumnarRowInputFormat; the partition columns can be generated from the split.
AbstractOrcFileInputFormat.OrcReaderBatch<RowData,BatchT> | OrcColumnarRowInputFormat.createReaderBatch(SplitT split, OrcVectorizedBatchWrapper<BatchT> orcBatch, Pool.Recycler<AbstractOrcFileInputFormat.OrcReaderBatch<RowData,BatchT>> recycler, int batchSize)

Constructor and Description
---
OrcColumnarRowInputFormat(OrcShim<BatchT> shim, Configuration hadoopConfig, org.apache.orc.TypeDescription schema, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, int batchSize, ColumnBatchFactory<BatchT,SplitT> batchFactory, TypeInformation<RowData> producedTypeInfo)
Modifier and Type | Method and Description |
---|---|
BulkWriter<RowData> |
OrcNoHiveBulkWriterFactory.create(FSDataOutputStream out) |
Modifier and Type | Method and Description |
---|---|
static <SplitT extends FileSourceSplit> |
OrcNoHiveColumnarRowInputFormat.createPartitionedFormat(Configuration hadoopConfig,
RowType tableType,
List<String> partitionKeys,
PartitionFieldExtractor<SplitT> extractor,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize,
java.util.function.Function<RowType,TypeInformation<RowData>> rowTypeInfoFactory)
Creates a partitioned
OrcColumnarRowInputFormat ; the partition columns can be
generated from the split. |
Modifier and Type | Method and Description |
---|---|
void |
RowDataVectorizer.vectorize(RowData row,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
Modifier and Type | Method and Description |
---|---|
void |
PythonConnectorUtils.RowRowMapper.processElement(Row row,
ProcessFunction.Context ctx,
Collector<RowData> out) |
Modifier and Type | Method and Description |
---|---|
RowData |
PythonTypeUtils.RowDataDataConverter.toInternal(Object[] value) |
Modifier and Type | Method and Description |
---|---|
Object[] |
PythonTypeUtils.RowDataDataConverter.toExternal(RowData value) |
Modifier and Type | Field and Description |
---|---|
protected DecodingFormat<DeserializationSchema<RowData>> |
KafkaDynamicSource.keyDecodingFormat
Optional format for decoding keys from Kafka.
|
protected EncodingFormat<SerializationSchema<RowData>> |
KafkaDynamicSink.keyEncodingFormat
Optional format for encoding keys to Kafka.
|
protected FlinkKafkaPartitioner<RowData> |
KafkaDynamicSink.partitioner
Partitioner to select Kafka partition for each item.
|
protected DecodingFormat<DeserializationSchema<RowData>> |
KafkaDynamicSource.valueDecodingFormat
Format for decoding values from Kafka.
|
protected EncodingFormat<SerializationSchema<RowData>> |
KafkaDynamicSink.valueEncodingFormat
Format for encoding values to Kafka.
|
protected WatermarkStrategy<RowData> |
KafkaDynamicSource.watermarkStrategy
Watermark strategy that is used to generate per-partition watermarks.
|
Modifier and Type | Method and Description |
---|---|
protected KafkaSource<RowData> |
KafkaDynamicSource.createKafkaSource(DeserializationSchema<RowData> keyDeserialization,
DeserializationSchema<RowData> valueDeserialization,
TypeInformation<RowData> producedTypeInfo) |
DeserializationSchema<RowData> |
UpsertKafkaDynamicTableFactory.DecodingFormatWrapper.createRuntimeDecoder(DynamicTableSource.Context context,
DataType producedDataType) |
SerializationSchema<RowData> |
UpsertKafkaDynamicTableFactory.EncodingFormatWrapper.createRuntimeEncoder(DynamicTableSink.Context context,
DataType consumedDataType) |
Modifier and Type | Method and Description |
---|---|
void |
KafkaDynamicSource.applyWatermark(WatermarkStrategy<RowData> watermarkStrategy) |
protected KafkaSource<RowData> |
KafkaDynamicSource.createKafkaSource(DeserializationSchema<RowData> keyDeserialization,
DeserializationSchema<RowData> valueDeserialization,
TypeInformation<RowData> producedTypeInfo) |
protected KafkaSource<RowData> |
KafkaDynamicSource.createKafkaSource(DeserializationSchema<RowData> keyDeserialization,
DeserializationSchema<RowData> valueDeserialization,
TypeInformation<RowData> producedTypeInfo) |
protected KafkaSource<RowData> |
KafkaDynamicSource.createKafkaSource(DeserializationSchema<RowData> keyDeserialization,
DeserializationSchema<RowData> valueDeserialization,
TypeInformation<RowData> producedTypeInfo) |
protected KafkaDynamicSink |
KafkaDynamicTableFactory.createKafkaTableSink(DataType physicalDataType,
EncodingFormat<SerializationSchema<RowData>> keyEncodingFormat,
EncodingFormat<SerializationSchema<RowData>> valueEncodingFormat,
int[] keyProjection,
int[] valueProjection,
String keyPrefix,
String topic,
Properties properties,
FlinkKafkaPartitioner<RowData> partitioner,
DeliveryGuarantee deliveryGuarantee,
Integer parallelism,
String transactionalIdPrefix) |
protected KafkaDynamicSink |
KafkaDynamicTableFactory.createKafkaTableSink(DataType physicalDataType,
EncodingFormat<SerializationSchema<RowData>> keyEncodingFormat,
EncodingFormat<SerializationSchema<RowData>> valueEncodingFormat,
int[] keyProjection,
int[] valueProjection,
String keyPrefix,
String topic,
Properties properties,
FlinkKafkaPartitioner<RowData> partitioner,
DeliveryGuarantee deliveryGuarantee,
Integer parallelism,
String transactionalIdPrefix) |
protected KafkaDynamicSink |
KafkaDynamicTableFactory.createKafkaTableSink(DataType physicalDataType,
EncodingFormat<SerializationSchema<RowData>> keyEncodingFormat,
EncodingFormat<SerializationSchema<RowData>> valueEncodingFormat,
int[] keyProjection,
int[] valueProjection,
String keyPrefix,
String topic,
Properties properties,
FlinkKafkaPartitioner<RowData> partitioner,
DeliveryGuarantee deliveryGuarantee,
Integer parallelism,
String transactionalIdPrefix) |
protected KafkaDynamicSource |
KafkaDynamicTableFactory.createKafkaTableSource(DataType physicalDataType,
DecodingFormat<DeserializationSchema<RowData>> keyDecodingFormat,
DecodingFormat<DeserializationSchema<RowData>> valueDecodingFormat,
int[] keyProjection,
int[] valueProjection,
String keyPrefix,
List<String> topics,
Pattern topicPattern,
Properties properties,
StartupMode startupMode,
Map<KafkaTopicPartition,Long> specificStartupOffsets,
long startupTimestampMillis,
String tableIdentifier) |
protected KafkaDynamicSource |
KafkaDynamicTableFactory.createKafkaTableSource(DataType physicalDataType,
DecodingFormat<DeserializationSchema<RowData>> keyDecodingFormat,
DecodingFormat<DeserializationSchema<RowData>> valueDecodingFormat,
int[] keyProjection,
int[] valueProjection,
String keyPrefix,
List<String> topics,
Pattern topicPattern,
Properties properties,
StartupMode startupMode,
Map<KafkaTopicPartition,Long> specificStartupOffsets,
long startupTimestampMillis,
String tableIdentifier) |
Constructor and Description |
---|
DecodingFormatWrapper(DecodingFormat<DeserializationSchema<RowData>> innerDecodingFormat) |
EncodingFormatWrapper(EncodingFormat<SerializationSchema<RowData>> innerEncodingFormat) |
KafkaDynamicSink(DataType consumedDataType,
DataType physicalDataType,
EncodingFormat<SerializationSchema<RowData>> keyEncodingFormat,
EncodingFormat<SerializationSchema<RowData>> valueEncodingFormat,
int[] keyProjection,
int[] valueProjection,
String keyPrefix,
String topic,
Properties properties,
FlinkKafkaPartitioner<RowData> partitioner,
DeliveryGuarantee deliveryGuarantee,
boolean upsertMode,
SinkBufferFlushMode flushMode,
Integer parallelism,
String transactionalIdPrefix) |
KafkaDynamicSink(DataType consumedDataType,
DataType physicalDataType,
EncodingFormat<SerializationSchema<RowData>> keyEncodingFormat,
EncodingFormat<SerializationSchema<RowData>> valueEncodingFormat,
int[] keyProjection,
int[] valueProjection,
String keyPrefix,
String topic,
Properties properties,
FlinkKafkaPartitioner<RowData> partitioner,
DeliveryGuarantee deliveryGuarantee,
boolean upsertMode,
SinkBufferFlushMode flushMode,
Integer parallelism,
String transactionalIdPrefix) |
KafkaDynamicSink(DataType consumedDataType,
DataType physicalDataType,
EncodingFormat<SerializationSchema<RowData>> keyEncodingFormat,
EncodingFormat<SerializationSchema<RowData>> valueEncodingFormat,
int[] keyProjection,
int[] valueProjection,
String keyPrefix,
String topic,
Properties properties,
FlinkKafkaPartitioner<RowData> partitioner,
DeliveryGuarantee deliveryGuarantee,
boolean upsertMode,
SinkBufferFlushMode flushMode,
Integer parallelism,
String transactionalIdPrefix) |
KafkaDynamicSource(DataType physicalDataType,
DecodingFormat<DeserializationSchema<RowData>> keyDecodingFormat,
DecodingFormat<DeserializationSchema<RowData>> valueDecodingFormat,
int[] keyProjection,
int[] valueProjection,
String keyPrefix,
List<String> topics,
Pattern topicPattern,
Properties properties,
StartupMode startupMode,
Map<KafkaTopicPartition,Long> specificStartupOffsets,
long startupTimestampMillis,
boolean upsertMode,
String tableIdentifier) |
KafkaDynamicSource(DataType physicalDataType,
DecodingFormat<DeserializationSchema<RowData>> keyDecodingFormat,
DecodingFormat<DeserializationSchema<RowData>> valueDecodingFormat,
int[] keyProjection,
int[] valueProjection,
String keyPrefix,
List<String> topics,
Pattern topicPattern,
Properties properties,
StartupMode startupMode,
Map<KafkaTopicPartition,Long> specificStartupOffsets,
long startupTimestampMillis,
boolean upsertMode,
String tableIdentifier) |
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataKinesisDeserializationSchema.deserialize(byte[] recordValue,
String partitionKey,
String seqNum,
long approxArrivalTimestamp,
String stream,
String shardId) |
Modifier and Type | Method and Description |
---|---|
TypeInformation<RowData> |
RowDataKinesisDeserializationSchema.getProducedType() |
Constructor and Description |
---|
KinesisDynamicSource(DataType physicalDataType,
String stream,
Properties consumerProperties,
DecodingFormat<DeserializationSchema<RowData>> decodingFormat) |
KinesisDynamicSource(DataType physicalDataType,
String stream,
Properties consumerProperties,
DecodingFormat<DeserializationSchema<RowData>> decodingFormat,
DataType producedDataType,
List<RowDataKinesisDeserializationSchema.Metadata> requestedMetadataFields) |
RowDataKinesisDeserializationSchema(DeserializationSchema<RowData> physicalDeserializer,
TypeInformation<RowData> producedTypeInfo,
List<RowDataKinesisDeserializationSchema.Metadata> requestedMetadataFields) |
RowDataKinesisDeserializationSchema(DeserializationSchema<RowData> physicalDeserializer,
TypeInformation<RowData> producedTypeInfo,
List<RowDataKinesisDeserializationSchema.Metadata> requestedMetadataFields) |
Modifier and Type | Method and Description |
---|---|
CloseableIterator<RowData> |
TableResultInternal.collectInternal()
Returns an iterator over the result rows using the internal row data type.
|
CloseableIterator<RowData> |
TableResultImpl.collectInternal() |
CloseableIterator<RowData> |
ResultProvider.toInternalIterator()
Returns the select result as a row iterator using internal data types.
|
CloseableIterator<RowData> |
StaticResultProvider.toInternalIterator() |
Constructor and Description |
---|
StaticResultProvider(List<Row> rows,
java.util.function.Function<Row,RowData> externalToInternalConverter) |
Modifier and Type | Method and Description |
---|---|
BulkWriter.Factory<RowData> |
HiveShimV200.createOrcBulkWriterFactory(Configuration conf,
String schema,
LogicalType[] fieldTypes) |
BulkWriter.Factory<RowData> |
HiveShimV100.createOrcBulkWriterFactory(Configuration conf,
String schema,
LogicalType[] fieldTypes) |
BulkWriter.Factory<RowData> |
HiveShim.createOrcBulkWriterFactory(Configuration conf,
String schema,
LogicalType[] fieldTypes)
Creates an ORC
BulkWriter.Factory for different Hive versions. |
Modifier and Type | Method and Description |
---|---|
TypedResult<List<RowData>> |
Executor.retrieveResultChanges(String sessionId,
String resultId)
Asks for the next changelog results (non-blocking).
|
List<RowData> |
Executor.retrieveResultPage(String resultId,
int page)
Returns the rows that are part of the current page or throws an exception if the snapshot
has expired.
|
Modifier and Type | Method and Description |
---|---|
TypedResult<List<RowData>> |
LocalExecutor.retrieveResultChanges(String sessionId,
String resultId) |
List<RowData> |
LocalExecutor.retrieveResultPage(String resultId,
int page) |
Modifier and Type | Field and Description |
---|---|
protected List<RowData> |
MaterializedCollectResultBase.materializedTable
Materialized table that is continuously updated by inserts and deletes.
|
Modifier and Type | Method and Description |
---|---|
protected List<RowData> |
MaterializedCollectResultBase.getMaterializedTable() |
TypedResult<List<RowData>> |
ChangelogResult.retrieveChanges()
Retrieves the available result records.
|
TypedResult<List<RowData>> |
ChangelogCollectResult.retrieveChanges() |
List<RowData> |
MaterializedCollectResultBase.retrievePage(int page) |
List<RowData> |
MaterializedResult.retrievePage(int page)
Retrieves a page of a snapshotted result.
|
Modifier and Type | Method and Description |
---|---|
protected void |
MaterializedCollectBatchResult.processRecord(RowData row) |
protected void |
MaterializedCollectStreamResult.processRecord(RowData row) |
protected abstract void |
CollectResultBase.processRecord(RowData row) |
protected void |
ChangelogCollectResult.processRecord(RowData row) |
Modifier and Type | Method and Description |
---|---|
OutputFormat<RowData> |
OutputFormatProvider.createOutputFormat()
Creates an
OutputFormat instance. |
Sink<RowData,?,?,?> |
SinkProvider.createSink()
Deprecated.
Creates a
Sink instance. |
Sink<RowData> |
SinkV2Provider.createSink() |
SinkFunction<RowData> |
SinkFunctionProvider.createSinkFunction()
Creates a
SinkFunction instance. |
Modifier and Type | Method and Description |
---|---|
default DataStreamSink<?> |
DataStreamSinkProvider.consumeDataStream(DataStream<RowData> dataStream)
Deprecated.
Use
DataStreamSinkProvider.consumeDataStream(ProviderContext, DataStream)
and correctly set a unique identifier for each data stream transformation. |
default DataStreamSink<?> |
DataStreamSinkProvider.consumeDataStream(ProviderContext providerContext,
DataStream<RowData> dataStream)
Consumes the given Java
DataStream and returns the sink transformation DataStreamSink . |
static OutputFormatProvider |
OutputFormatProvider.of(OutputFormat<RowData> outputFormat)
Helper method for creating a static provider.
|
static OutputFormatProvider |
OutputFormatProvider.of(OutputFormat<RowData> outputFormat,
Integer sinkParallelism)
Helper method for creating a static provider with a provided sink parallelism.
|
static SinkProvider |
SinkProvider.of(Sink<RowData,?,?,?> sink)
Deprecated.
Helper method for creating a static provider.
|
static SinkProvider |
SinkProvider.of(Sink<RowData,?,?,?> sink,
Integer sinkParallelism)
Deprecated.
Helper method for creating a Sink provider with a provided sink parallelism.
|
static SinkV2Provider |
SinkV2Provider.of(Sink<RowData> sink)
Helper method for creating a static provider.
|
static SinkV2Provider |
SinkV2Provider.of(Sink<RowData> sink,
Integer sinkParallelism)
Helper method for creating a Sink provider with a provided sink parallelism.
|
static SinkFunctionProvider |
SinkFunctionProvider.of(SinkFunction<RowData> sinkFunction)
Helper method for creating a static provider.
|
static SinkFunctionProvider |
SinkFunctionProvider.of(SinkFunction<RowData> sinkFunction,
Integer sinkParallelism)
Helper method for creating a SinkFunction provider with a provided sink parallelism.
|
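The static `of` helpers above wrap runtime objects for use as a `DynamicTableSink.SinkRuntimeProvider`. A minimal sketch, assuming Flink's table and streaming modules are on the classpath (the `PrintSinkFunction` is just a stand-in for a real connector sink):

```java
import org.apache.flink.streaming.api.functions.sink.PrintSinkFunction;
import org.apache.flink.table.connector.sink.SinkFunctionProvider;
import org.apache.flink.table.data.RowData;

public class SinkProviderSketch {
    public static void main(String[] args) {
        // Wrap a SinkFunction<RowData> as a runtime provider and pin the
        // sink parallelism to 2 via the two-argument overload.
        SinkFunctionProvider provider =
                SinkFunctionProvider.of(new PrintSinkFunction<RowData>(), 2);
        // getParallelism() comes from ParallelismProvider and reflects
        // the value passed to of(..., sinkParallelism).
        System.out.println(provider.getParallelism().get());
    }
}
```

A `DynamicTableSink` would return such a provider from `getSinkRuntimeProvider`.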
Modifier and Type | Method and Description |
---|---|
InputFormat<RowData,?> |
InputFormatProvider.createInputFormat()
Creates an
InputFormat instance. |
Source<RowData,?,?> |
SourceProvider.createSource()
Creates a
Source instance. |
SourceFunction<RowData> |
SourceFunctionProvider.createSourceFunction()
Creates a
SourceFunction instance. |
Collection<RowData> |
DynamicFilteringData.getData() |
default DataStream<RowData> |
DataStreamScanProvider.produceDataStream(ProviderContext providerContext,
StreamExecutionEnvironment execEnv)
Creates a scan Java
DataStream from a StreamExecutionEnvironment . |
default DataStream<RowData> |
DataStreamScanProvider.produceDataStream(StreamExecutionEnvironment execEnv)
Deprecated.
|
Modifier and Type | Method and Description |
---|---|
boolean |
DynamicFilteringData.contains(RowData row)
Returns true if the dynamic filtering data contains the specified row.
|
Modifier and Type | Method and Description |
---|---|
static InputFormatProvider |
InputFormatProvider.of(InputFormat<RowData,?> inputFormat)
Helper method for creating a static provider.
|
static SourceProvider |
SourceProvider.of(Source<RowData,?,?> source)
Helper method for creating a static provider.
|
static SourceFunctionProvider |
SourceFunctionProvider.of(SourceFunction<RowData> sourceFunction,
boolean isBounded)
Helper method for creating a static provider.
|
Constructor and Description |
---|
DynamicFilteringData(TypeInformation<RowData> typeInfo,
RowType rowType,
List<byte[]> serializedData,
boolean isFiltering) |
Modifier and Type | Method and Description |
---|---|
void |
SupportsWatermarkPushDown.applyWatermark(WatermarkStrategy<RowData> watermarkStrategy)
Provides a
WatermarkStrategy which defines how to generate Watermark s in the
stream source. |
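A sketch of the kind of `WatermarkStrategy<RowData>` that the planner can push into a scan source through `applyWatermark`. The event-time column position and precision (position 2, `TIMESTAMP(3)`) are hypothetical; a real strategy is derived from the table's watermark definition:

```java
import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.table.data.RowData;

public class WatermarkPushDownSketch {
    public static void main(String[] args) {
        // Bounded-out-of-orderness strategy over internal rows; the timestamp
        // assignment reads the (assumed) TIMESTAMP(3) column at position 2.
        WatermarkStrategy<RowData> strategy =
                WatermarkStrategy.<RowData>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                        .withTimestampAssigner(
                                (row, previousTimestamp) ->
                                        row.getTimestamp(2, 3).getMillisecond());
        // A source implementing SupportsWatermarkPushDown receives this
        // strategy via applyWatermark(strategy); Kafka applies it per partition.
        System.out.println(strategy != null);
    }
}
```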
Modifier and Type | Method and Description |
---|---|
Collection<RowData> |
DefaultLookupCache.getIfPresent(RowData key) |
Collection<RowData> |
LookupCache.getIfPresent(RowData key)
Returns the value associated with key in this cache, or null if there is no cached value for
key.
|
Collection<RowData> |
DefaultLookupCache.put(RowData key,
Collection<RowData> value) |
Collection<RowData> |
LookupCache.put(RowData key,
Collection<RowData> value)
Associates the specified value rows with the specified key row in the cache.
|
Modifier and Type | Method and Description |
---|---|
Collection<RowData> |
DefaultLookupCache.getIfPresent(RowData key) |
Collection<RowData> |
LookupCache.getIfPresent(RowData key)
Returns the value associated with key in this cache, or null if there is no cached value for
key.
|
void |
DefaultLookupCache.invalidate(RowData key) |
void |
LookupCache.invalidate(RowData key)
Discards any cached value for the specified key.
|
Collection<RowData> |
DefaultLookupCache.put(RowData key,
Collection<RowData> value) |
Collection<RowData> |
LookupCache.put(RowData key,
Collection<RowData> value)
Associates the specified value rows with the specified key row in the cache.
|
Modifier and Type | Method and Description |
---|---|
Collection<RowData> |
DefaultLookupCache.put(RowData key,
Collection<RowData> value) |
Collection<RowData> |
LookupCache.put(RowData key,
Collection<RowData> value)
Associates the specified value rows with the specified key row in the cache.
|
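A minimal sketch of configuring the `DefaultLookupCache` whose `getIfPresent`/`put`/`invalidate` methods are listed above. The eviction settings are illustrative; the runtime calls `open()` on the cache before it is used:

```java
import java.time.Duration;

import org.apache.flink.table.connector.source.lookup.cache.DefaultLookupCache;
import org.apache.flink.table.connector.source.lookup.cache.LookupCache;

public class LookupCacheSketch {
    public static void main(String[] args) {
        // Cache key rows -> value row collections, evicting entries
        // 10 minutes after write and capping the size at 1000 keys.
        LookupCache cache =
                DefaultLookupCache.newBuilder()
                        .expireAfterWrite(Duration.ofMinutes(10))
                        .maximumSize(1000)
                        .build();
        System.out.println(cache != null);
    }
}
```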
Modifier and Type | Class and Description |
---|---|
class |
BoxedWrapperRowData
An implementation of
RowData which is also backed by an array of Java Object , similar to GenericRowData . |
class |
GenericRowData
An internal data structure representing data of
RowType and other (possibly nested)
structured types such as StructuredType . |
class |
UpdatableRowData
|
Modifier and Type | Method and Description |
---|---|
RowData |
UpdatableRowData.getRow() |
RowData |
UpdatableRowData.getRow(int pos,
int numFields) |
RowData |
BoxedWrapperRowData.getRow(int pos,
int numFields) |
RowData |
ArrayData.getRow(int pos,
int numFields)
Returns the row value at the given position.
|
RowData |
RowData.getRow(int pos,
int numFields)
Returns the row value at the given position.
|
RowData |
GenericArrayData.getRow(int pos,
int numFields) |
RowData |
GenericRowData.getRow(int pos,
int numFields) |
Modifier and Type | Method and Description |
---|---|
Object |
RowData.FieldGetter.getFieldOrNull(RowData row) |
Constructor and Description |
---|
UpdatableRowData(RowData row,
int arity) |
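The `RowData.FieldGetter` listed above gives null-safe, position-based field access driven by the logical type. A small sketch with a hypothetical two-column row (`STRING`, `INT`):

```java
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.StringData;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.VarCharType;

public class RowDataAccessSketch {
    public static void main(String[] args) {
        // Internal string values use StringData, not java.lang.String.
        RowData row = GenericRowData.of(StringData.fromString("Alice"), 42);
        // FieldGetter encapsulates the type-specific getXxx call and null check.
        RowData.FieldGetter nameGetter =
                RowData.createFieldGetter(new VarCharType(VarCharType.MAX_LENGTH), 0);
        RowData.FieldGetter ageGetter = RowData.createFieldGetter(new IntType(), 1);
        System.out.println(nameGetter.getFieldOrNull(row)); // StringData for "Alice"
        System.out.println(ageGetter.getFieldOrNull(row));  // 42
    }
}
```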
Modifier and Type | Class and Description |
---|---|
class |
BinaryRowData
An implementation of
RowData which is backed by MemorySegment instead of Object. |
class |
NestedRowData
Its memory storage structure is exactly the same as
BinaryRowData . |
Modifier and Type | Method and Description |
---|---|
RowData |
BinaryArrayData.getRow(int pos,
int numFields) |
RowData |
NestedRowData.getRow(int pos,
int numFields) |
RowData |
BinaryRowData.getRow(int pos,
int numFields) |
static RowData |
BinarySegmentUtils.readRowData(MemorySegment[] segments,
int numFields,
int baseOffset,
long offsetAndSize)
Gets an instance of
RowData from the underlying MemorySegment array. |
Modifier and Type | Method and Description |
---|---|
NestedRowData |
NestedRowData.copy(RowData reuse) |
Modifier and Type | Class and Description |
---|---|
class |
ColumnarRowData
Columnar row to support access to vector column data.
|
Modifier and Type | Method and Description |
---|---|
RowData |
ColumnarRowData.getRow(int pos,
int numFields) |
RowData |
ColumnarArrayData.getRow(int pos,
int numFields) |
Modifier and Type | Method and Description |
---|---|
RowData |
VectorizedColumnBatch.getRow(int rowId,
int colId) |
Modifier and Type | Method and Description |
---|---|
RowData |
RowRowConverter.toInternal(Row external) |
RowData |
StructuredObjectConverter.toInternal(T external) |
Modifier and Type | Method and Description |
---|---|
T |
StructuredObjectConverter.toExternal(RowData internal) |
Row |
RowRowConverter.toExternal(RowData internal) |
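`RowRowConverter` bridges the external `Row` and internal `RowData` representations. A minimal sketch, assuming a hypothetical `ROW<name STRING, age INT>` schema; the converter must be opened before use:

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.conversion.RowRowConverter;
import org.apache.flink.types.Row;

public class RowConversionSketch {
    public static void main(String[] args) {
        RowRowConverter converter =
                RowRowConverter.create(
                        DataTypes.ROW(
                                DataTypes.FIELD("name", DataTypes.STRING()),
                                DataTypes.FIELD("age", DataTypes.INT())));
        converter.open(RowConversionSketch.class.getClassLoader());
        // External Row -> internal RowData and back again.
        RowData internal = converter.toInternal(Row.of("Alice", 42));
        Row external = converter.toExternal(internal);
        System.out.println(external); // e.g. +I[Alice, 42]
    }
}
```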
Modifier and Type | Method and Description |
---|---|
static boolean |
RowDataUtil.isAccumulateMsg(RowData row)
Returns true if the message is either
RowKind.INSERT or RowKind.UPDATE_AFTER ,
which refers to an accumulate operation of aggregation. |
static boolean |
RowDataUtil.isRetractMsg(RowData row)
Returns true if the message is either
RowKind.DELETE or RowKind.UPDATE_BEFORE , which refers to a retract operation of aggregation. |
External |
DataFormatConverters.DataFormatConverter.toExternal(RowData row,
int column)
Given an internal-format row, converts the value at column `column` to its external (Java)
equivalent.
|
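The `RowDataUtil` predicates above classify changelog messages by their `RowKind`. A short sketch:

```java
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.util.RowDataUtil;
import org.apache.flink.types.RowKind;

public class ChangelogKindSketch {
    public static void main(String[] args) {
        // INSERT and UPDATE_AFTER accumulate; DELETE and UPDATE_BEFORE retract.
        RowData insert = GenericRowData.ofKind(RowKind.INSERT, 1L);
        RowData retract = GenericRowData.ofKind(RowKind.UPDATE_BEFORE, 1L);
        System.out.println(RowDataUtil.isAccumulateMsg(insert));  // true
        System.out.println(RowDataUtil.isRetractMsg(retract));    // true
    }
}
```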
Modifier and Type | Class and Description |
---|---|
class |
JoinedRowData
|
class |
ProjectedRowData
|
Modifier and Type | Method and Description |
---|---|
RowData |
JoinedRowData.getRow(int pos,
int numFields) |
RowData |
ProjectedRowData.getRow(int pos,
int numFields) |
Modifier and Type | Method and Description |
---|---|
JoinedRowData |
JoinedRowData.replace(RowData row1,
RowData row2)
Replaces the
RowData backing this JoinedRowData . |
ProjectedRowData |
ProjectedRowData.replaceRow(RowData row)
Replaces the underlying
RowData backing this ProjectedRowData . |
Constructor and Description |
---|
JoinedRowData(RowData row1,
RowData row2)
Creates a new
JoinedRowData of kind RowKind.INSERT backed by row1
and row2 . |
JoinedRowData(RowKind rowKind,
RowData row1,
RowData row2)
Creates a new
JoinedRowData of the given rowKind , backed by row1
and row2 . |
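`JoinedRowData` presents two backing rows as one logical row without copying fields; `replace` lets operators reuse the wrapper across records. A small sketch:

```java
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.utils.JoinedRowData;

public class JoinedRowSketch {
    public static void main(String[] args) {
        RowData left = GenericRowData.of(1, 2);
        RowData right = GenericRowData.of(3);
        // The joined view exposes left's fields first, then right's.
        JoinedRowData joined = new JoinedRowData(left, right);
        System.out.println(joined.getArity()); // 3
        System.out.println(joined.getInt(2));  // 3
        // replace() swaps in the next pair of rows without allocating.
        joined.replace(GenericRowData.of(7, 8), GenericRowData.of(9));
        System.out.println(joined.getInt(0));  // 7
    }
}
```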
Modifier and Type | Method and Description |
---|---|
void |
BinaryWriter.writeRow(int pos,
RowData value,
RowDataSerializer serializer) |
Modifier and Type | Method and Description |
---|---|
static org.apache.hive.service.rpc.thrift.TRowSet |
ThriftObjectConversions.toColumnBasedSet(List<LogicalType> fieldTypes,
List<RowData.FieldGetter> fieldGetters,
List<RowData> rows) |
static org.apache.hive.service.rpc.thrift.TRowSet |
ThriftObjectConversions.toRowBasedSet(List<LogicalType> fieldTypes,
List<RowData.FieldGetter> fieldGetters,
List<RowData> rows) |
static org.apache.hive.service.rpc.thrift.TRowSet |
ThriftObjectConversions.toTRowSet(org.apache.hive.service.rpc.thrift.TProtocolVersion version,
ResolvedSchema schema,
List<RowData> data)
Similar to
SerDeUtils.toThriftPayload(Object, ObjectInspector, int) , which converts the
returned rows to a JSON string. |
Modifier and Type | Method and Description |
---|---|
RowData |
ChangelogCsvDeserializer.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
ChangelogCsvFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
DeserializationSchema<RowData> |
ChangelogCsvFormat.createRuntimeDecoder(DynamicTableSource.Context context,
DataType producedDataType) |
TypeInformation<RowData> |
ChangelogCsvDeserializer.getProducedType() |
TypeInformation<RowData> |
SocketSourceFunction.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean |
ChangelogCsvDeserializer.isEndOfStream(RowData nextElement) |
Modifier and Type | Method and Description |
---|---|
void |
SocketSourceFunction.run(SourceFunction.SourceContext<RowData> ctx) |
Constructor and Description |
---|
ChangelogCsvDeserializer(List<LogicalType> parsingTypes,
DynamicTableSource.DataStructureConverter converter,
TypeInformation<RowData> producedTypeInfo,
String columnDelimiter) |
SocketDynamicTableSource(String hostname,
int port,
byte byteDelimiter,
DecodingFormat<DeserializationSchema<RowData>> decodingFormat,
DataType producedDataType) |
SocketSourceFunction(String hostname,
int port,
byte byteDelimiter,
DeserializationSchema<RowData> deserializer) |
Modifier and Type | Method and Description |
---|---|
RowData |
InternalRowMergerFunction.eval(RowData r1,
RowData r2) |
Modifier and Type | Method and Description |
---|---|
RowData |
InternalRowMergerFunction.eval(RowData r1,
RowData r2) |
Modifier and Type | Method and Description |
---|---|
abstract CompletableFuture<Collection<RowData>> |
AsyncLookupFunction.asyncLookup(RowData keyRow)
Asynchronously looks up rows matching the lookup keys.
|
abstract Collection<RowData> |
LookupFunction.lookup(RowData keyRow)
Synchronously looks up rows matching the lookup keys.
|
Modifier and Type | Method and Description |
---|---|
abstract CompletableFuture<Collection<RowData>> |
AsyncLookupFunction.asyncLookup(RowData keyRow)
Asynchronously looks up rows matching the lookup keys.
|
abstract Collection<RowData> |
LookupFunction.lookup(RowData keyRow)
Synchronously looks up rows matching the lookup keys.
|
Modifier and Type | Method and Description |
---|---|
void |
AsyncLookupFunction.eval(CompletableFuture<Collection<RowData>> future,
Object... keys)
Invokes
AsyncLookupFunction.asyncLookup(org.apache.flink.table.data.RowData) and chains futures. |
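A connector's synchronous lookup is written by subclassing `LookupFunction` and implementing `lookup(RowData keyRow)`. A toy sketch backed by an in-memory map; a real connector would query an external system (JDBC, HBase, ...) instead:

```java
import java.io.IOException;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.functions.LookupFunction;

// Hypothetical lookup table keyed by a single INT lookup key.
public class MapLookupFunction extends LookupFunction {
    private final Map<Integer, RowData> table = new HashMap<>();

    public MapLookupFunction() {
        table.put(1, GenericRowData.of(1, 100L));
    }

    @Override
    public Collection<RowData> lookup(RowData keyRow) throws IOException {
        // keyRow holds only the lookup-key fields; here a single INT at position 0.
        RowData match = table.get(keyRow.getInt(0));
        return match == null ? Collections.emptyList() : Collections.singletonList(match);
    }
}
```

The framework invokes `eval`, which delegates to `lookup` and collects the returned rows.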
Modifier and Type | Method and Description |
---|---|
List<RowData> |
ResultSet.getData()
All the data in the current results.
|
Constructor and Description |
---|
ResultSet(ResultSet.ResultType resultType,
Long nextToken,
ResolvedSchema resultSchema,
List<RowData> data) |
Modifier and Type | Method and Description |
---|---|
Optional<List<RowData>> |
ResultStore.retrieveRecords() |
Constructor and Description |
---|
ResultFetcher(OperationHandle operationHandle,
ResolvedSchema resultSchema,
CloseableIterator<RowData> resultRows) |
ResultFetcher(OperationHandle operationHandle,
ResolvedSchema resultSchema,
List<RowData> rows) |
ResultStore(CloseableIterator<RowData> result,
int maxBufferSize) |
Modifier and Type | Method and Description |
---|---|
Transformation<RowData> |
TransformationScanProvider.createTransformation(ProviderContext providerContext)
Creates a
Transformation instance. |
Transformation<RowData> |
TransformationSinkProvider.Context.getInputTransformation()
Input transformation to transform.
|
Modifier and Type | Method and Description |
---|---|
String[] |
RowDataToStringConverterImpl.convert(RowData rowData) |
Modifier and Type | Method and Description |
---|---|
protected Transformation<RowData> |
BatchExecLegacyTableSourceScan.createConversionTransformationIfNeeded(StreamExecutionEnvironment streamExecEnv,
ExecNodeConfig config,
ClassLoader classLoader,
Transformation<?> sourceTransform,
org.apache.calcite.rex.RexNode rowtimeExpression) |
Transformation<RowData> |
BatchExecTableSourceScan.createInputFormatTransformation(StreamExecutionEnvironment env,
InputFormat<RowData,?> inputFormat,
InternalTypeInfo<RowData> outputTypeInfo,
String operatorName) |
protected Transformation<RowData> |
BatchExecSortMergeJoin.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecSortWindowAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecSortAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecNestedLoopJoin.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecWindowTableFunction.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecTableSourceScan.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecScriptTransform.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecMultipleInput.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecValues.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecOverAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecPythonGroupWindowAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecPythonGroupAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecLimit.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecRank.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecSort.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecExchange.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecHashWindowAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecLegacyTableSourceScan.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecSortLimit.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecHashJoin.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
Transformation<RowData> |
BatchExecLookupJoin.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecBoundedStreamScan.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecHashAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecPythonOverAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
Modifier and Type | Method and Description |
---|---|
Transformation<RowData> |
BatchExecTableSourceScan.createInputFormatTransformation(StreamExecutionEnvironment env,
InputFormat<RowData,?> inputFormat,
InternalTypeInfo<RowData> outputTypeInfo,
String operatorName) |
Transformation<RowData> |
BatchExecTableSourceScan.createInputFormatTransformation(StreamExecutionEnvironment env,
InputFormat<RowData,?> inputFormat,
InternalTypeInfo<RowData> outputTypeInfo,
String operatorName) |
Modifier and Type | Method and Description |
---|---|
protected abstract Transformation<RowData> |
CommonExecLegacyTableSourceScan.createConversionTransformationIfNeeded(StreamExecutionEnvironment streamExecEnv,
ExecNodeConfig config,
ClassLoader classLoader,
Transformation<?> sourceTransform,
org.apache.calcite.rex.RexNode rowtimeExpression) |
protected abstract Transformation<RowData> |
CommonExecTableSourceScan.createInputFormatTransformation(StreamExecutionEnvironment env,
InputFormat<RowData,?> inputFormat,
InternalTypeInfo<RowData> outputTypeInfo,
String operatorName)
Creates a
Transformation based on the given InputFormat . |
protected Transformation<RowData> |
CommonExecLookupJoin.createJoinTransformation(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config,
boolean upsertMaterialize,
boolean lookupKeyContainsPrimaryKey) |
protected Transformation<RowData> |
CommonExecTableSourceScan.createSourceFunctionTransformation(StreamExecutionEnvironment env,
SourceFunction<RowData> function,
boolean isBounded,
String operatorName,
TypeInformation<RowData> outputTypeInfo)
Adapted from
StreamExecutionEnvironment.addSource(SourceFunction, String,
TypeInformation) but with custom Boundedness . |
protected Transformation<RowData> |
CommonExecMatch.translateOrder(Transformation<RowData> inputTransform,
RowType inputRowType,
ExecNodeConfig config) |
static Tuple2<Pattern<RowData,RowData>,List<String>> |
CommonExecMatch.translatePattern(MatchSpec matchSpec,
ReadableConfig config,
ClassLoader classLoader,
org.apache.calcite.tools.RelBuilder relBuilder,
RowType inputRowType) |
protected Transformation<RowData> |
CommonExecLegacyTableSourceScan.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
CommonExecUnion.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
CommonExecCorrelate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
CommonExecPythonCalc.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
CommonExecExpand.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
CommonExecTableSourceScan.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
CommonExecCalc.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
CommonExecWindowTableFunction.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
CommonExecValues.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
CommonExecPythonCorrelate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
CommonExecMatch.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
Modifier and Type | Method and Description |
---|---|
protected Transformation<RowData> |
StreamExecLegacyTableSourceScan.createConversionTransformationIfNeeded(StreamExecutionEnvironment streamExecEnv,
ExecNodeConfig config,
ClassLoader classLoader,
Transformation<?> sourceTransform,
org.apache.calcite.rex.RexNode rowtimeExpression) |
Transformation<RowData> |
StreamExecTableSourceScan.createInputFormatTransformation(StreamExecutionEnvironment env,
InputFormat<RowData,?> inputFormat,
InternalTypeInfo<RowData> outputTypeInfo,
String operatorName) |
Transformation<RowData> |
StreamExecMatch.translateOrder(Transformation<RowData> inputTransform,
RowType inputRowType,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecMiniBatchAssigner.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecPythonGroupTableAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecGroupTableAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecWindowDeduplicate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecIntervalJoin.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecIncrementalGroupAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecExchange.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecDropUpdateBefore.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecJoin.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecGroupWindowAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecLimit.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecOverAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecSort.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecMultipleInput.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecPythonOverAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecTemporalSort.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecRank.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecSortLimit.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecDeduplicate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecWindowRank.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecChangelogNormalize.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecPythonGroupAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecPythonGroupWindowAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
Transformation<RowData> |
StreamExecLookupJoin.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecWatermarkAssigner.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecWindowJoin.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecDataStreamScan.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecGroupAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecWindowAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecTemporalJoin.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecLocalGroupAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecGlobalGroupAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecLocalWindowAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecGlobalWindowAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
Modifier and Type | Method and Description |
---|---|
Transformation<RowData> |
StreamExecTableSourceScan.createInputFormatTransformation(StreamExecutionEnvironment env,
InputFormat<RowData,?> inputFormat,
InternalTypeInfo<RowData> outputTypeInfo,
String operatorName) |
Transformation<RowData> |
StreamExecMatch.translateOrder(Transformation<RowData> inputTransform,
RowType inputRowType,
ExecNodeConfig config) |
Modifier and Type | Method and Description |
---|---|
static RowDataKeySelector |
KeySelectorUtil.getRowDataSelector(ClassLoader classLoader,
int[] keyFields,
InternalTypeInfo<RowData> rowType) |
static RowDataKeySelector |
KeySelectorUtil.getRowDataSelector(ClassLoader classLoader,
int[] keyFields,
InternalTypeInfo<RowData> rowType,
Class<? extends RowData> outClass)
Creates a RowDataKeySelector to extract keys from a DataStream whose type is
InternalTypeInfo of RowData . |
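Conceptually, a key selector like the ones above projects the configured key field indexes out of each input row into a new key row. A minimal, framework-free sketch of that projection (the `SimpleRow` and `SimpleKeySelector` names here are hypothetical stand-ins, not the Flink API):

```java
// Hypothetical stand-in for a row of column values.
class SimpleRow {
    final Object[] fields;
    SimpleRow(Object... fields) { this.fields = fields; }
}

// Sketch of what a RowDataKeySelector conceptually does:
// project the configured key field indexes into a new key row.
class SimpleKeySelector {
    private final int[] keyFields;
    SimpleKeySelector(int[] keyFields) { this.keyFields = keyFields; }

    SimpleRow getKey(SimpleRow value) {
        Object[] key = new Object[keyFields.length];
        for (int i = 0; i < keyFields.length; i++) {
            key[i] = value.fields[keyFields[i]];
        }
        return new SimpleRow(key);
    }
}
```

The real selectors additionally carry type information (`InternalTypeInfo<RowData>`) and use code-generated projections instead of a field-index loop.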
Modifier and Type | Method and Description |
---|---|
RowData |
ArrowReader.read(int rowId)
Reads the specified row from the underlying Arrow format data.
|
Modifier and Type | Method and Description |
---|---|
static ArrowWriter<RowData> |
ArrowUtils.createRowDataArrowWriter(org.apache.arrow.vector.VectorSchemaRoot root,
RowType rowType)
Creates an
ArrowWriter for the specified VectorSchemaRoot . |
Modifier and Type | Method and Description |
---|---|
RowData |
ArrowSerializer.read(int i) |
Modifier and Type | Method and Description |
---|---|
ArrowWriter<RowData> |
ArrowSerializer.createArrowWriter()
Creates an
ArrowWriter . |
Modifier and Type | Method and Description |
---|---|
void |
ArrowSerializer.write(RowData element) |
Modifier and Type | Method and Description |
---|---|
DataStream<RowData> |
ArrowTableSource.getDataStream(StreamExecutionEnvironment execEnv) |
TypeInformation<RowData> |
ArrowSourceFunction.getProducedType() |
Modifier and Type | Method and Description |
---|---|
void |
ArrowSourceFunction.run(SourceFunction.SourceContext<RowData> ctx) |
Modifier and Type | Method and Description |
---|---|
static BigIntWriter<RowData> |
BigIntWriter.forRow(org.apache.arrow.vector.BigIntVector bigIntVector) |
static BooleanWriter<RowData> |
BooleanWriter.forRow(org.apache.arrow.vector.BitVector bitVector) |
static DateWriter<RowData> |
DateWriter.forRow(org.apache.arrow.vector.DateDayVector dateDayVector) |
static DecimalWriter<RowData> |
DecimalWriter.forRow(org.apache.arrow.vector.DecimalVector decimalVector,
int precision,
int scale) |
static FloatWriter<RowData> |
FloatWriter.forRow(org.apache.arrow.vector.Float4Vector floatVector) |
static DoubleWriter<RowData> |
DoubleWriter.forRow(org.apache.arrow.vector.Float8Vector doubleVector) |
static IntWriter<RowData> |
IntWriter.forRow(org.apache.arrow.vector.IntVector intVector) |
static ArrayWriter<RowData> |
ArrayWriter.forRow(org.apache.arrow.vector.complex.ListVector listVector,
ArrowFieldWriter<ArrayData> elementWriter) |
static SmallIntWriter<RowData> |
SmallIntWriter.forRow(org.apache.arrow.vector.SmallIntVector intVector) |
static RowWriter<RowData> |
RowWriter.forRow(org.apache.arrow.vector.complex.StructVector structVector,
ArrowFieldWriter<RowData>[] fieldsWriters) |
static TinyIntWriter<RowData> |
TinyIntWriter.forRow(org.apache.arrow.vector.TinyIntVector tinyIntVector) |
static TimeWriter<RowData> |
TimeWriter.forRow(org.apache.arrow.vector.ValueVector valueVector) |
static TimestampWriter<RowData> |
TimestampWriter.forRow(org.apache.arrow.vector.ValueVector valueVector,
int precision) |
static VarBinaryWriter<RowData> |
VarBinaryWriter.forRow(org.apache.arrow.vector.VarBinaryVector varBinaryVector) |
static VarCharWriter<RowData> |
VarCharWriter.forRow(org.apache.arrow.vector.VarCharVector varCharVector) |
Modifier and Type | Method and Description |
---|---|
RowData |
ExecutionContext.currentKey() |
RowData |
ExecutionContextImpl.currentKey() |
Modifier and Type | Method and Description |
---|---|
void |
ExecutionContext.setCurrentKey(RowData key)
Sets the current key.
|
void |
ExecutionContextImpl.setCurrentKey(RowData key) |
Modifier and Type | Method and Description |
---|---|
RowData |
LastValueAggFunction.createAccumulator() |
RowData |
FirstValueAggFunction.createAccumulator() |
Modifier and Type | Method and Description |
---|---|
void |
LastValueAggFunction.accumulate(RowData rowData,
Object value) |
void |
FirstValueAggFunction.accumulate(RowData rowData,
Object value) |
void |
LastValueAggFunction.accumulate(RowData rowData,
Object value,
Long order) |
void |
FirstValueAggFunction.accumulate(RowData rowData,
Object value,
Long order) |
void |
FirstValueAggFunction.accumulate(RowData rowData,
StringData value) |
void |
FirstValueAggFunction.accumulate(RowData rowData,
StringData value,
Long order) |
T |
LastValueAggFunction.getValue(RowData rowData) |
T |
FirstValueAggFunction.getValue(RowData acc) |
void |
LastValueAggFunction.resetAccumulator(RowData rowData) |
void |
FirstValueAggFunction.resetAccumulator(RowData rowData) |
Modifier and Type | Method and Description |
---|---|
CompletableFuture<Collection<RowData>> |
CachingAsyncLookupFunction.asyncLookup(RowData keyRow) |
Collection<RowData> |
CachingLookupFunction.lookup(RowData keyRow) |
Modifier and Type | Method and Description |
---|---|
CompletableFuture<Collection<RowData>> |
CachingAsyncLookupFunction.asyncLookup(RowData keyRow) |
Collection<RowData> |
CachingLookupFunction.lookup(RowData keyRow) |
Modifier and Type | Field and Description |
---|---|
protected ConcurrentHashMap<RowData,Collection<RowData>> |
CacheLoader.cache |
Modifier and Type | Method and Description |
---|---|
ConcurrentHashMap<RowData,Collection<RowData>> |
CacheLoader.getCache() |
Collection<RowData> |
LookupFullCache.getIfPresent(RowData key) |
Collection<RowData> |
LookupFullCache.put(RowData key,
Collection<RowData> value) |
Modifier and Type | Method and Description |
---|---|
Collection<RowData> |
LookupFullCache.getIfPresent(RowData key) |
void |
LookupFullCache.invalidate(RowData key) |
Collection<RowData> |
LookupFullCache.put(RowData key,
Collection<RowData> value) |
Modifier and Type | Method and Description |
---|---|
Collection<RowData> |
LookupFullCache.put(RowData key,
Collection<RowData> value) |
Constructor and Description |
---|
InputFormatCacheLoader(InputFormat<RowData,?> initialInputFormat,
GenericRowDataKeySelector keySelector,
RowDataSerializer cacheEntriesSerializer) |
InputSplitCacheLoadTask(ConcurrentHashMap<RowData,Collection<RowData>> cache,
GenericRowDataKeySelector keySelector,
RowDataSerializer cacheEntriesSerializer,
InputFormat<RowData,InputSplit> inputFormat,
InputSplit inputSplit) |
Modifier and Type | Interface and Description |
---|---|
interface |
Projection<IN extends RowData,OUT extends RowData>
Interface for a code-generated projection, which maps one RowData to another.
|
Modifier and Type | Method and Description |
---|---|
RowData |
NamespaceAggsHandleFunctionBase.createAccumulators()
Initializes the accumulators and saves them to an accumulators row.
|
RowData |
AggsHandleFunctionBase.createAccumulators()
Initializes the accumulators and saves them to an accumulators row.
|
RowData |
NamespaceAggsHandleFunctionBase.getAccumulators()
Gets the current accumulators (saved in a row), which contain the current aggregated results.
|
RowData |
AggsHandleFunctionBase.getAccumulators()
Gets the current accumulators (saved in a row), which contain the current aggregated results.
|
RowData |
AggsHandleFunction.getValue()
Gets the result of the aggregation from the current accumulators.
|
RowData |
NamespaceAggsHandleFunction.getValue(N namespace)
Gets the result of the aggregation from the current accumulators and namespace properties
(like window start).
|
Modifier and Type | Method and Description |
---|---|
WatermarkGenerator<RowData> |
GeneratedWatermarkGeneratorSupplier.createWatermarkGenerator(WatermarkGeneratorSupplier.Context context) |
Modifier and Type | Method and Description |
---|---|
void |
NamespaceAggsHandleFunctionBase.accumulate(RowData inputRow)
Accumulates the input values to the accumulators.
|
void |
AggsHandleFunctionBase.accumulate(RowData input)
Accumulates the input values to the accumulators.
|
boolean |
JoinCondition.apply(RowData in1,
RowData in2) |
int |
RecordComparator.compare(RowData o1,
RowData o2) |
abstract Long |
WatermarkGenerator.currentWatermark(RowData row)
Returns the watermark for the current row or null if no watermark should be generated.
|
void |
TableAggsHandleFunction.emitValue(Collector<RowData> out,
RowData currentKey,
boolean isRetract)
Emits the result of the table aggregation through the collector.
|
void |
NamespaceTableAggsHandleFunction.emitValue(N namespace,
RowData key,
Collector<RowData> out)
Emits the result of the aggregation from the current accumulators and namespace properties
(like window start).
|
boolean |
RecordEqualiser.equals(RowData row1,
RowData row2)
Returns
true if the rows are equal to each other and false otherwise. |
void |
NamespaceAggsHandleFunctionBase.merge(N namespace,
RowData otherAcc)
Merges the other accumulators into the current accumulators.
|
void |
AggsHandleFunctionBase.merge(RowData accumulators)
Merges the other accumulators into the current accumulators.
|
void |
GeneratedWatermarkGeneratorSupplier.DefaultWatermarkGenerator.onEvent(RowData event,
long eventTimestamp,
WatermarkOutput output) |
void |
NormalizedKeyComputer.putKey(RowData record,
MemorySegment target,
int offset)
Writes a normalized key for the given record into the target
MemorySegment . |
void |
NamespaceAggsHandleFunctionBase.retract(RowData inputRow)
Retracts the input values from the accumulators.
|
void |
AggsHandleFunctionBase.retract(RowData input)
Retracts the input values from the accumulators.
|
void |
NamespaceAggsHandleFunctionBase.setAccumulators(N namespace,
RowData accumulators)
Sets the current accumulators (saved in a row), which contain the current aggregated results.
|
void |
AggsHandleFunctionBase.setAccumulators(RowData accumulators)
Sets the current accumulators (saved in a row), which contain the current aggregated results.
|
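The createAccumulators / accumulate / retract / merge / getValue contract described above can be illustrated with a plain-Java sketch of a count+sum (average) aggregate. This is a simplified stand-in for illustration only, not the Flink AggsHandleFunction API, and it omits namespaces and state backends:

```java
// Simplified sketch of the accumulate / retract / merge lifecycle,
// using a count+sum aggregate over long values.
class SketchAggsHandle {
    private long count;
    private long sum;

    // createAccumulators: start from an empty accumulator state.
    void createAccumulators() { count = 0; sum = 0; }

    // accumulate: add the input value to the accumulators.
    void accumulate(long input) { count++; sum += input; }

    // retract: remove a previously accumulated value.
    void retract(long input) { count--; sum -= input; }

    // merge: fold another handle's accumulators into this one.
    void merge(SketchAggsHandle other) {
        count += other.count;
        sum += other.sum;
    }

    // getValue: the aggregation result (average here), or null
    // when no rows have been accumulated.
    Long getValue() { return count == 0 ? null : sum / count; }
}
```

Retraction support is what allows changelog inputs (updates and deletes) to keep the aggregate consistent without recomputing from scratch.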
Modifier and Type | Method and Description |
---|---|
void |
TableAggsHandleFunction.emitValue(Collector<RowData> out,
RowData currentKey,
boolean isRetract)
Emits the result of the table aggregation through the collector.
|
void |
NamespaceTableAggsHandleFunction.emitValue(N namespace,
RowData key,
Collector<RowData> out)
Emits the result of the aggregation from the current accumulators and namespace properties
(like window start).
|
Modifier and Type | Class and Description |
---|---|
class |
WrappedRowIterator<T extends RowData>
Wraps a
MutableObjectIterator into a Java RowIterator . |
Modifier and Type | Method and Description |
---|---|
RowData |
ProbeIterator.current() |
RowData |
LongHybridHashTable.getCurrentProbeRow() |
RowData |
BinaryHashTable.getCurrentProbeRow() |
Modifier and Type | Method and Description |
---|---|
abstract long |
LongHybridHashTable.getBuildLongKey(RowData row)
Used by generated code to get the build-side long key.
|
abstract long |
LongHybridHashTable.getProbeLongKey(RowData row)
Used by generated code to get the probe-side long key.
|
abstract BinaryRowData |
LongHybridHashTable.probeToBinary(RowData row)
Used by generated code to convert a probe-side row to a BinaryRowData.
|
void |
BinaryHashTable.putBuildRow(RowData row)
Puts a build-side row into the hash table.
|
void |
ProbeIterator.setInstance(RowData instance) |
boolean |
LongHybridHashTable.tryProbe(RowData record) |
boolean |
BinaryHashTable.tryProbe(RowData record)
Finds matching build-side rows for a probe row.
|
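The hash tables above follow the classic build-then-probe hash-join pattern: index the build side by key, then look up matches for each probe row. A hedged, purely in-memory sketch of that pattern (hypothetical names; the real tables spill to disk, use memory segments, and extract keys via generated code):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal in-memory sketch of the build/probe hash-join pattern:
// putBuildRow() indexes the build side by key, tryProbe() looks up
// matching build rows for a probe key.
class SketchHashJoin {
    private final Map<Long, List<String>> buildIndex = new HashMap<>();
    private List<String> matches = List.of();

    // Build phase: index a build-side row under its long key.
    void putBuildRow(long key, String row) {
        buildIndex.computeIfAbsent(key, k -> new ArrayList<>()).add(row);
    }

    // Probe phase: true if the probe key has matching build rows.
    boolean tryProbe(long key) {
        matches = buildIndex.getOrDefault(key, List.of());
        return !matches.isEmpty();
    }

    List<String> currentMatches() { return matches; }
}
```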
Constructor and Description |
---|
BinaryHashTable(Configuration conf,
Object owner,
AbstractRowDataSerializer buildSideSerializer,
AbstractRowDataSerializer probeSideSerializer,
Projection<RowData,BinaryRowData> buildSideProjection,
Projection<RowData,BinaryRowData> probeSideProjection,
MemoryManager memManager,
long reservedMemorySize,
IOManager ioManager,
int avgRecordLen,
long buildRowCount,
boolean useBloomFilters,
HashJoinType type,
JoinCondition condFunc,
boolean reverseJoin,
boolean[] filterNulls,
boolean tryDistinctBuildRow) |
Modifier and Type | Method and Description |
---|---|
RowData |
GenericRowDataKeySelector.getKey(RowData value) |
RowData |
BinaryRowDataKeySelector.getKey(RowData value) |
RowData |
EmptyRowDataKeySelector.getKey(RowData value) |
Modifier and Type | Method and Description |
---|---|
InternalTypeInfo<RowData> |
GenericRowDataKeySelector.getProducedType() |
InternalTypeInfo<RowData> |
BinaryRowDataKeySelector.getProducedType() |
InternalTypeInfo<RowData> |
EmptyRowDataKeySelector.getProducedType() |
InternalTypeInfo<RowData> |
RowDataKeySelector.getProducedType() |
Modifier and Type | Method and Description |
---|---|
RowData |
GenericRowDataKeySelector.getKey(RowData value) |
RowData |
BinaryRowDataKeySelector.getKey(RowData value) |
RowData |
EmptyRowDataKeySelector.getKey(RowData value) |
Constructor and Description |
---|
BinaryRowDataKeySelector(InternalTypeInfo<RowData> keyRowType,
GeneratedProjection generatedProjection) |
GenericRowDataKeySelector(InternalTypeInfo<RowData> keyRowType,
RowDataSerializer keySerializer,
GeneratedProjection generatedProjection) |
Modifier and Type | Method and Description |
---|---|
RowData |
MiniBatchGlobalGroupAggFunction.addInput(RowData previousAcc,
RowData input)
The
previousAcc is an accumulator, while input is a row in <key, accumulator>
schema; the generated MiniBatchGlobalGroupAggFunction.localAgg projects the input to an
accumulator in its merge method. |
RowData |
MiniBatchLocalGroupAggFunction.addInput(RowData previousAcc,
RowData input) |
RowData |
MiniBatchIncrementalGroupAggFunction.addInput(RowData previousAcc,
RowData input) |
Modifier and Type | Method and Description |
---|---|
List<RowData> |
MiniBatchGroupAggFunction.addInput(List<RowData> value,
RowData input) |
Modifier and Type | Method and Description |
---|---|
List<RowData> |
MiniBatchGroupAggFunction.addInput(List<RowData> value,
RowData input) |
RowData |
MiniBatchGlobalGroupAggFunction.addInput(RowData previousAcc,
RowData input)
The
previousAcc is an accumulator, while input is a row in <key, accumulator>
schema; the generated MiniBatchGlobalGroupAggFunction.localAgg projects the input to an
accumulator in its merge method. |
RowData |
MiniBatchLocalGroupAggFunction.addInput(RowData previousAcc,
RowData input) |
RowData |
MiniBatchIncrementalGroupAggFunction.addInput(RowData previousAcc,
RowData input) |
void |
GroupTableAggFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
GroupAggFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
abstract boolean |
RecordCounter.recordCountIsZero(RowData acc)
We store the counter in the accumulator.
|
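The mini-batch functions above fold each arriving element into a per-key buffer entry via addInput and only flush once per bundle, and the record counter kept inside the accumulator decides when a key's state can be dropped. A rough, framework-free sketch of that bundling step (hypothetical names, not the Flink API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of mini-batch pre-aggregation: addInput folds each element
// into a per-key buffered accumulator; finishBundle emits one result
// per key instead of one per input record.
class SketchMiniBatchAgg {
    private final Map<String, long[]> bundle = new HashMap<>();

    // addInput: fold the input into the buffered accumulator
    // (accumulator layout: [recordCount, sum]).
    void addInput(String key, long value) {
        long[] acc = bundle.computeIfAbsent(key, k -> new long[2]);
        acc[0]++;          // record counter, kept inside the accumulator
        acc[1] += value;
    }

    // recordCountIsZero: a key whose counter dropped to zero holds no
    // live records, so its state could be cleaned up.
    boolean recordCountIsZero(String key) {
        long[] acc = bundle.get(key);
        return acc == null || acc[0] == 0;
    }

    // finishBundle: flush the buffer, one (key, sum) result per key.
    Map<String, Long> finishBundle() {
        Map<String, Long> out = new HashMap<>();
        bundle.forEach((k, acc) -> out.put(k, acc[1]));
        bundle.clear();
        return out;
    }
}
```

Batching this way trades a little latency for far fewer state accesses, which is the motivation behind the MiniBatch* variants above.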
Modifier and Type | Method and Description |
---|---|
List<RowData> |
MiniBatchGroupAggFunction.addInput(List<RowData> value,
RowData input) |
void |
MiniBatchGroupAggFunction.finishBundle(Map<RowData,List<RowData>> buffer,
Collector<RowData> out) |
void |
MiniBatchGlobalGroupAggFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
MiniBatchLocalGroupAggFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
MiniBatchIncrementalGroupAggFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
GroupTableAggFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
GroupAggFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
Constructor and Description |
---|
MiniBatchIncrementalGroupAggFunction(GeneratedAggsHandleFunction genPartialAggsHandler,
GeneratedAggsHandleFunction genFinalAggsHandler,
KeySelector<RowData,RowData> finalKeySelector,
long stateRetentionTime) |
Modifier and Type | Field and Description |
---|---|
protected TimestampedCollector<RowData> |
LocalSlicingWindowAggOperator.collector
This is used for emitting elements with a given timestamp.
|
Modifier and Type | Method and Description |
---|---|
SlicingWindowOperator<RowData,?> |
SlicingWindowAggOperatorBuilder.build() |
Modifier and Type | Method and Description |
---|---|
void |
WindowBuffer.addElement(RowData key,
long window,
RowData element)
Adds an element with its associated key into the buffer.
|
void |
RecordsWindowBuffer.addElement(RowData key,
long sliceEnd,
RowData element) |
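WindowBuffer.addElement associates each record with its key and window and holds it in memory until a flush. A minimal sketch of that buffering scheme (hypothetical names and a string composite key; the real buffer uses binary rows, managed memory, and slice-end timestamps):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a window buffer: records are grouped by (key, windowEnd)
// in memory and only flushed downstream when the buffer is drained.
class SketchWindowBuffer {
    // Maps "key@windowEnd" -> buffered elements for that slice.
    private final Map<String, List<Long>> buffer = new HashMap<>();

    // addElement: associate the element with its key and window.
    void addElement(String key, long windowEnd, long element) {
        buffer.computeIfAbsent(key + "@" + windowEnd, k -> new ArrayList<>())
              .add(element);
    }

    // flush: drain the buffer, returning elements per (key, window).
    Map<String, List<Long>> flush() {
        Map<String, List<Long>> out = new HashMap<>(buffer);
        buffer.clear();
        return out;
    }
}
```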
Modifier and Type | Method and Description |
---|---|
WindowBuffer |
WindowBuffer.LocalFactory.create(Object operatorOwner,
MemoryManager memoryManager,
long memorySize,
RuntimeContext runtimeContext,
Collector<RowData> collector,
java.time.ZoneId shiftTimeZone)
Creates a
WindowBuffer for a local window that buffers elements in memory before
flushing. |
WindowBuffer |
RecordsWindowBuffer.LocalFactory.create(Object operatorOwner,
MemoryManager memoryManager,
long memorySize,
RuntimeContext runtimeContext,
Collector<RowData> collector,
java.time.ZoneId shiftTimeZone) |
WindowBuffer |
WindowBuffer.Factory.create(Object operatorOwner,
MemoryManager memoryManager,
long memorySize,
RuntimeContext runtimeContext,
WindowTimerService<Long> timerService,
KeyedStateBackend<RowData> stateBackend,
WindowState<Long> windowState,
boolean isEventTime,
java.time.ZoneId shiftTimeZone)
Creates a
WindowBuffer that buffers elements in memory before flushing. |
WindowBuffer |
RecordsWindowBuffer.Factory.create(Object operatorOwner,
MemoryManager memoryManager,
long memorySize,
RuntimeContext runtimeContext,
WindowTimerService<Long> timerService,
KeyedStateBackend<RowData> stateBackend,
WindowState<Long> windowState,
boolean isEventTime,
java.time.ZoneId shiftTimeZone) |
Modifier and Type | Method and Description |
---|---|
void |
AggCombiner.combine(WindowKey windowKey,
Iterator<RowData> records) |
void |
LocalAggCombiner.combine(WindowKey windowKey,
Iterator<RowData> records) |
void |
GlobalAggCombiner.combine(WindowKey windowKey,
Iterator<RowData> localAccs) |
RecordsCombiner |
LocalAggCombiner.Factory.createRecordsCombiner(RuntimeContext runtimeContext,
Collector<RowData> collector) |
RecordsCombiner |
AggCombiner.Factory.createRecordsCombiner(RuntimeContext runtimeContext,
WindowTimerService<Long> timerService,
KeyedStateBackend<RowData> stateBackend,
WindowState<Long> windowState,
boolean isEventTime) |
RecordsCombiner |
GlobalAggCombiner.Factory.createRecordsCombiner(RuntimeContext runtimeContext,
WindowTimerService<Long> timerService,
KeyedStateBackend<RowData> stateBackend,
WindowState<Long> windowState,
boolean isEventTime) |
Constructor and Description |
---|
LocalAggCombiner(NamespaceAggsHandleFunction<Long> aggregator,
Collector<RowData> collector) |
Modifier and Type | Field and Description |
---|---|
protected TypeSerializer<RowData> |
AbstractWindowAggProcessor.accSerializer |
Modifier and Type | Method and Description |
---|---|
protected void |
AbstractWindowAggProcessor.collect(RowData aggResult) |
boolean |
AbstractWindowAggProcessor.processElement(RowData key,
RowData element) |
Constructor and Description |
---|
AbstractWindowAggProcessor(GeneratedNamespaceAggsHandleFunction<Long> genAggsHandler,
WindowBuffer.Factory bufferFactory,
SliceAssigner sliceAssigner,
TypeSerializer<RowData> accSerializer,
java.time.ZoneId shiftTimeZone) |
SliceSharedWindowAggProcessor(GeneratedNamespaceAggsHandleFunction<Long> genAggsHandler,
WindowBuffer.Factory bufferFactory,
SliceSharedAssigner sliceAssigner,
TypeSerializer<RowData> accSerializer,
int indexOfCountStar,
java.time.ZoneId shiftTimeZone) |
SliceUnsharedWindowAggProcessor(GeneratedNamespaceAggsHandleFunction<Long> genAggsHandler,
WindowBuffer.Factory windowBufferFactory,
SliceUnsharedAssigner sliceAssigner,
TypeSerializer<RowData> accSerializer,
java.time.ZoneId shiftTimeZone) |
Modifier and Type | Method and Description |
---|---|
RowData |
ProcTimeMiniBatchDeduplicateKeepLastRowFunction.addInput(RowData value,
RowData input) |
RowData |
ProcTimeMiniBatchDeduplicateKeepFirstRowFunction.addInput(RowData value,
RowData input) |
RowData |
RowTimeMiniBatchLatestChangeDeduplicateFunction.addInput(RowData value,
RowData input) |
Modifier and Type | Method and Description |
---|---|
List<RowData> |
RowTimeMiniBatchDeduplicateFunction.addInput(List<RowData> value,
RowData input) |
Modifier and Type | Method and Description |
---|---|
List<RowData> |
RowTimeMiniBatchDeduplicateFunction.addInput(List<RowData> value,
RowData input) |
RowData |
ProcTimeMiniBatchDeduplicateKeepLastRowFunction.addInput(RowData value,
RowData input) |
RowData |
ProcTimeMiniBatchDeduplicateKeepFirstRowFunction.addInput(RowData value,
RowData input) |
RowData |
RowTimeMiniBatchLatestChangeDeduplicateFunction.addInput(RowData value,
RowData input) |
static void |
RowTimeDeduplicateFunction.deduplicateOnRowTime(ValueState<RowData> state,
RowData currentRow,
Collector<RowData> out,
boolean generateUpdateBefore,
boolean generateInsert,
int rowtimeIndex,
boolean keepLastRow)
Processes an element to deduplicate on keys with row-time semantics: emits the current element if it is the last or first row, and retracts the previous element if needed.
|
static boolean |
DeduplicateFunctionHelper.isDuplicate(RowData preRow,
RowData currentRow,
int rowtimeIndex,
boolean keepLastRow)
Returns whether the current row is a duplicate of the previous row.
|
void |
RowTimeDeduplicateFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeDeduplicateKeepLastRowFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeDeduplicateKeepFirstRowFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
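The deduplicate functions above all reduce to one decision: given the previous row under a key and a newly arrived row, does the new row win? The hypothetical helper below mirrors that keep-first / keep-last decision on row time; rows are `long[]` stand-ins with the row time at `rowtimeIndex`, and the tie-breaking rule (ties go to the new row under keep-last) is an assumption, not taken from the Flink sources.

```java
// Sketch of the keep-first / keep-last decision driven by
// DeduplicateFunctionHelper.isDuplicate. Hypothetical simplified helper.
class DedupSketch {
    static boolean shouldReplace(long[] preRow, long[] currentRow, int rowtimeIndex, boolean keepLastRow) {
        if (preRow == null) {
            return true; // no previous row under this key: always keep the new one
        }
        long prev = preRow[rowtimeIndex];
        long curr = currentRow[rowtimeIndex];
        // keep-last: a newer (or equally timed) row wins;
        // keep-first: only a strictly older row wins
        return keepLastRow ? curr >= prev : curr < prev;
    }
}
```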
Modifier and Type | Method and Description |
---|---|
List<RowData> |
RowTimeMiniBatchDeduplicateFunction.addInput(List<RowData> value,
RowData input) |
static void |
RowTimeDeduplicateFunction.deduplicateOnRowTime(ValueState<RowData> state,
RowData currentRow,
Collector<RowData> out,
boolean generateUpdateBefore,
boolean generateInsert,
int rowtimeIndex,
boolean keepLastRow)
Processes an element to deduplicate on keys with row-time semantics: emits the current element if it is the last or first row, and retracts the previous element if needed.
|
void |
RowTimeMiniBatchDeduplicateFunction.finishBundle(Map<RowData,List<RowData>> buffer,
Collector<RowData> out) |
void |
ProcTimeMiniBatchDeduplicateKeepLastRowFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
ProcTimeMiniBatchDeduplicateKeepFirstRowFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
RowTimeMiniBatchLatestChangeDeduplicateFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
RowTimeDeduplicateFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeDeduplicateKeepLastRowFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeDeduplicateKeepFirstRowFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
Constructor and Description |
---|
ProcTimeDeduplicateKeepLastRowFunction(InternalTypeInfo<RowData> typeInfo,
long stateRetentionTime,
boolean generateUpdateBefore,
boolean generateInsert,
boolean inputInsertOnly,
GeneratedRecordEqualiser genRecordEqualiser) |
ProcTimeMiniBatchDeduplicateKeepFirstRowFunction(TypeSerializer<RowData> serializer,
long stateRetentionTime) |
ProcTimeMiniBatchDeduplicateKeepLastRowFunction(InternalTypeInfo<RowData> typeInfo,
TypeSerializer<RowData> serializer,
long stateRetentionTime,
boolean generateUpdateBefore,
boolean generateInsert,
boolean inputInsertOnly,
GeneratedRecordEqualiser genRecordEqualiser) |
RowTimeDeduplicateFunction(InternalTypeInfo<RowData> typeInfo,
long minRetentionTime,
int rowtimeIndex,
boolean generateUpdateBefore,
boolean generateInsert,
boolean keepLastRow) |
RowTimeMiniBatchDeduplicateFunction(InternalTypeInfo<RowData> typeInfo,
TypeSerializer<RowData> serializer,
long minRetentionTime,
int rowtimeIndex,
boolean generateUpdateBefore,
boolean generateInsert,
boolean keepLastRow) |
RowTimeMiniBatchLatestChangeDeduplicateFunction(InternalTypeInfo<RowData> typeInfo,
TypeSerializer<RowData> serializer,
long minRetentionTime,
int rowtimeIndex,
boolean generateUpdateBefore,
boolean generateInsert,
boolean keepLastRow) |
Modifier and Type | Method and Description |
---|---|
SlicingWindowOperator<RowData,?> |
RowTimeWindowDeduplicateOperatorBuilder.build() |
Modifier and Type | Method and Description |
---|---|
RowTimeWindowDeduplicateOperatorBuilder |
RowTimeWindowDeduplicateOperatorBuilder.inputSerializer(AbstractRowDataSerializer<RowData> inputSerializer) |
RowTimeWindowDeduplicateOperatorBuilder |
RowTimeWindowDeduplicateOperatorBuilder.keySerializer(PagedTypeSerializer<RowData> keySerializer) |
Modifier and Type | Method and Description |
---|---|
void |
RowTimeDeduplicateRecordsCombiner.combine(WindowKey windowKey,
Iterator<RowData> records) |
RecordsCombiner |
RowTimeDeduplicateRecordsCombiner.Factory.createRecordsCombiner(RuntimeContext runtimeContext,
WindowTimerService<Long> timerService,
KeyedStateBackend<RowData> stateBackend,
WindowState<Long> windowState,
boolean isEventTime) |
Constructor and Description |
---|
Factory(TypeSerializer<RowData> recordSerializer,
int rowtimeIndex,
boolean keepLastRow) |
RowTimeDeduplicateRecordsCombiner(WindowTimerService<Long> timerService,
StateKeyContext keyContext,
WindowValueState<Long> dataState,
int rowtimeIndex,
boolean keepLastRow,
TypeSerializer<RowData> recordSerializer) |
Modifier and Type | Method and Description |
---|---|
boolean |
RowTimeWindowDeduplicateProcessor.processElement(RowData key,
RowData element) |
Constructor and Description |
---|
RowTimeWindowDeduplicateProcessor(TypeSerializer<RowData> inputSerializer,
WindowBuffer.Factory bufferFactory,
int windowEndIndex,
java.time.ZoneId shiftTimeZone) |
Modifier and Type | Method and Description |
---|---|
void |
DynamicFilteringDataCollectorOperator.processElement(StreamRecord<RowData> element) |
Modifier and Type | Method and Description |
---|---|
void |
HiveScriptTransformOperator.processElement(StreamRecord<RowData> element) |
Constructor and Description |
---|
HiveScriptTransformOutReadThread(org.apache.hadoop.hive.ql.exec.RecordReader recordReader,
LogicalType outputType,
org.apache.hadoop.hive.serde2.AbstractSerDe outSerDe,
org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector structObjectInspector,
StreamRecordCollector<RowData> collector) |
Modifier and Type | Method and Description |
---|---|
RowData |
SortMergeJoinIterator.getProbeRow() |
RowData |
OuterJoinPaddingUtil.padLeft(RowData leftRow)
Returns a padding result with the given left row.
|
RowData |
OuterJoinPaddingUtil.padRight(RowData rightRow)
Returns a padding result with the given right row.
|
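The `OuterJoinPaddingUtil.padLeft` / `padRight` entries above produce a full-width joined row in which the side without a match is filled with nulls. A minimal self-contained sketch of that behaviour, with `Object[]` rows standing in for `RowData` and illustrative arities:

```java
// Sketch of outer-join padding: when one side has no match, the missing
// side's fields are left null in the joined row.
class PaddingSketch {
    private final int leftArity;
    private final int rightArity;

    PaddingSketch(int leftArity, int rightArity) {
        this.leftArity = leftArity;
        this.rightArity = rightArity;
    }

    /** Left row present, right side padded with nulls. */
    Object[] padLeft(Object[] leftRow) {
        Object[] joined = new Object[leftArity + rightArity];
        System.arraycopy(leftRow, 0, joined, 0, leftArity);
        return joined; // trailing right fields stay null
    }

    /** Right row present, left side padded with nulls. */
    Object[] padRight(Object[] rightRow) {
        Object[] joined = new Object[leftArity + rightArity];
        System.arraycopy(rightRow, 0, joined, leftArity, rightArity);
        return joined; // leading left fields stay null
    }
}
```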
Modifier and Type | Method and Description |
---|---|
boolean |
JoinConditionWithNullFilters.apply(RowData left,
RowData right) |
abstract void |
HashJoinOperator.join(RowIterator<BinaryRowData> buildIter,
RowData probeRow) |
RowData |
OuterJoinPaddingUtil.padLeft(RowData leftRow)
Returns a padding result with the given left row.
|
RowData |
OuterJoinPaddingUtil.padRight(RowData rightRow)
Returns a padding result with the given right row.
|
void |
SortMergeJoinFunction.processElement1(RowData element) |
void |
SortMergeJoinFunction.processElement2(RowData element) |
Modifier and Type | Method and Description |
---|---|
void |
HashJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
SortMergeJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
HashJoinOperator.processElement2(StreamRecord<RowData> element) |
void |
SortMergeJoinOperator.processElement2(StreamRecord<RowData> element) |
Modifier and Type | Method and Description |
---|---|
RowData |
PaddingRightMapFunction.map(RowData value) |
RowData |
PaddingLeftMapFunction.map(RowData value) |
Modifier and Type | Method and Description |
---|---|
TypeInformation<RowData> |
PaddingRightMapFunction.getProducedType() |
TypeInformation<RowData> |
PaddingLeftMapFunction.getProducedType() |
TypeInformation<RowData> |
FilterAllFlatMapFunction.getProducedType() |
TypeInformation<RowData> |
IntervalJoinFunction.getProducedType() |
Modifier and Type | Method and Description |
---|---|
void |
FilterAllFlatMapFunction.flatMap(RowData value,
Collector<RowData> out) |
void |
IntervalJoinFunction.join(RowData first,
RowData second,
Collector<RowData> out) |
RowData |
PaddingRightMapFunction.map(RowData value) |
RowData |
PaddingLeftMapFunction.map(RowData value) |
void |
IntervalJoinFunction.setJoinKey(RowData currentKey) |
Modifier and Type | Method and Description |
---|---|
void |
FilterAllFlatMapFunction.flatMap(RowData value,
Collector<RowData> out) |
void |
IntervalJoinFunction.join(RowData first,
RowData second,
Collector<RowData> out) |
Constructor and Description |
---|
FilterAllFlatMapFunction(InternalTypeInfo<RowData> inputTypeInfo) |
IntervalJoinFunction(GeneratedJoinCondition joinCondition,
InternalTypeInfo<RowData> returnTypeInfo,
boolean[] filterNulls) |
PaddingLeftMapFunction(OuterJoinPaddingUtil paddingUtil,
InternalTypeInfo<RowData> returnType) |
PaddingRightMapFunction(OuterJoinPaddingUtil paddingUtil,
InternalTypeInfo<RowData> returnType) |
ProcTimeIntervalJoin(FlinkJoinType joinType,
long leftLowerBound,
long leftUpperBound,
InternalTypeInfo<RowData> leftType,
InternalTypeInfo<RowData> rightType,
IntervalJoinFunction genJoinFunc) |
RowTimeIntervalJoin(FlinkJoinType joinType,
long leftLowerBound,
long leftUpperBound,
long allowedLateness,
InternalTypeInfo<RowData> leftType,
InternalTypeInfo<RowData> rightType,
IntervalJoinFunction joinFunc,
int leftTimeIdx,
int rightTimeIdx) |
Modifier and Type | Field and Description |
---|---|
protected ListenableCollector<RowData> |
LookupJoinRunner.collector |
Modifier and Type | Method and Description |
---|---|
CompletableFuture<Collection<RowData>> |
RetryableAsyncLookupFunctionDelegator.asyncLookup(RowData keyRow) |
TableFunctionResultFuture<RowData> |
AsyncLookupJoinWithCalcRunner.createFetcherResultFuture(Configuration parameters) |
TableFunctionResultFuture<RowData> |
AsyncLookupJoinRunner.createFetcherResultFuture(Configuration parameters) |
Collector<RowData> |
LookupJoinRunner.getFetcherCollector() |
Collector<RowData> |
LookupJoinWithCalcRunner.getFetcherCollector() |
AsyncRetryPredicate<RowData> |
ResultRetryStrategy.getRetryPredicate() |
Collection<RowData> |
RetryableLookupFunctionDelegator.lookup(RowData keyRow) |
Modifier and Type | Method and Description |
---|---|
void |
AsyncLookupJoinRunner.asyncInvoke(RowData input,
ResultFuture<RowData> resultFuture) |
CompletableFuture<Collection<RowData>> |
RetryableAsyncLookupFunctionDelegator.asyncLookup(RowData keyRow) |
void |
LookupJoinRunner.doFetch(RowData in) |
Collection<RowData> |
RetryableLookupFunctionDelegator.lookup(RowData keyRow) |
void |
LookupJoinRunner.padNullForLeftJoin(RowData in,
Collector<RowData> out) |
void |
LookupJoinRunner.prepareCollector(RowData in,
Collector<RowData> out) |
void |
KeyedLookupJoinWrapper.processElement(RowData in,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
LookupJoinRunner.processElement(RowData in,
ProcessFunction.Context ctx,
Collector<RowData> out) |
Modifier and Type | Method and Description |
---|---|
void |
AsyncLookupJoinRunner.asyncInvoke(RowData input,
ResultFuture<RowData> resultFuture) |
static ResultRetryStrategy |
ResultRetryStrategy.fixedDelayRetry(int maxAttempts,
long backoffTimeMillis,
java.util.function.Predicate<Collection<RowData>> resultPredicate)
Creates a fixed-delay retry strategy with the given parameters.
|
void |
LookupJoinRunner.padNullForLeftJoin(RowData in,
Collector<RowData> out) |
void |
LookupJoinRunner.prepareCollector(RowData in,
Collector<RowData> out) |
void |
KeyedLookupJoinWrapper.processElement(RowData in,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
LookupJoinRunner.processElement(RowData in,
ProcessFunction.Context ctx,
Collector<RowData> out) |
Modifier and Type | Field and Description |
---|---|
RowData |
AbstractStreamingJoinOperator.OuterRecord.record |
Modifier and Type | Field and Description |
---|---|
protected TimestampedCollector<RowData> |
AbstractStreamingJoinOperator.collector |
protected InternalTypeInfo<RowData> |
AbstractStreamingJoinOperator.leftType |
protected InternalTypeInfo<RowData> |
AbstractStreamingJoinOperator.rightType |
Modifier and Type | Method and Description |
---|---|
Iterable<RowData> |
AbstractStreamingJoinOperator.AssociatedRecords.getRecords()
Gets the iterable of records.
|
Modifier and Type | Method and Description |
---|---|
static AbstractStreamingJoinOperator.AssociatedRecords |
AbstractStreamingJoinOperator.AssociatedRecords.of(RowData input,
boolean inputIsLeft,
JoinRecordStateView otherSideStateView,
JoinCondition condition)
Creates an
AbstractStreamingJoinOperator.AssociatedRecords representing the records associated with the input row. |
Modifier and Type | Method and Description |
---|---|
void |
StreamingJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
StreamingSemiAntiJoinOperator.processElement1(StreamRecord<RowData> element)
Processes an input element and outputs incrementally joined records; retraction messages may be sent in some scenarios.
|
void |
StreamingJoinOperator.processElement2(StreamRecord<RowData> element) |
void |
StreamingSemiAntiJoinOperator.processElement2(StreamRecord<RowData> element)
Processes an input element and outputs incrementally joined records; retraction messages may be sent in some scenarios.
|
Constructor and Description |
---|
AbstractStreamingJoinOperator(InternalTypeInfo<RowData> leftType,
InternalTypeInfo<RowData> rightType,
GeneratedJoinCondition generatedJoinCondition,
JoinInputSideSpec leftInputSideSpec,
JoinInputSideSpec rightInputSideSpec,
boolean[] filterNullKeys,
long stateRetentionTime) |
StreamingJoinOperator(InternalTypeInfo<RowData> leftType,
InternalTypeInfo<RowData> rightType,
GeneratedJoinCondition generatedJoinCondition,
JoinInputSideSpec leftInputSideSpec,
JoinInputSideSpec rightInputSideSpec,
boolean leftIsOuter,
boolean rightIsOuter,
boolean[] filterNullKeys,
long stateRetentionTime) |
StreamingSemiAntiJoinOperator(boolean isAntiJoin,
InternalTypeInfo<RowData> leftType,
InternalTypeInfo<RowData> rightType,
GeneratedJoinCondition generatedJoinCondition,
JoinInputSideSpec leftInputSideSpec,
JoinInputSideSpec rightInputSideSpec,
boolean[] filterNullKeys,
long stateRetentionTime) |
Modifier and Type | Method and Description |
---|---|
Iterable<RowData> |
JoinRecordStateView.getRecords()
Gets all the records under the current context (i.e.
|
Iterable<Tuple2<RowData,Integer>> |
OuterJoinRecordStateView.getRecordsAndNumOfAssociations()
Gets all the records and number of associations under the current context (i.e.
|
KeySelector<RowData,RowData> |
JoinInputSideSpec.getUniqueKeySelector()
Returns the
KeySelector to extract unique key from the input row. |
InternalTypeInfo<RowData> |
JoinInputSideSpec.getUniqueKeyType()
Returns the
TypeInformation of the unique key. |
Modifier and Type | Method and Description |
---|---|
void |
JoinRecordStateView.addRecord(RowData record)
Add a new record to the state view.
|
void |
OuterJoinRecordStateView.addRecord(RowData record,
int numOfAssociations)
Adds a new record with the number of associations to the state view.
|
void |
JoinRecordStateView.retractRecord(RowData record)
Retract the record from the state view.
|
void |
OuterJoinRecordStateView.updateNumOfAssociations(RowData record,
int numOfAssociations)
Updates the number of associations belonging to the record.
|
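The `OuterJoinRecordStateView` mutators above pair each stored record with the number of records it has joined with on the other side, so retractions can decide whether a null-padded result must be emitted or withdrawn. A simplified in-memory analogue, with `String` rows purely for illustration (the real view is a multiset and can hold duplicate records, which this sketch ignores):

```java
import java.util.*;

// Simplified in-memory analogue of OuterJoinRecordStateView:
// record -> number of associations on the other join side.
class OuterStateViewSketch {
    private final Map<String, Integer> recordToAssociations = new HashMap<>();

    void addRecord(String record, int numOfAssociations) {
        recordToAssociations.put(record, numOfAssociations);
    }

    void retractRecord(String record) {
        recordToAssociations.remove(record);
    }

    void updateNumOfAssociations(String record, int numOfAssociations) {
        recordToAssociations.put(record, numOfAssociations);
    }

    /** Returns the association count, or null if the record is not stored. */
    Integer numOfAssociations(String record) {
        return recordToAssociations.get(record);
    }
}
```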
Modifier and Type | Method and Description |
---|---|
static JoinRecordStateView |
JoinRecordStateViews.create(RuntimeContext ctx,
String stateName,
JoinInputSideSpec inputSideSpec,
InternalTypeInfo<RowData> recordType,
long retentionTime)
Creates a
JoinRecordStateView depending on the JoinInputSideSpec . |
static OuterJoinRecordStateView |
OuterJoinRecordStateViews.create(RuntimeContext ctx,
String stateName,
JoinInputSideSpec inputSideSpec,
InternalTypeInfo<RowData> recordType,
long retentionTime)
Creates a
OuterJoinRecordStateView depending on the JoinInputSideSpec . |
static JoinInputSideSpec |
JoinInputSideSpec.withUniqueKey(InternalTypeInfo<RowData> uniqueKeyType,
KeySelector<RowData,RowData> uniqueKeySelector)
Creates a
JoinInputSideSpec whose input has a unique key. |
static JoinInputSideSpec |
JoinInputSideSpec.withUniqueKeyContainedByJoinKey(InternalTypeInfo<RowData> uniqueKeyType,
KeySelector<RowData,RowData> uniqueKeySelector)
Creates a
JoinInputSideSpec whose input has a unique key that is contained in the join key. |
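`JoinRecordStateViews.create` and `OuterJoinRecordStateViews.create` pick a state layout based on the `JoinInputSideSpec`. The sketch below illustrates one plausible mapping from the spec's two unique-key properties to a layout; the specific mapping is an assumption for illustration, not lifted from the Flink sources.

```java
// Hypothetical chooser in the spirit of JoinRecordStateViews.create(...):
// the spec's unique-key properties determine how records are stored.
class StateLayoutChooser {
    enum StateLayout { VALUE_STATE, MAP_KEYED_BY_UNIQUE_KEY, MAP_WITH_RECORD_COUNT }

    static StateLayout choose(boolean hasUniqueKey, boolean uniqueKeyContainedByJoinKey) {
        if (hasUniqueKey && uniqueKeyContainedByJoinKey) {
            // at most one record per join key: a single value suffices
            return StateLayout.VALUE_STATE;
        } else if (hasUniqueKey) {
            // one record per unique key, possibly many unique keys per join key
            return StateLayout.MAP_KEYED_BY_UNIQUE_KEY;
        } else {
            // duplicates possible: keep a count per distinct record
            return StateLayout.MAP_WITH_RECORD_COUNT;
        }
    }
}
```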
Modifier and Type | Method and Description |
---|---|
void |
TemporalRowTimeJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
TemporalProcessTimeJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
TemporalRowTimeJoinOperator.processElement2(StreamRecord<RowData> element) |
void |
TemporalProcessTimeJoinOperator.processElement2(StreamRecord<RowData> element) |
Constructor and Description |
---|
TemporalProcessTimeJoinOperator(InternalTypeInfo<RowData> rightType,
GeneratedJoinCondition generatedJoinCondition,
long minRetentionTime,
long maxRetentionTime,
boolean isLeftOuterJoin) |
TemporalRowTimeJoinOperator(InternalTypeInfo<RowData> leftType,
InternalTypeInfo<RowData> rightType,
GeneratedJoinCondition generatedJoinCondition,
int leftTimeAttribute,
int rightTimeAttribute,
long minRetentionTime,
long maxRetentionTime,
boolean isLeftOuterJoin) |
Modifier and Type | Field and Description |
---|---|
protected TimestampedCollector<RowData> |
WindowJoinOperator.collector
This is used for emitting elements with a given timestamp.
|
Modifier and Type | Method and Description |
---|---|
abstract void |
WindowJoinOperator.join(Iterable<RowData> leftRecords,
Iterable<RowData> rightRecords) |
WindowJoinOperatorBuilder |
WindowJoinOperatorBuilder.leftSerializer(TypeSerializer<RowData> leftSerializer) |
void |
WindowJoinOperator.onEventTime(InternalTimer<RowData,Long> timer) |
void |
WindowJoinOperator.onProcessingTime(InternalTimer<RowData,Long> timer) |
void |
WindowJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
WindowJoinOperator.processElement2(StreamRecord<RowData> element) |
WindowJoinOperatorBuilder |
WindowJoinOperatorBuilder.rightSerializer(TypeSerializer<RowData> rightSerializer) |
Modifier and Type | Method and Description |
---|---|
int |
RowDataEventComparator.compare(RowData row1,
RowData row2) |
boolean |
IterativeConditionRunner.filter(RowData value,
IterativeCondition.Context<RowData> ctx) |
Modifier and Type | Method and Description |
---|---|
boolean |
IterativeConditionRunner.filter(RowData value,
IterativeCondition.Context<RowData> ctx) |
void |
PatternProcessFunctionRunner.processMatch(Map<String,List<RowData>> match,
PatternProcessFunction.Context ctx,
Collector<RowData> out) |
Constructor and Description |
---|
IterativeConditionRunner(GeneratedFunction<RichIterativeCondition<RowData>> generatedFunction) |
PatternProcessFunctionRunner(GeneratedFunction<PatternProcessFunction<RowData,RowData>> generatedFunction) |
Modifier and Type | Method and Description |
---|---|
boolean |
DropUpdateBeforeFunction.filter(RowData value) |
Modifier and Type | Class and Description |
---|---|
class |
TableOperatorWrapper<OP extends StreamOperator<RowData>>
This class handles the close, endInput and other related logic of a
StreamOperator . |
Modifier and Type | Method and Description |
---|---|
<T extends StreamOperator<RowData>> |
BatchMultipleInputStreamOperatorFactory.createStreamOperator(StreamOperatorParameters<RowData> parameters) |
Modifier and Type | Method and Description |
---|---|
void |
TableOperatorWrapper.createOperator(StreamOperatorParameters<RowData> parameters) |
protected StreamConfig |
MultipleInputStreamOperatorBase.createStreamConfig(StreamOperatorParameters<RowData> multipleInputOperatorParameters,
TableOperatorWrapper<?> wrapper) |
protected StreamConfig |
BatchMultipleInputStreamOperator.createStreamConfig(StreamOperatorParameters<RowData> multipleInputOperatorParameters,
TableOperatorWrapper<?> wrapper) |
<T extends StreamOperator<RowData>> |
BatchMultipleInputStreamOperatorFactory.createStreamOperator(StreamOperatorParameters<RowData> parameters) |
Constructor and Description |
---|
BatchMultipleInputStreamOperator(StreamOperatorParameters<RowData> parameters,
List<InputSpec> inputSpecs,
List<TableOperatorWrapper<?>> headWrapper,
TableOperatorWrapper<?> tailWrapper) |
MultipleInputStreamOperatorBase(StreamOperatorParameters<RowData> parameters,
List<InputSpec> inputSpecs,
List<TableOperatorWrapper<?>> headWrappers,
TableOperatorWrapper<?> tailWrapper) |
TableOperatorWrapper(StreamOperatorFactory<RowData> factory,
String operatorName,
List<TypeInformation<?>> allInputTypes,
TypeInformation<?> outputType) |
Modifier and Type | Method and Description |
---|---|
void |
FirstInputOfTwoInput.processElement(StreamRecord<RowData> element) |
void |
OneInput.processElement(StreamRecord<RowData> element) |
void |
SecondInputOfTwoInput.processElement(StreamRecord<RowData> element) |
void |
InputBase.setKeyContextElement(StreamRecord<RowData> record) |
Constructor and Description |
---|
FirstInputOfTwoInput(TwoInputStreamOperator<RowData,RowData,RowData> operator) |
OneInput(OneInputStreamOperator<RowData,RowData> operator) |
SecondInputOfTwoInput(TwoInputStreamOperator<RowData,RowData,RowData> operator) |
Modifier and Type | Method and Description |
---|---|
void |
OneInputStreamOperatorOutput.collect(StreamRecord<RowData> record) |
void |
CopyingBroadcastingOutput.collect(StreamRecord<RowData> record) |
void |
FirstInputOfTwoInputStreamOperatorOutput.collect(StreamRecord<RowData> record) |
void |
CopyingSecondInputOfTwoInputStreamOperatorOutput.collect(StreamRecord<RowData> record) |
void |
SecondInputOfTwoInputStreamOperatorOutput.collect(StreamRecord<RowData> record) |
void |
BroadcastingOutput.collect(StreamRecord<RowData> record) |
Modifier and Type | Method and Description |
---|---|
void |
RowTimeRowsBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
RowTimeRangeBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
AbstractRowTimeUnboundedPrecedingOver.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out)
Puts an element from the input stream into state if it is not late.
|
void |
ProcTimeRangeBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeRowsBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeUnboundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
Modifier and Type | Method and Description |
---|---|
void |
RowTimeRowsBoundedPrecedingFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
RowTimeRangeBoundedPrecedingFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
AbstractRowTimeUnboundedPrecedingOver.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
ProcTimeRangeBoundedPrecedingFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
ProcTimeRowsBoundedPrecedingFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
RowTimeRowsBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
RowTimeRangeBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
AbstractRowTimeUnboundedPrecedingOver.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out)
Puts an element from the input stream into state if it is not late.
|
void |
ProcTimeRangeBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeRowsBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeUnboundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
BufferDataOverWindowOperator.processElement(StreamRecord<RowData> element) |
void |
NonBufferOverWindowOperator.processElement(StreamRecord<RowData> element) |
void |
RowTimeRangeUnboundedPrecedingFunction.processElementsWithSameTimestamp(List<RowData> curRowList,
Collector<RowData> out) |
protected abstract void |
AbstractRowTimeUnboundedPrecedingOver.processElementsWithSameTimestamp(List<RowData> curRowList,
Collector<RowData> out)
Processes records with the same timestamp; the mechanism differs between rows and range windows.
|
void |
RowTimeRowsUnboundedPrecedingFunction.processElementsWithSameTimestamp(List<RowData> curRowList,
Collector<RowData> out) |
Modifier and Type | Method and Description |
---|---|
RowData |
RangeSlidingOverFrame.process(int index,
RowData current) |
RowData |
RowUnboundedPrecedingOverFrame.process(int index,
RowData current) |
RowData |
RowUnboundedFollowingOverFrame.process(int index,
RowData current) |
RowData |
UnboundedOverWindowFrame.process(int index,
RowData current) |
RowData |
OverWindowFrame.process(int index,
RowData current)
Returns the accumulated value (ACC) of the window frame.
|
RowData |
RowSlidingOverFrame.process(int index,
RowData current) |
RowData |
RangeUnboundedPrecedingOverFrame.process(int index,
RowData current) |
RowData |
RangeUnboundedFollowingOverFrame.process(int index,
RowData current) |
RowData |
InsensitiveOverFrame.process(int index,
RowData current) |
RowData |
OffsetOverFrame.process(int index,
RowData current) |
Modifier and Type | Method and Description |
---|---|
long |
OffsetOverFrame.CalcOffsetFunc.calc(RowData row) |
RowData |
RangeSlidingOverFrame.process(int index,
RowData current) |
RowData |
RowUnboundedPrecedingOverFrame.process(int index,
RowData current) |
RowData |
RowUnboundedFollowingOverFrame.process(int index,
RowData current) |
RowData |
UnboundedOverWindowFrame.process(int index,
RowData current) |
RowData |
OverWindowFrame.process(int index,
RowData current)
Returns the accumulated value (ACC) of the window frame.
|
RowData |
RowSlidingOverFrame.process(int index,
RowData current) |
RowData |
RangeUnboundedPrecedingOverFrame.process(int index,
RowData current) |
RowData |
RangeUnboundedFollowingOverFrame.process(int index,
RowData current) |
RowData |
InsensitiveOverFrame.process(int index,
RowData current) |
RowData |
OffsetOverFrame.process(int index,
RowData current) |
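`OverWindowFrame.process(int index, RowData current)` returns the accumulated value for the current row's frame. As a hedged, JDK-only sketch of the ROWS-sliding idea (a hypothetical helper using `long` values in place of `RowData`, not Flink's implementation):

```java
import java.util.List;

public class RowsSlidingFrameSketch {
    // Accumulates a ROWS frame of [index - precedingRows, index] over the
    // inputs, mirroring what a sliding over-window frame returns per row.
    public static long process(List<Long> rows, int index, int precedingRows) {
        int start = Math.max(0, index - precedingRows);
        long acc = 0L;
        for (int i = start; i <= index; i++) {
            acc += rows.get(i);
        }
        return acc;
    }

    public static void main(String[] args) {
        List<Long> rows = List.of(1L, 2L, 3L, 4L);
        // ROWS BETWEEN 1 PRECEDING AND CURRENT ROW at index 2: 2 + 3 = 5
        System.out.println(process(rows, 2, 1));
    }
}
```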
Modifier and Type | Method and Description |
---|---|
void |
AbstractPythonStreamGroupAggregateOperator.processElementInternal(RowData value) |
abstract void |
AbstractPythonStreamAggregateOperator.processElementInternal(RowData value) |
void |
PythonStreamGroupWindowAggregateOperator.processElementInternal(RowData value) |
Modifier and Type | Method and Description |
---|---|
void |
AbstractPythonStreamGroupAggregateOperator.onEventTime(InternalTimer<RowData,VoidNamespace> timer)
Invoked when an event-time timer fires.
|
void |
AbstractPythonStreamGroupAggregateOperator.onProcessingTime(InternalTimer<RowData,VoidNamespace> timer)
Invoked when a processing-time timer fires.
|
void |
AbstractPythonStreamAggregateOperator.processElement(StreamRecord<RowData> element) |
Modifier and Type | Method and Description |
---|---|
RowData |
AbstractArrowPythonAggregateFunctionOperator.getFunctionInput(RowData element) |
Modifier and Type | Method and Description |
---|---|
RowData |
AbstractArrowPythonAggregateFunctionOperator.getFunctionInput(RowData element) |
Modifier and Type | Method and Description |
---|---|
void |
AbstractArrowPythonAggregateFunctionOperator.processElement(StreamRecord<RowData> element) |
Modifier and Type | Method and Description |
---|---|
void |
BatchArrowPythonOverWindowAggregateFunctionOperator.bufferInput(RowData input) |
void |
BatchArrowPythonGroupWindowAggregateFunctionOperator.bufferInput(RowData input) |
void |
BatchArrowPythonGroupAggregateFunctionOperator.bufferInput(RowData input) |
void |
BatchArrowPythonOverWindowAggregateFunctionOperator.processElementInternal(RowData value) |
void |
BatchArrowPythonGroupWindowAggregateFunctionOperator.processElementInternal(RowData value) |
void |
BatchArrowPythonGroupAggregateFunctionOperator.processElementInternal(RowData value) |
Modifier and Type | Method and Description |
---|---|
void |
StreamArrowPythonProcTimeBoundedRangeOperator.bufferInput(RowData input) |
void |
StreamArrowPythonRowTimeBoundedRangeOperator.bufferInput(RowData input) |
void |
StreamArrowPythonGroupWindowAggregateFunctionOperator.bufferInput(RowData input) |
void |
StreamArrowPythonRowTimeBoundedRowsOperator.bufferInput(RowData input) |
void |
StreamArrowPythonProcTimeBoundedRowsOperator.bufferInput(RowData input) |
void |
StreamArrowPythonGroupWindowAggregateFunctionOperator.processElementInternal(RowData value) |
void |
AbstractStreamArrowPythonOverWindowAggregateFunctionOperator.processElementInternal(RowData value) |
void |
StreamArrowPythonProcTimeBoundedRowsOperator.processElementInternal(RowData value) |
Modifier and Type | Method and Description |
---|---|
RowData |
AbstractPythonScalarFunctionOperator.getFunctionInput(RowData element) |
Modifier and Type | Method and Description |
---|---|
void |
AbstractPythonScalarFunctionOperator.bufferInput(RowData input) |
RowData |
AbstractPythonScalarFunctionOperator.getFunctionInput(RowData element) |
void |
PythonScalarFunctionOperator.processElementInternal(RowData value) |
Modifier and Type | Method and Description |
---|---|
void |
EmbeddedPythonScalarFunctionOperator.processElement(StreamRecord<RowData> element) |
Modifier and Type | Method and Description |
---|---|
void |
ArrowPythonScalarFunctionOperator.processElementInternal(RowData value) |
Modifier and Type | Method and Description |
---|---|
RowData |
PythonTableFunctionOperator.getFunctionInput(RowData element) |
Modifier and Type | Method and Description |
---|---|
void |
PythonTableFunctionOperator.bufferInput(RowData input) |
RowData |
PythonTableFunctionOperator.getFunctionInput(RowData element) |
void |
PythonTableFunctionOperator.processElementInternal(RowData value) |
Modifier and Type | Method and Description |
---|---|
void |
EmbeddedPythonTableFunctionOperator.processElement(StreamRecord<RowData> element) |
Modifier and Type | Method and Description |
---|---|
void |
StreamRecordRowDataWrappingCollector.collect(RowData record) |
Constructor and Description |
---|
StreamRecordRowDataWrappingCollector(Collector<StreamRecord<RowData>> out) |
Modifier and Type | Field and Description |
---|---|
protected InternalTypeInfo<RowData> |
AbstractTopNFunction.inputRowType |
protected Comparator<RowData> |
AbstractTopNFunction.sortKeyComparator |
protected KeySelector<RowData,RowData> |
AbstractTopNFunction.sortKeySelector |
Modifier and Type | Method and Description |
---|---|
RowData |
TopNBuffer.getElement(int rank)
Gets the record whose rank equals the given value.
|
RowData |
TopNBuffer.lastElement()
Returns the last record of the last Entry in the buffer.
|
RowData |
TopNBuffer.removeLast()
Removes the last record of the last Entry in the buffer.
|
Modifier and Type | Method and Description |
---|---|
Set<Map.Entry<RowData,Collection<RowData>>> |
TopNBuffer.entrySet()
Returns a Set view of the mappings contained in the buffer. |
Collection<RowData> |
TopNBuffer.get(RowData sortKey)
Gets the record list from the buffer under the sortKey.
|
Comparator<RowData> |
TopNBuffer.getSortKeyComparator()
Gets the sort key comparator used by the buffer.
|
Map.Entry<RowData,Collection<RowData>> |
TopNBuffer.lastEntry()
Returns the last Entry in the buffer.
|
Modifier and Type | Method and Description |
---|---|
boolean |
TopNBuffer.checkSortKeyInBufferRange(RowData sortKey,
long topNum)
Checks whether the record should be put into the buffer.
|
protected boolean |
AbstractTopNFunction.checkSortKeyInBufferRange(RowData sortKey,
TopNBuffer buffer)
Checks whether the record should be put into the buffer.
|
protected void |
AbstractTopNFunction.collectDelete(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectDelete(Collector<RowData> out,
RowData inputRow,
long rank) |
protected void |
AbstractTopNFunction.collectInsert(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectInsert(Collector<RowData> out,
RowData inputRow,
long rank) |
protected void |
AbstractTopNFunction.collectUpdateAfter(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectUpdateAfter(Collector<RowData> out,
RowData inputRow,
long rank) |
protected void |
AbstractTopNFunction.collectUpdateBefore(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectUpdateBefore(Collector<RowData> out,
RowData inputRow,
long rank) |
int |
ComparableRecordComparator.compare(RowData o1,
RowData o2) |
boolean |
TopNBuffer.containsKey(RowData key)
Returns true if the buffer contains a mapping for the specified key. |
Collection<RowData> |
TopNBuffer.get(RowData sortKey)
Gets the record list from the buffer under the sortKey.
|
protected long |
AbstractTopNFunction.initRankEnd(RowData row)
Initializes the rank end.
|
void |
RetractableTopNFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
AppendOnlyTopNFunction.processElement(RowData input,
KeyedProcessFunction.Context context,
Collector<RowData> out) |
void |
AppendOnlyFirstNFunction.processElement(RowData input,
KeyedProcessFunction.Context context,
Collector<RowData> out) |
void |
UpdatableTopNFunction.processElement(RowData input,
KeyedProcessFunction.Context context,
Collector<RowData> out) |
void |
FastTop1Function.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
int |
TopNBuffer.put(RowData sortKey,
RowData value)
Appends a record into the buffer.
|
void |
TopNBuffer.putAll(RowData sortKey,
Collection<RowData> values)
Puts a record list into the buffer under the sortKey.
|
void |
TopNBuffer.remove(RowData sortKey,
RowData value) |
void |
TopNBuffer.removeAll(RowData sortKey)
Removes the entire record list from the buffer under the sortKey.
|
Modifier and Type | Method and Description |
---|---|
protected void |
AbstractTopNFunction.collectDelete(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectDelete(Collector<RowData> out,
RowData inputRow,
long rank) |
protected void |
AbstractTopNFunction.collectInsert(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectInsert(Collector<RowData> out,
RowData inputRow,
long rank) |
protected void |
AbstractTopNFunction.collectUpdateAfter(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectUpdateAfter(Collector<RowData> out,
RowData inputRow,
long rank) |
protected void |
AbstractTopNFunction.collectUpdateBefore(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectUpdateBefore(Collector<RowData> out,
RowData inputRow,
long rank) |
void |
RetractableTopNFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
AppendOnlyTopNFunction.processElement(RowData input,
KeyedProcessFunction.Context context,
Collector<RowData> out) |
void |
AppendOnlyFirstNFunction.processElement(RowData input,
KeyedProcessFunction.Context context,
Collector<RowData> out) |
void |
UpdatableTopNFunction.processElement(RowData input,
KeyedProcessFunction.Context context,
Collector<RowData> out) |
void |
FastTop1Function.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
TopNBuffer.putAll(RowData sortKey,
Collection<RowData> values)
Puts a record list into the buffer under the sortKey.
|
Constructor and Description |
---|
AppendOnlyFirstNFunction(StateTtlConfig ttlConfig,
InternalTypeInfo<RowData> inputRowType,
GeneratedRecordComparator sortKeyGeneratedRecordComparator,
RowDataKeySelector sortKeySelector,
RankType rankType,
RankRange rankRange,
boolean generateUpdateBefore,
boolean outputRankNumber) |
AppendOnlyTopNFunction(StateTtlConfig ttlConfig,
InternalTypeInfo<RowData> inputRowType,
GeneratedRecordComparator sortKeyGeneratedRecordComparator,
RowDataKeySelector sortKeySelector,
RankType rankType,
RankRange rankRange,
boolean generateUpdateBefore,
boolean outputRankNumber,
long cacheSize) |
FastTop1Function(StateTtlConfig ttlConfig,
InternalTypeInfo<RowData> inputRowType,
GeneratedRecordComparator generatedSortKeyComparator,
RowDataKeySelector sortKeySelector,
RankType rankType,
RankRange rankRange,
boolean generateUpdateBefore,
boolean outputRankNumber,
long cacheSize) |
RetractableTopNFunction(StateTtlConfig ttlConfig,
InternalTypeInfo<RowData> inputRowType,
ComparableRecordComparator comparableRecordComparator,
RowDataKeySelector sortKeySelector,
RankType rankType,
RankRange rankRange,
GeneratedRecordEqualiser generatedEqualiser,
boolean generateUpdateBefore,
boolean outputRankNumber) |
TopNBuffer(Comparator<RowData> sortKeyComparator,
java.util.function.Supplier<Collection<RowData>> valueSupplier) |
UpdatableTopNFunction(StateTtlConfig ttlConfig,
InternalTypeInfo<RowData> inputRowType,
RowDataKeySelector rowKeySelector,
GeneratedRecordComparator generatedRecordComparator,
RowDataKeySelector sortKeySelector,
RankType rankType,
RankRange rankRange,
boolean generateUpdateBefore,
boolean outputRankNumber,
long cacheSize) |
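The `TopNBuffer` methods above (`put` returning a rank, `lastElement`, `lastEntry`) describe a sorted multimap keyed by sort key. A simplified, JDK-only analogue, written as an assumption about the documented contract rather than Flink's actual implementation:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Comparator;
import java.util.Map;
import java.util.TreeMap;

// JDK-only analogue of TopNBuffer: records grouped by sort key, groups
// ordered by the sort-key comparator. Hypothetical sketch, not Flink code.
public class TopNBufferSketch<K, V> {
    private final TreeMap<K, Collection<V>> tree;

    public TopNBufferSketch(Comparator<K> sortKeyComparator) {
        this.tree = new TreeMap<>(sortKeyComparator);
    }

    // Appends a record under the sort key and returns its 1-based rank,
    // mirroring TopNBuffer.put(sortKey, value).
    public int put(K sortKey, V value) {
        tree.computeIfAbsent(sortKey, k -> new ArrayList<>()).add(value);
        int rank = 0;
        // headMap(sortKey, true) covers all groups up to and including this key.
        for (Collection<V> group : tree.headMap(sortKey, true).values()) {
            rank += group.size();
        }
        return rank;
    }

    // Returns the last record of the last entry, like TopNBuffer.lastElement().
    public V lastElement() {
        Map.Entry<K, Collection<V>> last = tree.lastEntry();
        if (last == null) {
            return null;
        }
        V result = null;
        for (V v : last.getValue()) {
            result = v;
        }
        return result;
    }

    public static void main(String[] args) {
        TopNBufferSketch<Integer, String> buffer =
                new TopNBufferSketch<>(Comparator.naturalOrder());
        buffer.put(2, "b");
        System.out.println(buffer.put(1, "a"));   // "a" sorts first: rank 1
        System.out.println(buffer.lastElement()); // "b"
    }
}
```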
Modifier and Type | Method and Description |
---|---|
SlicingWindowOperator<RowData,?> |
WindowRankOperatorBuilder.build() |
Modifier and Type | Method and Description |
---|---|
WindowRankOperatorBuilder |
WindowRankOperatorBuilder.inputSerializer(AbstractRowDataSerializer<RowData> inputSerializer) |
WindowRankOperatorBuilder |
WindowRankOperatorBuilder.keySerializer(PagedTypeSerializer<RowData> keySerializer) |
Modifier and Type | Method and Description |
---|---|
void |
TopNRecordsCombiner.combine(WindowKey windowKey,
Iterator<RowData> records) |
RecordsCombiner |
TopNRecordsCombiner.Factory.createRecordsCombiner(RuntimeContext runtimeContext,
WindowTimerService<Long> timerService,
KeyedStateBackend<RowData> stateBackend,
WindowState<Long> windowState,
boolean isEventTime) |
Modifier and Type | Method and Description |
---|---|
boolean |
WindowRankProcessor.processElement(RowData key,
RowData element) |
Constructor and Description |
---|
WindowRankProcessor(TypeSerializer<RowData> inputSerializer,
GeneratedRecordComparator genSortKeyComparator,
TypeSerializer<RowData> sortKeySerializer,
WindowBuffer.Factory bufferFactory,
long rankStart,
long rankEnd,
boolean outputRankNumber,
int windowEndIndex,
java.time.ZoneId shiftTimeZone) |
Modifier and Type | Method and Description |
---|---|
void |
OutputConversionOperator.processElement(StreamRecord<RowData> element) |
void |
SinkOperator.processElement(StreamRecord<RowData> element) |
void |
ConstraintEnforcer.processElement(StreamRecord<RowData> element) |
void |
StreamRecordTimestampInserter.processElement(StreamRecord<RowData> element) |
void |
SinkUpsertMaterializer.processElement(StreamRecord<RowData> element) |
Constructor and Description |
---|
SinkOperator(SinkFunction<RowData> sinkFunction,
int rowtimeFieldIndex) |
SinkUpsertMaterializer(StateTtlConfig ttlConfig,
TypeSerializer<RowData> serializer,
GeneratedRecordEqualiser generatedRecordEqualiser,
GeneratedRecordEqualiser generatedUpsertKeyEqualiser,
int[] inputUpsertKey) |
Modifier and Type | Method and Description |
---|---|
boolean |
BinaryInMemorySortBuffer.write(RowData record)
Writes a given record to this sort buffer.
|
void |
BinaryExternalSorter.write(RowData current) |
protected void |
BinaryIndexedSortable.writeIndexAndNormalizedKey(RowData record,
long currOffset)
Writes the index and the normalized key.
|
Modifier and Type | Method and Description |
---|---|
static BinaryInMemorySortBuffer |
BinaryInMemorySortBuffer.createBuffer(NormalizedKeyComputer normalizedKeyComputer,
AbstractRowDataSerializer<RowData> inputSerializer,
BinaryRowDataSerializer serializer,
RecordComparator comparator,
MemorySegmentPool memoryPool)
Creates an in-memory sorter in `insert` mode.
|
void |
ProcTimeSortOperator.onEventTime(InternalTimer<RowData,VoidNamespace> timer) |
void |
RowTimeSortOperator.onEventTime(InternalTimer<RowData,VoidNamespace> timer) |
void |
ProcTimeSortOperator.onProcessingTime(InternalTimer<RowData,VoidNamespace> timer) |
void |
RowTimeSortOperator.onProcessingTime(InternalTimer<RowData,VoidNamespace> timer) |
void |
RankOperator.processElement(StreamRecord<RowData> element) |
void |
ProcTimeSortOperator.processElement(StreamRecord<RowData> element) |
void |
LimitOperator.processElement(StreamRecord<RowData> element) |
void |
SortOperator.processElement(StreamRecord<RowData> element) |
void |
StreamSortOperator.processElement(StreamRecord<RowData> element) |
void |
SortLimitOperator.processElement(StreamRecord<RowData> element) |
void |
RowTimeSortOperator.processElement(StreamRecord<RowData> element) |
Modifier and Type | Method and Description |
---|---|
RowData |
ValuesInputFormat.nextRecord(RowData reuse) |
Modifier and Type | Method and Description |
---|---|
InternalTypeInfo<RowData> |
ValuesInputFormat.getProducedType() |
Modifier and Type | Method and Description |
---|---|
RowData |
ValuesInputFormat.nextRecord(RowData reuse) |
Constructor and Description |
---|
ValuesInputFormat(GeneratedInput<GenericInputFormat<RowData>> generatedInput,
InternalTypeInfo<RowData> returnType) |
Modifier and Type | Field and Description |
---|---|
protected TimestampedCollector<RowData> |
WindowOperator.collector
This is used for emitting elements with a given timestamp.
|
protected InternalValueState<K,W,RowData> |
WindowOperator.previousState |
Modifier and Type | Method and Description |
---|---|
void |
WindowOperator.processElement(StreamRecord<RowData> record) |
void |
WindowTableFunctionOperator.processElement(StreamRecord<RowData> element) |
Modifier and Type | Method and Description |
---|---|
Collection<TimeWindow> |
SlidingWindowAssigner.assignWindows(RowData element,
long timestamp) |
Collection<CountWindow> |
CountTumblingWindowAssigner.assignWindows(RowData element,
long timestamp) |
abstract Collection<W> |
WindowAssigner.assignWindows(RowData element,
long timestamp)
Given the timestamp and element, returns the set of windows into which it should be placed.
|
Collection<CountWindow> |
CountSlidingWindowAssigner.assignWindows(RowData element,
long timestamp) |
Collection<TimeWindow> |
TumblingWindowAssigner.assignWindows(RowData element,
long timestamp) |
Collection<TimeWindow> |
SessionWindowAssigner.assignWindows(RowData element,
long timestamp) |
Collection<TimeWindow> |
CumulativeWindowAssigner.assignWindows(RowData element,
long timestamp) |
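`WindowAssigner.assignWindows(element, timestamp)` reduces to simple arithmetic on the element's timestamp. A hedged sketch of the usual start alignment (assuming non-negative timestamps and zero offset; Flink additionally handles offsets and negative timestamps):

```java
import java.util.ArrayList;
import java.util.List;

public class WindowAssignmentSketch {
    // Start of the tumbling window of the given size containing the timestamp.
    public static long tumblingWindowStart(long timestamp, long size) {
        return timestamp - (timestamp % size);
    }

    // Starts of all sliding windows [start, start + size) containing the
    // timestamp, newest first.
    public static List<Long> slidingWindowStarts(long timestamp, long size, long slide) {
        List<Long> starts = new ArrayList<>();
        long lastStart = timestamp - (timestamp % slide);
        for (long start = lastStart; start > timestamp - size; start -= slide) {
            starts.add(start);
        }
        return starts;
    }

    public static void main(String[] args) {
        System.out.println(tumblingWindowStart(17L, 5L));      // 15
        System.out.println(slidingWindowStarts(17L, 10L, 5L)); // [15, 10]
    }
}
```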
Modifier and Type | Method and Description |
---|---|
void |
RecordsCombiner.combine(WindowKey windowKey,
Iterator<RowData> records)
Combines the buffered data into state based on the given window-key pair.
|
RecordsCombiner |
RecordsCombiner.LocalFactory.createRecordsCombiner(RuntimeContext runtimeContext,
Collector<RowData> collector) |
RecordsCombiner |
RecordsCombiner.Factory.createRecordsCombiner(RuntimeContext runtimeContext,
WindowTimerService<Long> timerService,
KeyedStateBackend<RowData> stateBackend,
WindowState<Long> windowState,
boolean isEventTime)
Creates a RecordsCombiner that can combine buffered data into states. |
Modifier and Type | Method and Description |
---|---|
RowData |
InternalWindowProcessFunction.Context.getWindowAccumulators(W window)
Gets the accumulators of the given window.
|
Modifier and Type | Method and Description |
---|---|
Collection<W> |
PanedWindowProcessFunction.assignActualWindows(RowData inputRow,
long timestamp) |
abstract Collection<W> |
InternalWindowProcessFunction.assignActualWindows(RowData inputRow,
long timestamp)
Assigns the input element to the actual windows on which the Trigger should trigger. |
Collection<W> |
MergingWindowProcessFunction.assignActualWindows(RowData inputRow,
long timestamp) |
Collection<W> |
GeneralWindowProcessFunction.assignActualWindows(RowData inputRow,
long timestamp) |
Collection<W> |
PanedWindowProcessFunction.assignStateNamespace(RowData inputRow,
long timestamp) |
abstract Collection<W> |
InternalWindowProcessFunction.assignStateNamespace(RowData inputRow,
long timestamp)
Assigns the input element to the state namespace into which it should be accumulated/retracted.
|
Collection<W> |
MergingWindowProcessFunction.assignStateNamespace(RowData inputRow,
long timestamp) |
Collection<W> |
GeneralWindowProcessFunction.assignStateNamespace(RowData inputRow,
long timestamp) |
void |
InternalWindowProcessFunction.Context.setWindowAccumulators(W window,
RowData acc)
Sets the accumulators of the given window.
|
Modifier and Type | Field and Description |
---|---|
protected TimestampedCollector<RowData> |
SlicingWindowOperator.collector
This is used for emitting elements with a given timestamp.
|
Modifier and Type | Method and Description |
---|---|
KeyedStateBackend<RowData> |
SlicingWindowProcessor.Context.getKeyedStateBackend()
Returns the current KeyedStateBackend. |
Modifier and Type | Method and Description |
---|---|
long |
SliceAssigner.assignSliceEnd(RowData element,
ClockService clock)
Returns the end timestamp of the slice to which the given element belongs.
|
long |
SliceAssigners.WindowedSliceAssigner.assignSliceEnd(RowData element,
ClockService clock) |
void |
SlicingWindowProcessor.Context.output(RowData result)
Outputs results to downstream operators.
|
boolean |
SlicingWindowProcessor.processElement(RowData key,
RowData element)
Processes an element and its associated key from the input stream.
|
Modifier and Type | Method and Description |
---|---|
void |
SlicingWindowOperator.processElement(StreamRecord<RowData> element) |
Modifier and Type | Method and Description |
---|---|
RowData |
WindowValueState.value(W window)
Returns the current value of the state under the current key and the given window.
|
Modifier and Type | Method and Description |
---|---|
Iterable<Map.Entry<RowData,UV>> |
WindowMapState.entries(W window)
Returns all the mappings in the state.
|
List<RowData> |
WindowListState.get(W window) |
Iterator<Map.Entry<RowData,UV>> |
WindowMapState.iterator(W window)
Iterates over all the mappings in the state.
|
Iterable<RowData> |
WindowMapState.keys(W window)
Returns all the keys in the state.
|
Modifier and Type | Method and Description |
---|---|
void |
WindowListState.add(W window,
RowData value)
Updates the operator state accessible by get(W) by adding the given value to the list of values. |
boolean |
WindowMapState.contains(W window,
RowData key)
Returns whether there exists the given mapping.
|
UV |
WindowMapState.get(W window,
RowData key)
Returns the current value associated with the given key.
|
void |
WindowMapState.put(W window,
RowData key,
UV value)
Associates a new value with the given key.
|
void |
WindowMapState.remove(W window,
RowData key)
Deletes the mapping of the given key.
|
void |
StateKeyContext.setCurrentKey(RowData key)
Sets the current state key to the given value.
|
void |
WindowValueState.update(W window,
RowData value)
Updates the state with the given value under the current key and the given window.
|
Modifier and Type | Method and Description |
---|---|
void |
WindowMapState.putAll(W window,
Map<RowData,UV> map)
Copies all of the mappings from the given map into the state.
|
Constructor and Description |
---|
WindowListState(InternalListState<RowData,W,RowData> windowState) |
WindowMapState(InternalMapState<RowData,W,RowData,UV> windowState) |
WindowValueState(InternalValueState<RowData,W,RowData> windowState) |
Modifier and Type | Method and Description |
---|---|
Watermark |
PunctuatedWatermarkAssignerWrapper.checkAndGetNextWatermark(RowData row,
long extractedTimestamp) |
Long |
BoundedOutOfOrderWatermarkGenerator.currentWatermark(RowData row) |
long |
PeriodicWatermarkAssignerWrapper.extractTimestamp(RowData row,
long recordTimestamp) |
long |
PunctuatedWatermarkAssignerWrapper.extractTimestamp(RowData element,
long recordTimestamp) |
Modifier and Type | Method and Description |
---|---|
void |
RowTimeMiniBatchAssginerOperator.processElement(StreamRecord<RowData> element) |
void |
WatermarkAssignerOperator.processElement(StreamRecord<RowData> element) |
void |
ProcTimeMiniBatchAssignerOperator.processElement(StreamRecord<RowData> element) |
Modifier and Type | Method and Description |
---|---|
StreamPartitioner<RowData> |
BinaryHashPartitioner.copy() |
Modifier and Type | Method and Description |
---|---|
int |
BinaryHashPartitioner.selectChannel(SerializationDelegate<StreamRecord<RowData>> record) |
Modifier and Type | Class and Description |
---|---|
class |
AbstractRowDataSerializer<T extends RowData>
Row serializer that additionally provides paged serialization methods.
|
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataSerializer.copy(RowData from) |
RowData |
RowDataSerializer.copy(RowData from,
RowData reuse) |
RowData |
RowDataSerializer.createInstance() |
RowData |
RowDataSerializer.deserialize(DataInputView source) |
RowData |
RowDataSerializer.deserialize(RowData reuse,
DataInputView source) |
RowData |
RowDataSerializer.deserializeFromPages(AbstractPagedInputView source) |
RowData |
RowDataSerializer.deserializeFromPages(RowData reuse,
AbstractPagedInputView source) |
RowData |
RowDataSerializer.mapFromPages(RowData reuse,
AbstractPagedInputView source) |
Modifier and Type | Method and Description |
---|---|
TypeSerializer<RowData> |
RowDataSerializer.duplicate() |
static InternalTypeInfo<RowData> |
InternalTypeInfo.of(RowType type)
Creates type information for a RowType represented by internal data structures. |
static InternalTypeInfo<RowData> |
InternalTypeInfo.ofFields(LogicalType... fieldTypes)
Creates type information for a RowType represented by internal data structures. |
static InternalTypeInfo<RowData> |
InternalTypeInfo.ofFields(LogicalType[] fieldTypes,
String[] fieldNames)
Creates type information for a RowType represented by internal data structures. |
TypeSerializerSchemaCompatibility<RowData> |
RowDataSerializer.RowDataSerializerSnapshot.resolveSchemaCompatibility(TypeSerializer<RowData> newSerializer) |
TypeSerializerSnapshot<RowData> |
RowDataSerializer.snapshotConfiguration() |
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataSerializer.copy(RowData from) |
RowData |
RowDataSerializer.copy(RowData from,
RowData reuse) |
RowData |
RowDataSerializer.deserialize(RowData reuse,
DataInputView source) |
RowData |
RowDataSerializer.deserializeFromPages(RowData reuse,
AbstractPagedInputView source) |
RowData |
RowDataSerializer.mapFromPages(RowData reuse,
AbstractPagedInputView source) |
void |
RowDataSerializer.serialize(RowData row,
DataOutputView target) |
int |
RowDataSerializer.serializeToPages(RowData row,
AbstractPagedOutputView target) |
BinaryRowData |
RowDataSerializer.toBinaryRow(RowData row)
Converts RowData into BinaryRowData. |
OUT |
PythonTypeUtils.DataConverter.toExternal(RowData row,
int column) |
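`RowDataSerializer.serialize(row, target)` and `deserialize(source)` follow the usual DataOutputView/DataInputView contract. A toy round-trip over plain JDK streams, with a `long[]` standing in for `RowData` (an illustrative layout assumption, not Flink's binary format):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Toy round-trip mirroring the RowDataSerializer contract, with plain JDK
// streams and a long[] standing in for RowData. Hypothetical sketch only.
public class RowSerializerSketch {

    public static byte[] serialize(long[] row) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            out.writeInt(row.length);     // arity first
            for (long field : row) {
                out.writeLong(field);     // then each field
            }
            out.flush();
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static long[] deserialize(byte[] data) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            long[] row = new long[in.readInt()];
            for (int i = 0; i < row.length; i++) {
                row[i] = in.readLong();
            }
            return row;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        long[] row = {1L, 2L, 3L};
        System.out.println(java.util.Arrays.toString(deserialize(serialize(row))));
    }
}
```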
Modifier and Type | Method and Description |
---|---|
TypeSerializerSchemaCompatibility<RowData> |
RowDataSerializer.RowDataSerializerSnapshot.resolveSchemaCompatibility(TypeSerializer<RowData> newSerializer) |
Constructor and Description |
---|
WindowKeySerializer(PagedTypeSerializer<RowData> keySerializer) |
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataSerializer.deserialize(DataInputView source) |
RowData |
RowDataSerializer.deserialize(RowData reuse,
DataInputView source) |
Modifier and Type | Method and Description |
---|---|
TypeSerializerSchemaCompatibility<RowData> |
RowDataSerializer.RowDataSerializerSnapshot.resolveSchemaCompatibility(TypeSerializer<RowData> newSerializer) |
TypeSerializerSnapshot<RowData> |
RowDataSerializer.snapshotConfiguration() |
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataSerializer.deserialize(RowData reuse,
DataInputView source) |
void |
RowDataSerializer.serialize(RowData row,
DataOutputView target) |
Modifier and Type | Method and Description |
---|---|
TypeSerializerSchemaCompatibility<RowData> |
RowDataSerializer.RowDataSerializerSnapshot.resolveSchemaCompatibility(TypeSerializer<RowData> newSerializer) |
Modifier and Type | Interface and Description |
---|---|
interface |
RowIterator<T extends RowData>
An internal iterator interface which presents a more restrictive API than Iterator. |
Modifier and Type | Method and Description |
---|---|
RowData |
WindowKey.getKey() |
Modifier and Type | Method and Description |
---|---|
void |
ResettableRowBuffer.add(RowData row)
Appends the specified row to the end of this buffer.
|
void |
ResettableExternalBuffer.add(RowData row) |
WindowKey |
WindowKey.replace(long window,
RowData key)
Replaces the currently stored key and window with the given new key and window.
|
Constructor and Description |
---|
WindowKey(long window,
RowData key) |
Modifier and Type | Method and Description |
---|---|
KeyValueIterator<K,Iterator<RowData>> |
AbstractBytesMultiMap.getEntryIterator(boolean requiresCopy) |
Modifier and Type | Method and Description |
---|---|
void |
AbstractBytesMultiMap.append(BytesMap.LookupInfo<K,Iterator<RowData>> lookupInfo,
BinaryRowData value)
Appends a value into the hash map's record area.
|
Constructor and Description |
---|
WindowBytesHashMap(Object owner,
MemoryManager memoryManager,
long memorySize,
PagedTypeSerializer<RowData> keySer,
int valueArity) |
WindowBytesMultiMap(Object owner,
MemoryManager memoryManager,
long memorySize,
PagedTypeSerializer<RowData> keySer,
int valueArity) |
Modifier and Type | Method and Description |
---|---|
String[] |
RowDataToStringConverter.convert(RowData rowData) |
String[] |
TableauStyle.rowFieldsToString(RowData row) |
Modifier and Type | Method and Description |
---|---|
void |
PrintStyle.print(Iterator<RowData> it,
PrintWriter printWriter)
Displays the result.
|
void |
RawContentStyle.print(Iterator<RowData> it,
PrintWriter printWriter) |
void |
TableauStyle.print(Iterator<RowData> it,
PrintWriter printWriter) |
Modifier and Type | Method and Description |
---|---|
static InputFormat<RowData,?> |
PythonTableUtils.getInputFormat(List<Object[]> data,
DataType dataType)
Wraps the unpickled Python data in an InputFormat.
|
Copyright © 2014–2022 The Apache Software Foundation. All rights reserved.