Modifier and Type | Method and Description |
---|---|
protected Row |
HBaseRowInputFormat.mapResultToOutType(org.apache.hadoop.hbase.client.Result res) |
Modifier and Type | Method and Description |
---|---|
DataSet<Row> |
HBaseTableSource.getDataSet(ExecutionEnvironment execEnv) |
TypeInformation<Row> |
HBaseRowInputFormat.getProducedType() |
TypeInformation<Row> |
HBaseTableSource.getReturnType() |
Modifier and Type | Method and Description |
---|---|
protected Row |
RowCsvInputFormat.fillRecord(Row reuse,
Object[] parsedValues) |
Modifier and Type | Method and Description |
---|---|
TypeInformation<Row> |
RowCsvInputFormat.getProducedType() |
Modifier and Type | Method and Description |
---|---|
protected Row |
RowCsvInputFormat.fillRecord(Row reuse,
Object[] parsedValues) |
Modifier and Type | Method and Description |
---|---|
Row |
JDBCInputFormat.nextRecord(Row row)
Stores the next ResultSet row in a tuple.
|
Modifier and Type | Method and Description |
---|---|
Row |
JDBCInputFormat.nextRecord(Row row)
Stores the next ResultSet row in a tuple.
|
void |
JDBCOutputFormat.writeRecord(Row row)
Adds a record to the prepared statement.
|
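The listing above shows that `JDBCInputFormat.nextRecord(Row row)` takes the row it is asked to fill as a parameter, which is Flink's object-reuse pattern: one row instance is recycled for every record instead of allocating a fresh one. The sketch below illustrates only that pattern; `MiniRow`, `nextRecord`, and the fake result set are hypothetical stand-ins, not the real `org.apache.flink.types.Row` or JDBC API.

```java
// Illustrative sketch of the object-reuse pattern behind JDBCInputFormat.nextRecord(Row).
// MiniRow and the Iterator-based "result set" are hypothetical stand-ins.
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ReusePatternSketch {
    /** Minimal fixed-arity record, standing in for Flink's Row. */
    static final class MiniRow {
        final Object[] fields;
        MiniRow(int arity) { fields = new Object[arity]; }
        void setField(int i, Object v) { fields[i] = v; }
        Object getField(int i) { return fields[i]; }
    }

    /** Copies the next source record into the caller-supplied row and returns it,
     *  so no new row object is allocated per record. Returns null at end of input. */
    static MiniRow nextRecord(Iterator<Object[]> source, MiniRow reuse) {
        if (!source.hasNext()) return null;
        Object[] values = source.next();
        for (int i = 0; i < values.length; i++) {
            reuse.setField(i, values[i]);
        }
        return reuse;
    }

    public static void main(String[] args) {
        List<Object[]> fakeResultSet = Arrays.asList(
            new Object[]{1, "alice"}, new Object[]{2, "bob"});
        Iterator<Object[]> it = fakeResultSet.iterator();
        MiniRow reuse = new MiniRow(2);
        MiniRow first = nextRecord(it, reuse);
        System.out.println(first.getField(1));   // alice
        MiniRow second = nextRecord(it, reuse);
        System.out.println(second == reuse);     // true: same object reused
        System.out.println(second.getField(1));  // bob
    }
}
```

Because the same object is handed back each call, callers that need to keep a record beyond the next `nextRecord` call must copy it first.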
Modifier and Type | Method and Description |
---|---|
TypeComparator<Row> |
RowTypeInfo.createComparator(int[] logicalKeyFields,
boolean[] orders,
int logicalFieldOffset,
ExecutionConfig config) |
TypeSerializer<Row> |
RowTypeInfo.createSerializer(ExecutionConfig config) |
protected CompositeType.TypeComparatorBuilder<Row> |
RowTypeInfo.createTypeComparatorBuilder() |
Modifier and Type | Method and Description |
---|---|
Row |
RowSerializer.copy(Row from) |
Row |
RowSerializer.copy(Row from,
Row reuse) |
Row |
RowSerializer.createInstance() |
Row |
RowSerializer.deserialize(DataInputView source) |
Row |
RowSerializer.deserialize(Row reuse,
DataInputView source) |
Row |
RowComparator.readWithKeyDenormalization(Row reuse,
DataInputView source) |
Modifier and Type | Method and Description |
---|---|
TypeSerializer<Row> |
RowSerializer.duplicate() |
TypeComparator<Row> |
RowComparator.duplicate() |
CompatibilityResult<Row> |
RowSerializer.ensureCompatibility(TypeSerializerConfigSnapshot configSnapshot) |
Modifier and Type | Method and Description |
---|---|
int |
RowComparator.compare(Row first,
Row second) |
Row |
RowSerializer.copy(Row from) |
Row |
RowSerializer.copy(Row from,
Row reuse) |
Row |
RowSerializer.deserialize(Row reuse,
DataInputView source) |
boolean |
RowComparator.equalToReference(Row candidate) |
int |
RowComparator.hash(Row record) |
void |
RowComparator.putNormalizedKey(Row record,
MemorySegment target,
int offset,
int numBytes) |
Row |
RowComparator.readWithKeyDenormalization(Row reuse,
DataInputView source) |
void |
RowSerializer.serialize(Row record,
DataOutputView target) |
void |
RowComparator.setReference(Row toCompare) |
static void |
NullMaskUtils.writeNullMask(int len,
Row value,
DataOutputView target) |
void |
RowComparator.writeWithKeyNormalization(Row record,
DataOutputView target) |
Modifier and Type | Method and Description |
---|---|
int |
RowComparator.compareToReference(TypeComparator<Row> referencedComparator) |
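`NullMaskUtils.writeNullMask(int len, Row value, DataOutputView target)` in the table above serializes which fields of a row are null before the field values themselves. A common encoding for this, sketched below with plain arrays and streams, is one bit per field, eight fields per byte. This is an illustrative analogue, not Flink's actual implementation.

```java
// Illustrative sketch, not Flink's implementation: packs the null flags of the first
// `len` fields of a row into a bit mask, one bit per field, eight fields per byte,
// mirroring the idea behind NullMaskUtils.writeNullMask(int, Row, DataOutputView).
import java.io.ByteArrayOutputStream;

public class NullMaskSketch {
    static byte[] writeNullMask(int len, Object[] fields) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int b = 0, bit = 0;
        for (int i = 0; i < len; i++) {
            if (fields[i] == null) {
                b |= (1 << bit);          // set the bit for a null field
            }
            if (++bit == 8) {             // byte full: flush and start the next one
                out.write(b);
                b = 0; bit = 0;
            }
        }
        if (bit > 0) out.write(b);        // flush the last partial byte
        return out.toByteArray();
    }

    static boolean isNull(byte[] mask, int field) {
        return (mask[field / 8] & (1 << (field % 8))) != 0;
    }

    public static void main(String[] args) {
        Object[] row = {1, null, "a", null, null};
        byte[] mask = writeNullMask(row.length, row);
        System.out.println(mask.length);     // 1 byte covers up to 8 fields
        System.out.println(isNull(mask, 1)); // true
        System.out.println(isNull(mask, 2)); // false
    }
}
```

Writing the mask up front lets the deserializer know which fields to skip before it reads any values.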
Modifier and Type | Field and Description |
---|---|
protected FlinkKafkaPartitioner<Row> |
KafkaTableSink.partitioner |
protected SerializationSchema<Row> |
KafkaTableSink.serializationSchema |
Modifier and Type | Method and Description |
---|---|
protected abstract FlinkKafkaProducerBase<Row> |
KafkaTableSink.createKafkaProducer(String topic,
Properties properties,
SerializationSchema<Row> serializationSchema,
FlinkKafkaPartitioner<Row> partitioner)
Returns the version-specific Kafka producer.
|
protected FlinkKafkaProducerBase<Row> |
Kafka09JsonTableSink.createKafkaProducer(String topic,
Properties properties,
SerializationSchema<Row> serializationSchema,
FlinkKafkaPartitioner<Row> partitioner) |
protected FlinkKafkaProducerBase<Row> |
Kafka08JsonTableSink.createKafkaProducer(String topic,
Properties properties,
SerializationSchema<Row> serializationSchema,
FlinkKafkaPartitioner<Row> partitioner) |
protected abstract SerializationSchema<Row> |
KafkaTableSink.createSerializationSchema(String[] fieldNames)
Create serialization schema for converting table rows into bytes.
|
protected SerializationSchema<Row> |
KafkaJsonTableSink.createSerializationSchema(String[] fieldNames) |
DataStream<Row> |
KafkaTableSource.getDataStream(StreamExecutionEnvironment env)
NOTE: This method is for internal use only for defining a TableSource.
|
protected DeserializationSchema<Row> |
KafkaTableSource.getDeserializationSchema()
Returns the deserialization schema.
|
TypeInformation<Row> |
KafkaTableSink.getOutputType() |
TypeInformation<Row> |
KafkaTableSource.getReturnType() |
Modifier and Type | Method and Description |
---|---|
protected abstract FlinkKafkaProducerBase<Row> |
KafkaTableSink.createKafkaProducer(String topic,
Properties properties,
SerializationSchema<Row> serializationSchema,
FlinkKafkaPartitioner<Row> partitioner)
Returns the version-specific Kafka producer.
|
protected FlinkKafkaProducerBase<Row> |
Kafka09JsonTableSink.createKafkaProducer(String topic,
Properties properties,
SerializationSchema<Row> serializationSchema,
FlinkKafkaPartitioner<Row> partitioner) |
protected FlinkKafkaProducerBase<Row> |
Kafka08JsonTableSink.createKafkaProducer(String topic,
Properties properties,
SerializationSchema<Row> serializationSchema,
FlinkKafkaPartitioner<Row> partitioner) |
void |
KafkaTableSink.emitDataStream(DataStream<Row> dataStream) |
Constructor and Description |
---|
Kafka010JsonTableSource(String topic,
Properties properties,
TypeInformation<Row> typeInfo)
Creates a Kafka 0.10 JSON
StreamTableSource. |
Kafka010TableSource(String topic,
Properties properties,
DeserializationSchema<Row> deserializationSchema,
TypeInformation<Row> typeInfo)
Creates a Kafka 0.10
StreamTableSource. |
Kafka08JsonTableSink(String topic,
Properties properties,
FlinkKafkaPartitioner<Row> partitioner)
Creates a
KafkaTableSink for Kafka 0.8. |
Kafka08JsonTableSink(String topic,
Properties properties,
KafkaPartitioner<Row> partitioner)
Deprecated.
This is a deprecated constructor that does not correctly handle partitioning when
producing to multiple topics. Use
Kafka08JsonTableSink.Kafka08JsonTableSink(String, Properties, FlinkKafkaPartitioner) instead. |
Kafka08JsonTableSource(String topic,
Properties properties,
TypeInformation<Row> typeInfo)
Creates a Kafka 0.8 JSON
StreamTableSource. |
Kafka08TableSource(String topic,
Properties properties,
DeserializationSchema<Row> deserializationSchema,
TypeInformation<Row> typeInfo)
Creates a Kafka 0.8
StreamTableSource. |
Kafka09JsonTableSink(String topic,
Properties properties,
FlinkKafkaPartitioner<Row> partitioner)
Creates a
KafkaTableSink for Kafka 0.9. |
Kafka09JsonTableSink(String topic,
Properties properties,
KafkaPartitioner<Row> partitioner)
Deprecated.
This is a deprecated constructor that does not correctly handle partitioning when
producing to multiple topics. Use
Kafka09JsonTableSink.Kafka09JsonTableSink(String, Properties, FlinkKafkaPartitioner) instead. |
Kafka09JsonTableSource(String topic,
Properties properties,
TypeInformation<Row> typeInfo)
Creates a Kafka 0.9 JSON
StreamTableSource. |
Kafka09TableSource(String topic,
Properties properties,
DeserializationSchema<Row> deserializationSchema,
TypeInformation<Row> typeInfo)
Creates a Kafka 0.9
StreamTableSource. |
KafkaJsonTableSink(String topic,
Properties properties,
FlinkKafkaPartitioner<Row> partitioner)
Creates a KafkaJsonTableSink.
|
KafkaTableSink(String topic,
Properties properties,
FlinkKafkaPartitioner<Row> partitioner)
Creates a KafkaTableSink.
|
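The abstract `KafkaTableSink.createKafkaProducer(...)` entries above describe a factory method: the version-agnostic base sink defers producer construction to a version-specific subclass such as `Kafka08JsonTableSink` or `Kafka09JsonTableSink`. The sketch below shows that pattern with hypothetical stand-in classes (`MiniTableSink`, `Mini09Sink`, `MiniProducer`), not the real Flink/Kafka types.

```java
// Illustrative sketch of the version-specific factory pattern behind
// KafkaTableSink.createKafkaProducer. All class names here are hypothetical
// stand-ins for KafkaTableSink, Kafka09JsonTableSink and FlinkKafkaProducerBase.
public class KafkaFactorySketch {
    static class MiniProducer {
        final String topic, version;
        MiniProducer(String topic, String version) { this.topic = topic; this.version = version; }
    }

    /** Version-agnostic base: callers use emit(), subclasses pick the producer. */
    static abstract class MiniTableSink {
        abstract MiniProducer createProducer(String topic);
        MiniProducer emit(String topic) { return createProducer(topic); }
    }

    /** One concrete sink per Kafka version, as with Kafka08/Kafka09JsonTableSink. */
    static class Mini09Sink extends MiniTableSink {
        MiniProducer createProducer(String topic) { return new MiniProducer(topic, "0.9"); }
    }

    public static void main(String[] args) {
        MiniTableSink sink = new Mini09Sink();
        MiniProducer p = sink.emit("events");
        System.out.println(p.version);   // 0.9
        System.out.println(p.topic);     // events
    }
}
```

This is why the base class keeps the serialization schema and partitioner as `protected` fields: every version-specific subclass needs them when it builds its producer.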
Modifier and Type | Method and Description |
---|---|
Row |
JsonRowDeserializationSchema.deserialize(byte[] message) |
Row |
AvroRowDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
TypeInformation<Row> |
JsonRowDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean |
JsonRowDeserializationSchema.isEndOfStream(Row nextElement) |
byte[] |
JsonRowSerializationSchema.serialize(Row row) |
byte[] |
AvroRowSerializationSchema.serialize(Row row) |
Constructor and Description |
---|
JsonRowDeserializationSchema(TypeInformation<Row> typeInfo)
Creates a JSON deserialization schema for the given fields and types.
|
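`JsonRowDeserializationSchema(TypeInformation<Row> typeInfo)` above is constructed with the field names and types, then maps each `byte[]` message to a row. The toy version below shows only that byte-array-to-fields mapping for a flat, unescaped JSON object; the real schema uses a proper JSON parser, and `JsonRowSketch` and its regex approach are purely illustrative.

```java
// Illustrative sketch only: JsonRowDeserializationSchema maps JSON bytes to a Row using
// real JSON parsing. This toy version extracts fields from a *flat* JSON object with no
// nested or escaped values, just to show the byte[] -> fields mapping.
import java.nio.charset.StandardCharsets;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JsonRowSketch {
    static Object[] deserialize(byte[] message, String[] fieldNames) {
        String json = new String(message, StandardCharsets.UTF_8);
        Object[] row = new Object[fieldNames.length];
        for (int i = 0; i < fieldNames.length; i++) {
            // match "name":"string" or "name":integer (flat, unescaped values only)
            Pattern p = Pattern.compile(
                "\"" + Pattern.quote(fieldNames[i]) + "\"\\s*:\\s*(\"([^\"]*)\"|(-?\\d+))");
            Matcher m = p.matcher(json);
            if (m.find()) {
                row[i] = m.group(2) != null ? m.group(2) : Long.parseLong(m.group(3));
            }                                  // a missing field stays null
        }
        return row;
    }

    public static void main(String[] args) {
        byte[] msg = "{\"id\":42,\"name\":\"bob\"}".getBytes(StandardCharsets.UTF_8);
        Object[] row = deserialize(msg, new String[]{"id", "name"});
        System.out.println(row[0]);   // 42
        System.out.println(row[1]);   // bob
    }
}
```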
Modifier and Type | Method and Description |
---|---|
protected <OUT> GeneratedFunction<MapFunction<Row,OUT>,OUT> |
TableEnvironment.generateRowConverterFunction(TypeInformation<Row> inputTypeInfo,
RowSchema schema,
TypeInformation<OUT> requestedTypeInfo,
String functionName) |
static TypeInformation<Row> |
Types.ROW(scala.collection.Seq<TypeInformation<?>> types)
Generates row type information.
|
TypeInformation<Row> |
Types$.ROW(scala.collection.Seq<TypeInformation<?>> types)
Generates row type information.
|
static TypeInformation<Row> |
Types.ROW(String[] names,
TypeInformation<?>[] types)
Generates row type information.
|
TypeInformation<Row> |
Types$.ROW(String[] names,
TypeInformation<?>[] types)
Generates row type information.
|
static TypeInformation<Row> |
Types.ROW(TypeInformation<?>... types)
Generates row type information.
|
TypeInformation<Row> |
Types$.ROW(TypeInformation<?>... types)
Generates row type information.
|
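The `Types.ROW(...)` overloads above all produce row type information: a descriptor pairing field names with field types, generating default names when only types are given. The sketch below mimics that shape with a hypothetical `MiniRowType`; the real factory returns `TypeInformation<Row>` (a `RowTypeInfo`).

```java
// Illustrative sketch of what the Types.ROW(...) overloads produce: a row type
// descriptor pairing field names with field types. MiniRowType is a hypothetical
// stand-in for RowTypeInfo.
import java.util.Arrays;

public class RowTypeSketch {
    static final class MiniRowType {
        final String[] names;
        final Class<?>[] types;
        MiniRowType(String[] names, Class<?>[] types) {
            if (names.length != types.length)
                throw new IllegalArgumentException("names and types must have equal length");
            this.names = names; this.types = types;
        }
        int getArity() { return types.length; }
    }

    /** Overload without names: generate default names f0, f1, ... */
    static MiniRowType row(Class<?>... types) {
        String[] names = new String[types.length];
        for (int i = 0; i < types.length; i++) names[i] = "f" + i;
        return new MiniRowType(names, types);
    }

    /** Overload with explicit names, as in Types.ROW(String[], TypeInformation[]). */
    static MiniRowType row(String[] names, Class<?>[] types) {
        return new MiniRowType(names, types);
    }

    public static void main(String[] args) {
        MiniRowType t = row(Integer.class, String.class);
        System.out.println(Arrays.toString(t.names)); // [f0, f1]
        System.out.println(t.getArity());             // 2
        MiniRowType named = row(new String[]{"id", "name"},
                                new Class<?>[]{Long.class, String.class});
        System.out.println(named.names[0]);           // id
    }
}
```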
Modifier and Type | Method and Description |
---|---|
protected <OUT> GeneratedFunction<MapFunction<Row,OUT>,OUT> |
TableEnvironment.generateRowConverterFunction(TypeInformation<Row> inputTypeInfo,
RowSchema schema,
TypeInformation<OUT> requestedTypeInfo,
String functionName) |
Modifier and Type | Method and Description |
---|---|
TypeInformation<Row> |
FlinkTypeFactory$.toInternalRowTypeInfo(org.apache.calcite.rel.type.RelDataType logicalRowType)
Deprecated.
Use the RowSchema class instead because it handles both logical and physical rows.
|
static TypeInformation<Row> |
FlinkTypeFactory.toInternalRowTypeInfo(org.apache.calcite.rel.type.RelDataType logicalRowType)
Deprecated.
Use the RowSchema class instead because it handles both logical and physical rows.
|
Modifier and Type | Method and Description |
---|---|
<T extends Row> |
CodeGenerator.generateValuesInputFormat(String name,
scala.collection.Seq<String> records,
TypeInformation<T> returnType)
Generates a values input format that can be passed to Java compiler.
|
Modifier and Type | Method and Description |
---|---|
<F extends Function> |
CommonScan.generatedConversionFunction(TableConfig config,
Class<F> functionClass,
TypeInformation<Object> inputType,
TypeInformation<Row> expectedType,
String conversionOperatorName,
scala.collection.Seq<String> fieldNames,
scala.Option<int[]> inputFieldMapping) |
<T extends Function> |
CommonCalc.generateFunction(CodeGenerator generator,
String ruleDescription,
RowSchema inputSchema,
RowSchema returnSchema,
org.apache.calcite.rex.RexProgram calcProgram,
TableConfig config,
Class<T> functionClass) |
<T extends Function> |
CommonCorrelate.generateFunction(TableConfig config,
RowSchema inputSchema,
TypeInformation<Object> udtfTypeInfo,
RowSchema returnSchema,
org.apache.calcite.sql.SemiJoinType joinType,
org.apache.calcite.rex.RexCall rexCall,
scala.Option<int[]> pojoFieldMapping,
String ruleDescription,
Class<T> functionClass)
Generates the flat map function to run the user-defined table function.
|
Modifier and Type | Method and Description |
---|---|
<F extends Function> |
CommonScan.generatedConversionFunction(TableConfig config,
Class<F> functionClass,
TypeInformation<Object> inputType,
TypeInformation<Row> expectedType,
String conversionOperatorName,
scala.collection.Seq<String> fieldNames,
scala.Option<int[]> inputFieldMapping) |
Modifier and Type | Method and Description |
---|---|
static <T extends Function> |
FlinkLogicalCalc.generateFunction(CodeGenerator generator,
String ruleDescription,
RowSchema inputSchema,
RowSchema returnSchema,
org.apache.calcite.rex.RexProgram calcProgram,
TableConfig config,
Class<T> functionClass) |
Modifier and Type | Method and Description |
---|---|
TypeInformation<Row> |
RowSchema.physicalTypeInfo()
Returns a physical
TypeInformation of row with no logical fields (i.e. |
Modifier and Type | Method and Description |
---|---|
TypeInformation<Row> |
FlatMapRunner.getProducedType() |
TypeInformation<Row> |
FlatMapRunner.returnType() |
Modifier and Type | Method and Description |
---|---|
void |
CRowWrappingCollector.collect(Row record) |
void |
FlatMapRunner.flatMap(Row in,
Collector<Row> out) |
Modifier and Type | Method and Description |
---|---|
void |
FlatMapRunner.flatMap(Row in,
Collector<Row> out) |
Constructor and Description |
---|
FlatMapRunner(String name,
String code,
TypeInformation<Row> returnType) |
Modifier and Type | Method and Description |
---|---|
protected Row |
DataSetSlideWindowAggReduceGroupFunction.accumulators() |
protected Row |
DataSetTumbleTimeWindowAggReduceGroupFunction.accumulators() |
protected Row |
DataSetTumbleTimeWindowAggReduceGroupFunction.aggregateBuffer() |
Row |
DataSetSlideWindowAggReduceCombineFunction.combine(Iterable<Row> records) |
Row |
DataSetTumbleTimeWindowAggReduceCombineFunction.combine(Iterable<Row> records)
For sub-grouped intermediate aggregate Rows, merges all of them into the aggregate buffer.
|
Row |
DataSetSlideTimeWindowAggReduceGroupFunction.combine(Iterable<Row> records) |
Row |
AggregateAggFunction.createAccumulator() |
abstract Row |
GeneratedAggregations.createAccumulators()
Initializes the accumulators and saves them to an accumulators row.
|
abstract Row |
GeneratedAggregations.createOutputRow()
Creates an output row object with the correct arity.
|
Row |
AggregateAggFunction.getResult(Row accumulatorRow) |
Row |
CRowTimeWindowPropertyCollector.getRow(CRow record) |
Row |
RowTimeWindowPropertyCollector.getRow(Row record) |
abstract Row |
TimeWindowPropertyCollector.getRow(T record) |
protected Row |
DataSetSlideTimeWindowAggReduceGroupFunction.intermediateRow() |
Row |
DataSetWindowAggMapFunction.map(Row input) |
Row |
AggregateAggFunction.merge(Row aAccumulatorRow,
Row bAccumulatorRow) |
abstract Row |
GeneratedAggregations.mergeAccumulatorsPair(Row a,
Row b)
Merges two rows of accumulators into one row.
|
Row |
TimeWindowPropertyCollector.output() |
Row |
DistinctReduce.reduce(Row value1,
Row value2) |
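The `GeneratedAggregations` entries above describe an accumulator-row life cycle: `createAccumulators` builds the initial state, partial results are merged with `mergeAccumulatorsPair`, and a final value is extracted. The sketch below walks that cycle for a running average; the `AvgAgg` class and its `long[]` accumulator are hypothetical stand-ins for the generated Row-based accumulators.

```java
// Illustrative sketch of the accumulator-row life cycle in the listing
// (createAccumulators / mergeAccumulatorsPair / getResult), using a running average.
// AvgAgg and its long[] {sum, count} accumulator are hypothetical stand-ins.
public class AccumulatorSketch {
    static final class AvgAgg {
        /** One accumulator row: [sum, count]. */
        static long[] createAccumulators() { return new long[]{0L, 0L}; }

        static void accumulate(long[] acc, long value) {
            acc[0] += value; acc[1] += 1;
        }

        /** Merge two accumulator rows into one, as mergeAccumulatorsPair does. */
        static long[] merge(long[] a, long[] b) {
            return new long[]{a[0] + b[0], a[1] + b[1]};
        }

        static double getResult(long[] acc) {
            return acc[1] == 0 ? 0.0 : (double) acc[0] / acc[1];
        }
    }

    public static void main(String[] args) {
        long[] acc1 = AvgAgg.createAccumulators();   // e.g. one per parallel subtask
        AvgAgg.accumulate(acc1, 4);
        AvgAgg.accumulate(acc1, 6);
        long[] acc2 = AvgAgg.createAccumulators();
        AvgAgg.accumulate(acc2, 5);
        long[] merged = AvgAgg.merge(acc1, acc2);
        System.out.println(AvgAgg.getResult(merged)); // 5.0
    }
}
```

Keeping state as a mergeable accumulator row is what makes the pre-aggregation and combine functions in the next table possible: partial aggregates from different subtasks can be combined without reprocessing the input.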
Modifier and Type | Method and Description |
---|---|
static AllWindowFunction<Row,CRow,Window> |
AggregateUtil.createAggregationAllWindowFunction(LogicalWindow window,
int finalRowArity,
scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties)
Create an
AllWindowFunction for non-partitioned window aggregates. |
AllWindowFunction<Row,CRow,Window> |
AggregateUtil$.createAggregationAllWindowFunction(LogicalWindow window,
int finalRowArity,
scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties)
Create an
AllWindowFunction for non-partitioned window aggregates. |
static WindowFunction<Row,CRow,Tuple,Window> |
AggregateUtil.createAggregationGroupWindowFunction(LogicalWindow window,
int numGroupingKeys,
int numAggregates,
int finalRowArity,
scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties)
Create a
WindowFunction for group window aggregates. |
WindowFunction<Row,CRow,Tuple,Window> |
AggregateUtil$.createAggregationGroupWindowFunction(LogicalWindow window,
int numGroupingKeys,
int numAggregates,
int finalRowArity,
scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties)
Create a
WindowFunction for group window aggregates. |
static scala.Tuple3<scala.Option<DataSetPreAggFunction>,scala.Option<TypeInformation<Row>>,RichGroupReduceFunction<Row,Row>> |
AggregateUtil.createDataSetAggregateFunctions(CodeGenerator generator,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
org.apache.calcite.rel.type.RelDataType inputType,
scala.collection.Seq<TypeInformation<?>> inputFieldTypeInfo,
org.apache.calcite.rel.type.RelDataType outputType,
int[] groupings,
boolean inGroupingSet)
Create functions to compute a
DataSetAggregate. |
scala.Tuple3<scala.Option<DataSetPreAggFunction>,scala.Option<TypeInformation<Row>>,RichGroupReduceFunction<Row,Row>> |
AggregateUtil$.createDataSetAggregateFunctions(CodeGenerator generator,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
org.apache.calcite.rel.type.RelDataType inputType,
scala.collection.Seq<TypeInformation<?>> inputFieldTypeInfo,
org.apache.calcite.rel.type.RelDataType outputType,
int[] groupings,
boolean inGroupingSet)
Create functions to compute a
DataSetAggregate. |
static FlatMapFunction<Row,Row> |
AggregateUtil.createDataSetSlideWindowPrepareFlatMapFunction(LogicalWindow window,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
int[] groupings,
TypeInformation<Row> inputType,
boolean isParserCaseSensitive)
Create a
FlatMapFunction that prepares for
non-incremental aggregates of sliding windows (time-windows). |
FlatMapFunction<Row,Row> |
AggregateUtil$.createDataSetSlideWindowPrepareFlatMapFunction(LogicalWindow window,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
int[] groupings,
TypeInformation<Row> inputType,
boolean isParserCaseSensitive)
Create a
FlatMapFunction that prepares for
non-incremental aggregates of sliding windows (time-windows). |
static RichGroupReduceFunction<Row,Row> |
AggregateUtil.createDataSetSlideWindowPrepareGroupReduceFunction(CodeGenerator generator,
LogicalWindow window,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
int[] groupings,
org.apache.calcite.rel.type.RelDataType physicalInputRowType,
scala.collection.Seq<TypeInformation<?>> physicalInputTypes,
boolean isParserCaseSensitive)
Create a
GroupReduceFunction that prepares for
partial aggregates of sliding windows (time and count-windows). |
RichGroupReduceFunction<Row,Row> |
AggregateUtil$.createDataSetSlideWindowPrepareGroupReduceFunction(CodeGenerator generator,
LogicalWindow window,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
int[] groupings,
org.apache.calcite.rel.type.RelDataType physicalInputRowType,
scala.collection.Seq<TypeInformation<?>> physicalInputTypes,
boolean isParserCaseSensitive)
Create a
GroupReduceFunction that prepares for
partial aggregates of sliding windows (time and count-windows). |
static GroupCombineFunction<Row,Row> |
AggregateUtil.createDataSetWindowAggregationCombineFunction(CodeGenerator generator,
LogicalWindow window,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
org.apache.calcite.rel.type.RelDataType physicalInputRowType,
scala.collection.Seq<TypeInformation<?>> physicalInputTypes,
int[] groupings)
Create a
GroupCombineFunction that performs pre-aggregation
for aggregates. |
GroupCombineFunction<Row,Row> |
AggregateUtil$.createDataSetWindowAggregationCombineFunction(CodeGenerator generator,
LogicalWindow window,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
org.apache.calcite.rel.type.RelDataType physicalInputRowType,
scala.collection.Seq<TypeInformation<?>> physicalInputTypes,
int[] groupings)
Create a
GroupCombineFunction that performs pre-aggregation
for aggregates. |
static RichGroupReduceFunction<Row,Row> |
AggregateUtil.createDataSetWindowAggregationGroupReduceFunction(CodeGenerator generator,
LogicalWindow window,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
org.apache.calcite.rel.type.RelDataType physicalInputRowType,
scala.collection.Seq<TypeInformation<?>> physicalInputTypes,
org.apache.calcite.rel.type.RelDataType outputType,
int[] groupings,
scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties,
boolean isInputCombined)
Create a
GroupReduceFunction to compute window
aggregates on batch tables. |
RichGroupReduceFunction<Row,Row> |
AggregateUtil$.createDataSetWindowAggregationGroupReduceFunction(CodeGenerator generator,
LogicalWindow window,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
org.apache.calcite.rel.type.RelDataType physicalInputRowType,
scala.collection.Seq<TypeInformation<?>> physicalInputTypes,
org.apache.calcite.rel.type.RelDataType outputType,
int[] groupings,
scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties,
boolean isInputCombined)
Create a
GroupReduceFunction to compute window
aggregates on batch tables. |
static MapPartitionFunction<Row,Row> |
AggregateUtil.createDataSetWindowAggregationMapPartitionFunction(CodeGenerator generator,
LogicalWindow window,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
org.apache.calcite.rel.type.RelDataType physicalInputRowType,
scala.collection.Seq<TypeInformation<?>> physicalInputTypes,
int[] groupings)
Create a
MapPartitionFunction that performs aggregation
for aggregates. |
MapPartitionFunction<Row,Row> |
AggregateUtil$.createDataSetWindowAggregationMapPartitionFunction(CodeGenerator generator,
LogicalWindow window,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
org.apache.calcite.rel.type.RelDataType physicalInputRowType,
scala.collection.Seq<TypeInformation<?>> physicalInputTypes,
int[] groupings)
Create a
MapPartitionFunction that performs aggregation
for aggregates. |
static MapFunction<Row,Row> |
AggregateUtil.createDataSetWindowPrepareMapFunction(CodeGenerator generator,
LogicalWindow window,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
int[] groupings,
org.apache.calcite.rel.type.RelDataType inputType,
scala.collection.Seq<TypeInformation<?>> inputFieldTypeInfo,
boolean isParserCaseSensitive)
Create a
MapFunction that prepares for aggregates. |
MapFunction<Row,Row> |
AggregateUtil$.createDataSetWindowPrepareMapFunction(CodeGenerator generator,
LogicalWindow window,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
int[] groupings,
org.apache.calcite.rel.type.RelDataType inputType,
scala.collection.Seq<TypeInformation<?>> inputFieldTypeInfo,
boolean isParserCaseSensitive)
Create a
MapFunction that prepares for aggregates. |
static scala.Tuple3<AggregateFunction<CRow,Row,Row>,RowTypeInfo,RowTypeInfo> |
AggregateUtil.createDataStreamAggregateFunction(CodeGenerator generator,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
org.apache.calcite.rel.type.RelDataType inputType,
scala.collection.Seq<TypeInformation<?>> inputFieldTypeInfo,
org.apache.calcite.rel.type.RelDataType outputType,
int[] groupingKeys,
boolean needMerge) |
scala.Tuple3<AggregateFunction<CRow,Row,Row>,RowTypeInfo,RowTypeInfo> |
AggregateUtil$.createDataStreamAggregateFunction(CodeGenerator generator,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
org.apache.calcite.rel.type.RelDataType inputType,
scala.collection.Seq<TypeInformation<?>> inputFieldTypeInfo,
org.apache.calcite.rel.type.RelDataType outputType,
int[] groupingKeys,
boolean needMerge) |
TypeInformation<Row> |
DataSetWindowAggMapFunction.getProducedType() |
TypeInformation<Row> |
DataSetSessionWindowAggregatePreProcessor.getProducedType() |
TypeInformation<Row> |
DataSetSlideTimeWindowAggFlatMapFunction.getProducedType() |
TypeInformation<Row> |
DataSetSlideTimeWindowAggReduceGroupFunction.getProducedType() |
TypeInformation<Row> |
DataSetSessionWindowAggregatePreProcessor.intermediateRowType() |
Modifier and Type | Method and Description |
---|---|
abstract void |
GeneratedAggregations.accumulate(Row accumulators,
Row input)
Accumulates the input values to the accumulators.
|
void |
AggregateAggFunction.add(CRow value,
Row accumulatorRow) |
void |
DataSetSlideTimeWindowAggFlatMapFunction.flatMap(Row record,
Collector<Row> out) |
Row |
AggregateAggFunction.getResult(Row accumulatorRow) |
Row |
RowTimeWindowPropertyCollector.getRow(Row record) |
Row |
DataSetWindowAggMapFunction.map(Row input) |
Row |
AggregateAggFunction.merge(Row aAccumulatorRow,
Row bAccumulatorRow) |
abstract Row |
GeneratedAggregations.mergeAccumulatorsPair(Row a,
Row b)
Merges two rows of accumulators into one row.
|
void |
RowTimeUnboundedRangeOver.processElementsWithSameTimestamp(List<Row> curRowList,
Row lastAccumulator,
Collector<CRow> out) |
void |
RowTimeUnboundedRowsOver.processElementsWithSameTimestamp(List<Row> curRowList,
Row lastAccumulator,
Collector<CRow> out) |
abstract void |
RowTimeUnboundedOver.processElementsWithSameTimestamp(List<Row> curRowList,
Row lastAccumulator,
Collector<CRow> out)
Processes elements with the same timestamp; the mechanism differs between ROWS and RANGE windows.
|
Row |
DistinctReduce.reduce(Row value1,
Row value2) |
abstract void |
GeneratedAggregations.resetAccumulator(Row accumulators)
Resets all the accumulators.
|
abstract void |
GeneratedAggregations.retract(Row accumulators,
Row input)
Retracts the input values from the accumulators.
|
abstract void |
GeneratedAggregations.setAggregationResults(Row accumulators,
Row output)
Sets the results of the aggregations (partial or final) to the output row.
|
abstract void |
GeneratedAggregations.setConstantFlags(Row output)
Sets constant flags (boolean fields) to an output row.
|
abstract void |
GeneratedAggregations.setForwardedFields(Row input,
Row output)
Copies forwarded fields, such as grouping keys, from input row to output row.
|
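The `GeneratedAggregations` methods above (`accumulate`, `retract`, `mergeAccumulatorsPair`, `setAggregationResults`, `resetAccumulator`) describe the lifecycle of a code-generated accumulator class. As a rough, hand-written sketch of that lifecycle for a single SUM aggregate: `SumAggregations` is a hypothetical stand-in (the real class is emitted by `CodeGenerator` against `Row`; here `Object[]` stands in for `Row` so the snippet compiles without Flink on the classpath):

```java
// Hypothetical hand-written analogue of a generated aggregations class for
// one SUM aggregate. Object[] stands in for org.apache.flink.types.Row; the
// single-field accumulator layout is an assumption for illustration only.
public class SumAggregations {

    // accumulate: add the input value to the accumulator.
    public void accumulate(Object[] accumulators, Object[] input) {
        accumulators[0] = (Long) accumulators[0] + (Long) input[0];
    }

    // retract: remove a previously accumulated value (for retraction streams).
    public void retract(Object[] accumulators, Object[] input) {
        accumulators[0] = (Long) accumulators[0] - (Long) input[0];
    }

    // mergeAccumulatorsPair: merge two accumulator rows into one row.
    public Object[] mergeAccumulatorsPair(Object[] a, Object[] b) {
        return new Object[] { (Long) a[0] + (Long) b[0] };
    }

    // setAggregationResults: write the (partial or final) result to the output row.
    public void setAggregationResults(Object[] accumulators, Object[] output) {
        output[0] = accumulators[0];
    }

    // resetAccumulator: reset the accumulator to its initial (empty) state.
    public void resetAccumulator(Object[] accumulators) {
        accumulators[0] = 0L;
    }

    public static void main(String[] args) {
        SumAggregations agg = new SumAggregations();
        Object[] acc = { 0L };
        agg.accumulate(acc, new Object[] { 5L });
        agg.accumulate(acc, new Object[] { 7L });
        agg.retract(acc, new Object[] { 5L });   // 5 + 7 - 5
        Object[] out = new Object[1];
        agg.setAggregationResults(acc, out);
        System.out.println(out[0]); // 7
    }
}
```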
Modifier and Type | Method and Description |
---|---|
void |
IncrementalAggregateAllTimeWindowFunction.apply(TimeWindow window,
Iterable<Row> records,
Collector<CRow> out) |
void |
IncrementalAggregateTimeWindowFunction.apply(Tuple key,
TimeWindow window,
Iterable<Row> records,
Collector<CRow> out) |
void |
IncrementalAggregateWindowFunction.apply(Tuple key,
W window,
Iterable<Row> records,
Collector<CRow> out)
Calculates the aggregated values from the aggregate buffer and sets them into the output
Row, based on the mapping between intermediate aggregate data and output data.
|
void |
IncrementalAggregateAllWindowFunction.apply(W window,
Iterable<Row> records,
Collector<CRow> out)
Calculates the aggregated values from the aggregate buffer and sets them into the output
Row, based on the mapping between intermediate aggregate data and output data.
|
Row |
DataSetSlideWindowAggReduceCombineFunction.combine(Iterable<Row> records) |
Row |
DataSetTumbleTimeWindowAggReduceCombineFunction.combine(Iterable<Row> records)
For sub-grouped intermediate aggregate Rows, merges all of them into an aggregate buffer.
|
Row |
DataSetSlideTimeWindowAggReduceGroupFunction.combine(Iterable<Row> records) |
void |
DataSetPreAggFunction.combine(Iterable<Row> values,
Collector<Row> out) |
void |
DataSetSessionWindowAggregatePreProcessor.combine(Iterable<Row> records,
Collector<Row> out)
For sub-grouped intermediate aggregate Rows, divides windows based on the rowtime
(current rowtime - previous rowtime > gap), and then merges the data within each window
into an aggregate buffer.
|
static ProcessFunction<CRow,CRow> |
AggregateUtil.createBoundedOverProcessFunction(CodeGenerator generator,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
org.apache.calcite.rel.type.RelDataType inputType,
TypeInformation<Row> inputTypeInfo,
scala.collection.Seq<TypeInformation<?>> inputFieldTypeInfo,
long precedingOffset,
StreamQueryConfig queryConfig,
boolean isRowsClause,
boolean isRowTimeType)
Create a
ProcessFunction for a ROWS-clause
bounded OVER window to evaluate the final aggregate value. |
ProcessFunction<CRow,CRow> |
AggregateUtil$.createBoundedOverProcessFunction(CodeGenerator generator,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
org.apache.calcite.rel.type.RelDataType inputType,
TypeInformation<Row> inputTypeInfo,
scala.collection.Seq<TypeInformation<?>> inputFieldTypeInfo,
long precedingOffset,
StreamQueryConfig queryConfig,
boolean isRowsClause,
boolean isRowTimeType)
Create a
ProcessFunction for a ROWS-clause
bounded OVER window to evaluate the final aggregate value. |
static FlatMapFunction<Row,Row> |
AggregateUtil.createDataSetSlideWindowPrepareFlatMapFunction(LogicalWindow window,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
int[] groupings,
TypeInformation<Row> inputType,
boolean isParserCaseSensitive)
Create a
FlatMapFunction that prepares for
non-incremental aggregates of sliding windows (time-windows). |
FlatMapFunction<Row,Row> |
AggregateUtil$.createDataSetSlideWindowPrepareFlatMapFunction(LogicalWindow window,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
int[] groupings,
TypeInformation<Row> inputType,
boolean isParserCaseSensitive)
Create a
FlatMapFunction that prepares for
non-incremental aggregates of sliding windows (time-windows). |
static ProcessFunction<CRow,CRow> |
AggregateUtil.createUnboundedOverProcessFunction(CodeGenerator generator,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
org.apache.calcite.rel.type.RelDataType inputType,
TypeInformation<Row> inputTypeInfo,
scala.collection.Seq<TypeInformation<?>> inputFieldTypeInfo,
StreamQueryConfig queryConfig,
boolean isRowTimeType,
boolean isPartitioned,
boolean isRowsClause)
Create a
ProcessFunction for an unbounded OVER
window to evaluate the final aggregate value. |
ProcessFunction<CRow,CRow> |
AggregateUtil$.createUnboundedOverProcessFunction(CodeGenerator generator,
scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates,
org.apache.calcite.rel.type.RelDataType inputType,
TypeInformation<Row> inputTypeInfo,
scala.collection.Seq<TypeInformation<?>> inputFieldTypeInfo,
StreamQueryConfig queryConfig,
boolean isRowTimeType,
boolean isPartitioned,
boolean isRowsClause)
Create a
ProcessFunction for an unbounded OVER
window to evaluate the final aggregate value. |
void |
DataSetSessionWindowAggregatePreProcessor.doCollect(Collector<Row> out,
long windowStart,
long windowEnd)
Emit the merged data of the current window.
|
void |
DataSetSessionWindowAggReduceGroupFunction.doEvaluateAndCollect(Collector<Row> out,
long windowStart,
long windowEnd)
Evaluate and emit the data of the current window.
|
void |
DataSetSlideTimeWindowAggFlatMapFunction.flatMap(Row record,
Collector<Row> out) |
void |
DataSetPreAggFunction.mapPartition(Iterable<Row> values,
Collector<Row> out) |
void |
DataSetSessionWindowAggregatePreProcessor.mapPartition(Iterable<Row> records,
Collector<Row> out)
Divides windows based on the rowtime
(current rowtime - previous rowtime > gap), and then merges the data within each window
into an aggregate buffer.
|
void |
RowTimeUnboundedRangeOver.processElementsWithSameTimestamp(List<Row> curRowList,
Row lastAccumulator,
Collector<CRow> out) |
void |
RowTimeUnboundedRowsOver.processElementsWithSameTimestamp(List<Row> curRowList,
Row lastAccumulator,
Collector<CRow> out) |
abstract void |
RowTimeUnboundedOver.processElementsWithSameTimestamp(List<Row> curRowList,
Row lastAccumulator,
Collector<CRow> out)
Processes elements with the same timestamp; the mechanism differs between ROWS and RANGE windows.
|
void |
DataSetSlideWindowAggReduceGroupFunction.reduce(Iterable<Row> records,
Collector<Row> out) |
void |
DataSetTumbleCountWindowAggReduceGroupFunction.reduce(Iterable<Row> records,
Collector<Row> out) |
void |
DataSetAggFunction.reduce(Iterable<Row> records,
Collector<Row> out) |
void |
DataSetFinalAggFunction.reduce(Iterable<Row> records,
Collector<Row> out) |
void |
DataSetTumbleTimeWindowAggReduceGroupFunction.reduce(Iterable<Row> records,
Collector<Row> out) |
void |
DataSetSlideTimeWindowAggReduceGroupFunction.reduce(Iterable<Row> records,
Collector<Row> out) |
void |
DataSetSessionWindowAggReduceGroupFunction.reduce(Iterable<Row> records,
Collector<Row> out)
For grouped intermediate aggregate Rows, divides windows according to the window-start
and window-end, merges the data within each window into an aggregate buffer, calculates
the aggregated values from the aggregate buffer, and then sets them into the output
Row, based on the mapping between intermediate aggregate data and output data.
|
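Several of the session-window functions above (`DataSetSessionWindowAggregatePreProcessor`, `DataSetSessionWindowAggReduceGroupFunction`) split incoming rows into windows whenever the gap between consecutive rowtimes is exceeded. A minimal sketch of that split rule, not the Flink operators themselves (`SessionSplit` is a hypothetical helper; sessions are represented simply as lists of rowtimes):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the session-window split rule described above: start a new
// session window whenever currentRowtime - previousRowtime > gap.
public class SessionSplit {

    public static List<List<Long>> split(long[] rowtimes, long gap) {
        List<List<Long>> windows = new ArrayList<>();
        List<Long> current = new ArrayList<>();
        for (int i = 0; i < rowtimes.length; i++) {
            if (i > 0 && rowtimes[i] - rowtimes[i - 1] > gap) {
                windows.add(current);           // close the previous session
                current = new ArrayList<>();    // and open a new one
            }
            current.add(rowtimes[i]);
        }
        if (!current.isEmpty()) {
            windows.add(current);
        }
        return windows;
    }

    public static void main(String[] args) {
        // gap = 10: rowtimes 1, 5, 12 form one session; 30, 35 form another
        System.out.println(split(new long[] { 1, 5, 12, 30, 35 }, 10));
        // prints [[1, 5, 12], [30, 35]]
    }
}
```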
Modifier and Type | Method and Description |
---|---|
Row |
ValuesInputFormat.nextRecord(Row reuse) |
Modifier and Type | Method and Description |
---|---|
TypeInformation<Row> |
ValuesInputFormat.getProducedType() |
TypeInformation<Row> |
ValuesInputFormat.returnType() |
Modifier and Type | Method and Description |
---|---|
Row |
ValuesInputFormat.nextRecord(Row reuse) |
Constructor and Description |
---|
ValuesInputFormat(String name,
String code,
TypeInformation<Row> returnType) |
Modifier and Type | Method and Description |
---|---|
Row |
CRow.row() |
Modifier and Type | Method and Description |
---|---|
TypeComparator<Row> |
CRowComparator.rowComp() |
TypeSerializer<Row> |
CRowSerializer.rowSerializer() |
Modifier and Type | Method and Description |
---|---|
CRow |
CRow$.apply(Row row,
boolean change) |
static CRow |
CRow.apply(Row row,
boolean change) |
Modifier and Type | Method and Description |
---|---|
static CRowTypeInfo |
CRowTypeInfo.apply(TypeInformation<Row> rowType) |
CRowTypeInfo |
CRowTypeInfo$.apply(TypeInformation<Row> rowType) |
Constructor and Description |
---|
CRow(Row row,
boolean change) |
Constructor and Description |
---|
CRowComparator(TypeComparator<Row> rowComp) |
CRowSerializer(TypeSerializer<Row> rowSerializer) |
CRowSerializerConfigSnapshot(TypeSerializer<Row> rowSerializer) |
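As the `CRow(Row row, boolean change)` constructor and `CRow.row()` accessor above suggest, CRow pairs a payload Row with a change flag used for retraction streams. A minimal stand-in sketch (`SimpleCRow` is hypothetical, and the flag semantics, true for an accumulate message and false for a retraction, are an assumption based on common Flink usage; `Object[]` stands in for `Row`):

```java
// Hypothetical stand-in for CRow: a payload row plus a change flag.
// Object[] stands in for org.apache.flink.types.Row; the flag semantics
// (true = accumulate, false = retract) are an assumption for illustration.
public class SimpleCRow {
    private final Object[] row;
    private final boolean change;

    // Mirrors the CRow(Row row, boolean change) constructor above.
    public SimpleCRow(Object[] row, boolean change) {
        this.row = row;
        this.change = change;
    }

    public Object[] row() { return row; }       // mirrors CRow.row()
    public boolean change() { return change; }

    public static void main(String[] args) {
        SimpleCRow insert  = new SimpleCRow(new Object[] { "Alice", 42 }, true);
        SimpleCRow retract = new SimpleCRow(new Object[] { "Alice", 42 }, false);
        System.out.println(insert.change());  // true
        System.out.println(retract.change()); // false
    }
}
```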
Modifier and Type | Method and Description |
---|---|
protected TableSinkBase<Row> |
CsvTableSink.copy() |
TypeInformation<Row> |
CsvTableSink.getOutputType() |
Modifier and Type | Method and Description |
---|---|
String |
CsvFormatter.map(Row row) |
Modifier and Type | Method and Description |
---|---|
void |
CsvTableSink.emitDataSet(DataSet<Row> dataSet) |
void |
CsvTableSink.emitDataStream(DataStream<Row> dataStream) |
Modifier and Type | Method and Description |
---|---|
DataSet<Row> |
CsvTableSource.getDataSet(ExecutionEnvironment execEnv)
Returns the data of the table as a
DataSet of Row. |
DataStream<Row> |
CsvTableSource.getDataStream(StreamExecutionEnvironment streamExecEnv)
Returns the data of the table as a
DataStream of Row. |
Modifier and Type | Method and Description |
---|---|
static Row |
Row.copy(Row row)
Creates a new Row which is copied from another row.
|
static Row |
Row.of(Object... values)
Creates a new Row and assigns the given values to the Row's fields.
|
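`Row.of` and `Row.copy` above define the basic value-construction contract of Row. As a rough illustration of that contract, here is a simplified stand-in class (`SimpleRow` is hypothetical, used instead of the real `org.apache.flink.types.Row` so the snippet compiles without Flink on the classpath):

```java
// Simplified stand-in for org.apache.flink.types.Row. Method names follow
// the Javadoc above (of(...) assigns the given values to the row's fields,
// copy(...) creates a new row copied from another), but this is not the
// Flink class itself.
public class SimpleRow {
    private final Object[] fields;

    private SimpleRow(Object[] fields) {
        this.fields = fields;
    }

    // Mirrors Row.of(Object... values).
    public static SimpleRow of(Object... values) {
        return new SimpleRow(values.clone());
    }

    // Mirrors Row.copy(Row row): the copy is independent of the source row.
    public static SimpleRow copy(SimpleRow row) {
        return new SimpleRow(row.fields.clone());
    }

    public Object getField(int pos)            { return fields[pos]; }
    public void setField(int pos, Object v)    { fields[pos] = v; }
    public int getArity()                      { return fields.length; }

    public static void main(String[] args) {
        SimpleRow original  = SimpleRow.of("Alice", 42);
        SimpleRow duplicate = SimpleRow.copy(original);
        duplicate.setField(1, 99);                 // mutating the copy...
        System.out.println(original.getField(1));  // ...leaves the original at 42
    }
}
```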
Modifier and Type | Method and Description |
---|---|
static Row |
Row.copy(Row row)
Creates a new Row which is copied from another row.
|
Copyright © 2014–2018 The Apache Software Foundation. All rights reserved.