Modifier and Type | Method and Description
---|---
`Row` | `JDBCInputFormat.nextRecord(Row row)` Stores the next `resultSet` row in a tuple.
`void` | `JDBCOutputFormat.writeRecord(Row row)` Adds a record to the prepared statement.
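Both methods above operate on a caller-supplied `Row` rather than allocating a new record per call. A minimal, self-contained sketch of that object-reuse pattern (`SimpleRow`, `FakeFormat`, and `ReuseLoop` are stand-ins invented for illustration; they are not Flink classes):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Stand-in for a fixed-arity Row record.
class SimpleRow {
    private final Object[] fields;
    SimpleRow(int arity) { fields = new Object[arity]; }
    void setField(int i, Object v) { fields[i] = v; }
    Object productElement(int i) { return fields[i]; }
}

// Stand-in input format that fills the reused row from an in-memory "result set".
class FakeFormat {
    private final Iterator<Object[]> resultSet;
    FakeFormat(List<Object[]> rows) { this.resultSet = rows.iterator(); }
    boolean reachedEnd() { return !resultSet.hasNext(); }

    // Mirrors the nextRecord(Row) contract: stores the next result-set row
    // into the row passed in by the caller and returns that same row.
    SimpleRow nextRecord(SimpleRow reuse) {
        Object[] values = resultSet.next();
        for (int i = 0; i < values.length; i++) reuse.setField(i, values[i]);
        return reuse;
    }
}

public class ReuseLoop {
    public static int countRecords(FakeFormat format) {
        SimpleRow reuse = new SimpleRow(2); // one row object, reused per record
        int n = 0;
        while (!format.reachedEnd()) {
            format.nextRecord(reuse);
            n++;
        }
        return n;
    }
}
```

The point of the pattern is that a tight read loop produces no per-record garbage: the same row object is overwritten on every call.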
Modifier and Type | Method and Description
---|---
`Row` | `AggregateReduceCombineFunction.combine(Iterable<Row> records)` For sub-grouped intermediate aggregate Rows, merges all of them into the aggregate buffer.
Modifier and Type | Method and Description
---|---
`static scala.Tuple2<MapFunction<Object,Row>,GroupReduceFunction<Row,Row>>` | `AggregateUtil.createOperatorFunctionsForAggregates(scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings, TableConfig config)` Create Flink operator functions for aggregates.
`scala.Tuple2<MapFunction<Object,Row>,GroupReduceFunction<Row,Row>>` | `AggregateUtil$.createOperatorFunctionsForAggregates(scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings, TableConfig config)` Create Flink operator functions for aggregates.
Modifier and Type | Method and Description
---|---
`abstract Object` | `FloatingAvgAggregate.doEvaluate(Row buffer)`
`Object` | `ShortAvgAggregate.doEvaluate(Row buffer)`
`Object` | `FloatAvgAggregate.doEvaluate(Row buffer)`
`Object` | `DoubleAvgAggregate.doEvaluate(Row buffer)`
`Object` | `LongAvgAggregate.doEvaluate(Row buffer)`
`Object` | `IntAvgAggregate.doEvaluate(Row buffer)`
`Object` | `ByteAvgAggregate.doEvaluate(Row buffer)`
`abstract Object` | `IntegralAvgAggregate.doEvaluate(Row buffer)`
`abstract void` | `FloatingAvgAggregate.doPrepare(Object value, Row partial)`
`void` | `ShortAvgAggregate.doPrepare(Object value, Row partial)`
`void` | `FloatAvgAggregate.doPrepare(Object value, Row partial)`
`void` | `DoubleAvgAggregate.doPrepare(Object value, Row partial)`
`void` | `LongAvgAggregate.doPrepare(Object value, Row partial)`
`void` | `IntAvgAggregate.doPrepare(Object value, Row partial)`
`void` | `ByteAvgAggregate.doPrepare(Object value, Row partial)`
`abstract void` | `IntegralAvgAggregate.doPrepare(Object value, Row partial)`
`T` | `FloatingAvgAggregate.evaluate(Row buffer)`
`long` | `CountAggregate.evaluate(Row buffer)`
`T` | `MaxAggregate.evaluate(Row buffer)` Return the final aggregated result based on aggregate buffer.
`BigDecimal` | `DecimalAvgAggregate.evaluate(Row buffer)`
`BigDecimal` | `DecimalMinAggregate.evaluate(Row buffer)`
`BigDecimal` | `DecimalSumAggregate.evaluate(Row buffer)`
`T` | `SumAggregate.evaluate(Row buffer)`
`T` | `MinAggregate.evaluate(Row buffer)` Return the final aggregated result based on aggregate buffer.
`T` | `Aggregate.evaluate(Row buffer)` Calculate the final aggregated result based on aggregate buffer.
`T` | `IntegralAvgAggregate.evaluate(Row buffer)`
`BigDecimal` | `DecimalMaxAggregate.evaluate(Row buffer)`
`void` | `FloatingAvgAggregate.initiate(Row partial)`
`void` | `CountAggregate.initiate(Row intermediate)`
`void` | `MaxAggregate.initiate(Row intermediate)` Initiate the intermediate aggregate value in Row.
`void` | `DecimalAvgAggregate.initiate(Row partial)`
`void` | `DecimalMinAggregate.initiate(Row intermediate)`
`void` | `DecimalSumAggregate.initiate(Row partial)`
`void` | `SumAggregate.initiate(Row partial)`
`void` | `MinAggregate.initiate(Row intermediate)` Initiate the intermediate aggregate value in Row.
`void` | `LongAvgAggregate.initiate(Row partial)`
`void` | `Aggregate.initiate(Row intermediate)` Initiate the intermediate aggregate value in Row.
`void` | `IntegralAvgAggregate.initiate(Row partial)`
`void` | `DecimalMaxAggregate.initiate(Row intermediate)`
`void` | `FloatingAvgAggregate.merge(Row partial, Row buffer)`
`void` | `CountAggregate.merge(Row intermediate, Row buffer)`
`void` | `MaxAggregate.merge(Row intermediate, Row buffer)` Accessed in CombineFunction and GroupReduceFunction, merge partial aggregate result into aggregate buffer.
`void` | `DecimalAvgAggregate.merge(Row partial, Row buffer)`
`void` | `DecimalMinAggregate.merge(Row partial, Row buffer)`
`void` | `DecimalSumAggregate.merge(Row partial1, Row buffer)`
`void` | `SumAggregate.merge(Row partial1, Row buffer)`
`void` | `MinAggregate.merge(Row partial, Row buffer)` Accessed in CombineFunction and GroupReduceFunction, merge partial aggregate result into aggregate buffer.
`void` | `LongAvgAggregate.merge(Row partial, Row buffer)`
`void` | `Aggregate.merge(Row intermediate, Row buffer)` Merge intermediate aggregate data into aggregate buffer.
`void` | `IntegralAvgAggregate.merge(Row partial, Row buffer)`
`void` | `DecimalMaxAggregate.merge(Row partial, Row buffer)`
`void` | `FloatingAvgAggregate.prepare(Object value, Row partial)`
`void` | `CountAggregate.prepare(Object value, Row intermediate)`
`void` | `MaxAggregate.prepare(Object value, Row intermediate)` Accessed in MapFunction, prepare the input of partial aggregate.
`void` | `DecimalAvgAggregate.prepare(Object value, Row partial)`
`void` | `DecimalMinAggregate.prepare(Object value, Row partial)`
`void` | `DecimalSumAggregate.prepare(Object value, Row partial)`
`void` | `SumAggregate.prepare(Object value, Row partial)`
`void` | `MinAggregate.prepare(Object value, Row partial)` Accessed in MapFunction, prepare the input of partial aggregate.
`void` | `LongAvgAggregate.prepare(Object value, Row partial)`
`void` | `Aggregate.prepare(Object value, Row intermediate)` Transform the aggregate field value into intermediate aggregate data.
`void` | `IntegralAvgAggregate.prepare(Object value, Row partial)`
`void` | `DecimalMaxAggregate.prepare(Object value, Row partial)`
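The methods above form a four-step contract: `initiate` resets the intermediate buffer, `prepare` transforms one input value into intermediate aggregate data, `merge` folds a partial result into the buffer, and `evaluate` produces the final value. A minimal sketch of that lifecycle, using a plain `Object[]` as a stand-in for the intermediate `Row` (the `IntSumSketch` class is invented for illustration and is not a Flink class):

```java
// Sketch of the initiate/prepare/merge/evaluate contract for a sum aggregate.
class IntSumSketch {
    private final int index; // position of this aggregate's field in the buffer
    IntSumSketch(int index) { this.index = index; }

    // initiate: reset the intermediate aggregate value in the buffer.
    void initiate(Object[] buffer) { buffer[index] = 0; }

    // prepare: transform one input value into intermediate aggregate data.
    void prepare(Object value, Object[] partial) { partial[index] = (Integer) value; }

    // merge: fold a partial result into the aggregate buffer.
    void merge(Object[] partial, Object[] buffer) {
        buffer[index] = (Integer) buffer[index] + (Integer) partial[index];
    }

    // evaluate: produce the final aggregated result from the buffer.
    int evaluate(Object[] buffer) { return (Integer) buffer[index]; }
}

public class AggregateLifecycle {
    public static int sum(int[] input) {
        IntSumSketch agg = new IntSumSketch(0);
        Object[] buffer = new Object[1];
        agg.initiate(buffer);
        Object[] partial = new Object[1];
        for (int v : input) {
            agg.prepare(v, partial);    // map side
            agg.merge(partial, buffer); // combine/reduce side
        }
        return agg.evaluate(buffer);
    }
}
```

Keying each aggregate by an index into a shared buffer row is what lets several aggregates (e.g. a sum and a count for an average) share one intermediate `Row`.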
Modifier and Type | Method and Description
---|---
`Row` | `AggregateReduceCombineFunction.combine(Iterable<Row> records)` For sub-grouped intermediate aggregate Rows, merges all of them into the aggregate buffer.
`void` | `AggregateReduceGroupFunction.reduce(Iterable<Row> records, Collector<Row> out)` For grouped intermediate aggregate Rows, merge all of them into aggregate buffer, calculate aggregated values output by aggregate buffer, and set them into output Row based on the mapping relation between intermediate aggregate data and output data.
`void` | `AggregateReduceCombineFunction.reduce(Iterable<Row> records, Collector<Row> out)` For grouped intermediate aggregate Rows, merge all of them into aggregate buffer, calculate aggregated values output by aggregate buffer, and set them into output Row based on the mapping relation between intermediate aggregate Row and output Row.
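The `combine`/`reduce` pair above describes a two-phase aggregation: `combine` pre-aggregates each sub-group of intermediate rows into one partial buffer, then `reduce` merges the partials and emits the final result. A self-contained sketch of that flow, modeling the intermediate row as an `int[]` holding a running sum (`CombineReduceSketch` is an invented stand-in, not the Flink implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class CombineReduceSketch {
    // combine: merge one sub-group of intermediate buffers into a single buffer.
    static int[] combine(Iterable<int[]> records) {
        int[] buffer = {0};
        for (int[] r : records) buffer[0] += r[0];
        return buffer;
    }

    // reduce: merge the combined partials and evaluate the final result.
    static int reduce(Iterable<int[]> partials) {
        int[] buffer = {0};
        for (int[] p : partials) buffer[0] += p[0];
        return buffer[0];
    }

    // Runs combine per sub-group, then reduce over all partials.
    public static int run(List<List<int[]>> subGroups) {
        List<int[]> partials = new ArrayList<>();
        for (List<int[]> group : subGroups) partials.add(combine(group));
        return reduce(partials);
    }
}
```

The design rationale is data volume: combining locally on each node shrinks each sub-group to a single row before it is shuffled to the reducer.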
Modifier and Type | Method and Description
---|---
`Row` | `RowCsvInputFormat.fillRecord(Row reuse, Object[] parsedValues)`
`Row` | `ValuesInputFormat.nextRecord(Row reuse)`

Modifier and Type | Method and Description
---|---
`scala.collection.Seq<Row>` | `ValuesInputFormat.rows()`

Constructor and Description
---|
`ValuesInputFormat(scala.collection.Seq<Row> rows)`
Modifier and Type | Method and Description
---|---
`protected TableSink<Row>` | `CsvTableSink.copy()`
`TypeInformation<Row>` | `CsvTableSink.getOutputType()`

Modifier and Type | Method and Description
---|---
`String` | `CsvFormatter.map(Row row)`
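`CsvFormatter.map(Row row)` turns one row into one line of CSV output. A rough, self-contained sketch of that kind of formatting step (`CsvFormatSketch` and its null handling are assumptions for illustration; the real `CsvFormatter`'s delimiter, quoting, and null behavior may differ):

```java
public class CsvFormatSketch {
    // Joins the row's field values with a delimiter, writing nulls as empty.
    public static String format(Object[] rowFields, String delimiter) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < rowFields.length; i++) {
            if (i > 0) sb.append(delimiter);
            Object f = rowFields[i];
            if (f != null) sb.append(f);
        }
        return sb.toString();
    }
}
```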
Modifier and Type | Method and Description
---|---
`void` | `CsvTableSink.emitDataSet(DataSet<Row> dataSet)`
`void` | `CsvTableSink.emitDataStream(DataStream<Row> dataStream)`
Modifier and Type | Method and Description
---|---
`DataSet<Row>` | `CsvTableSource.getDataSet(ExecutionEnvironment execEnv)` Returns the data of the table as a `DataSet` of `Row`.
`DataStream<Row>` | `CsvTableSource.getDataStream(StreamExecutionEnvironment streamExecEnv)` Returns the data of the table as a `DataStream` of `Row`.
Modifier and Type | Method and Description
---|---
`Row` | `RowSerializer.copy(Row from)`
`Row` | `RowSerializer.copy(Row from, Row reuse)`
`Row` | `RowSerializer.createInstance()`
`Row` | `RowSerializer.deserialize(DataInputView source)`
`Row` | `RowSerializer.deserialize(Row reuse, DataInputView source)`
`Row` | `RowComparator.readWithKeyDenormalization(Row reuse, DataInputView source)`
Modifier and Type | Method and Description
---|---
`TypeComparator<Row>` | `RowTypeInfo.createComparator(int[] logicalKeyFields, boolean[] orders, int logicalFieldOffset, ExecutionConfig config)`
`TypeSerializer<Row>` | `RowTypeInfo.createSerializer(ExecutionConfig executionConfig)`
`CompositeType.TypeComparatorBuilder<Row>` | `RowTypeInfo.createTypeComparatorBuilder()`
`TypeComparator<Row>` | `RowComparator.duplicate()`
Modifier and Type | Method and Description
---|---
`int` | `RowComparator.compare(Row first, Row second)`
`Row` | `RowSerializer.copy(Row from)`
`Row` | `RowSerializer.copy(Row from, Row reuse)`
`Row` | `RowSerializer.deserialize(Row reuse, DataInputView source)`
`boolean` | `RowComparator.equalToReference(Row candidate)`
`int` | `RowComparator.hash(Row value)`
`void` | `RowComparator.putNormalizedKey(Row record, MemorySegment target, int offset, int numBytes)`
`Row` | `RowComparator.readWithKeyDenormalization(Row reuse, DataInputView source)`
`void` | `RowSerializer.serialize(Row value, DataOutputView target)`
`void` | `RowComparator.setReference(Row toCompare)`
`static void` | `NullMaskUtils.writeNullMask(int len, Row value, DataOutputView target)`
`void` | `NullMaskUtils$.writeNullMask(int len, Row value, DataOutputView target)`
`void` | `RowComparator.writeWithKeyNormalization(Row record, DataOutputView target)`
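`NullMaskUtils.writeNullMask(int len, Row value, DataOutputView target)` points at a common serialization technique: before the field data, write a bit mask recording which fields are null, so nulls cost one bit instead of a serialized value. A self-contained sketch of the idea (the bit layout here, one bit per field with the most significant bit first, is an assumption; `NullMaskSketch` is not the Flink class):

```java
public class NullMaskSketch {
    // Packs one bit per field (1 = null) into bytes, MSB first.
    public static byte[] writeNullMask(Object[] fields) {
        byte[] mask = new byte[(fields.length + 7) / 8];
        for (int i = 0; i < fields.length; i++) {
            if (fields[i] == null) {
                mask[i / 8] |= (byte) (0x80 >>> (i % 8));
            }
        }
        return mask;
    }
}
```

On deserialization the reader consumes the mask first and skips reading a value for every field whose bit is set, which is why the serializer and comparator must agree on the layout.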
Modifier and Type | Method and Description
---|---
`int` | `RowComparator.compareToReference(TypeComparator<Row> referencedComparator)`
Constructor and Description
---|
`Kafka08TableSource(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, String[] fieldNames, Class<?>[] fieldTypes)` Creates a Kafka 0.8 `StreamTableSource`.
`Kafka08TableSource(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, String[] fieldNames, TypeInformation<?>[] fieldTypes)` Creates a Kafka 0.8 `StreamTableSource`.
`Kafka09TableSource(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, String[] fieldNames, Class<?>[] fieldTypes)` Creates a Kafka 0.9 `StreamTableSource`.
`Kafka09TableSource(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, String[] fieldNames, TypeInformation<?>[] fieldTypes)` Creates a Kafka 0.9 `StreamTableSource`.
Modifier and Type | Method and Description
---|---
`Row` | `JsonRowDeserializationSchema.deserialize(byte[] message)`

Modifier and Type | Method and Description
---|---
`TypeInformation<Row>` | `JsonRowDeserializationSchema.getProducedType()`

Modifier and Type | Method and Description
---|---
`boolean` | `JsonRowDeserializationSchema.isEndOfStream(Row nextElement)`
Copyright © 2014–2017 The Apache Software Foundation. All rights reserved.