Package | Description
---|---
org.apache.flink.connectors.hive |
org.apache.flink.connectors.hive.read |
org.apache.flink.connectors.hive.util |
Modifier and Type | Method and Description
---|---
HiveTablePartition | HiveTableSource.HiveContinuousPartitionFetcherContext.toHiveTablePartition(org.apache.hadoop.hive.metastore.api.Partition partition) Convert partition to HiveTablePartition.
Modifier and Type | Method and Description
---|---
static List<HiveSourceSplit> | HiveSourceFileEnumerator.createInputSplits(int minNumSplits, List<HiveTablePartition> partitions, org.apache.hadoop.mapred.JobConf jobConf)
static int | HiveSourceFileEnumerator.getNumFiles(List<HiveTablePartition> partitions, org.apache.hadoop.mapred.JobConf jobConf)
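A hypothetical sketch of how the enumerator methods above might be combined. It assumes a `flink-connector-hive` dependency on the classpath and an already-populated partition list and `JobConf` (for example from a `HiveCatalog`); only the signatures listed on this page are used.

```java
import java.util.List;

import org.apache.hadoop.mapred.JobConf;

import org.apache.flink.connectors.hive.HiveSourceFileEnumerator;
import org.apache.flink.connectors.hive.HiveTablePartition;
import org.apache.flink.connectors.hive.read.HiveSourceSplit;

public class SplitEnumerationSketch {

    // `partitions` and `jobConf` are assumed to be obtained elsewhere,
    // e.g. from a configured Hive catalog; their setup is not part of this page.
    static void enumerate(List<HiveTablePartition> partitions, JobConf jobConf)
            throws Exception {
        // Count the files backing the given partitions.
        int numFiles = HiveSourceFileEnumerator.getNumFiles(partitions, jobConf);

        // Ask for at least one split per file (an arbitrary choice for this sketch).
        List<HiveSourceSplit> splits =
                HiveSourceFileEnumerator.createInputSplits(numFiles, partitions, jobConf);

        for (HiveSourceSplit split : splits) {
            // Each split carries the partition it was derived from.
            HiveTablePartition partition = split.getHiveTablePartition();
            System.out.println(split + " -> " + partition);
        }
    }
}
```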
Constructor and Description
---|
HiveSourceFileEnumerator(List<HiveTablePartition> partitions, org.apache.hadoop.mapred.JobConf jobConf)
Provider(List<HiveTablePartition> partitions, JobConfWrapper jobConfWrapper)
Modifier and Type | Field and Description
---|---
protected HiveTablePartition | HiveTableInputSplit.hiveTablePartition
protected HiveTablePartition | HiveSourceSplit.hiveTablePartition
Modifier and Type | Method and Description
---|---
HiveTablePartition | HiveTableInputSplit.getHiveTablePartition()
HiveTablePartition | HiveSourceSplit.getHiveTablePartition()
HiveTablePartition | HivePartitionFetcherContextBase.toHiveTablePartition(P partition)
HiveTablePartition | HivePartitionContext.toHiveTablePartition(P partition) Convert partition to HiveTablePartition.
Modifier and Type | Method and Description
---|---
static HiveTableInputSplit[] | HiveTableInputFormat.createInputSplits(int minNumSplits, List<HiveTablePartition> partitions, org.apache.hadoop.mapred.JobConf jobConf)
void | HiveInputFormatPartitionReader.open(List<HiveTablePartition> partitions)
Constructor and Description
---|
HiveSourceSplit(org.apache.hadoop.mapred.FileSplit fileSplit, HiveTablePartition hiveTablePartition, CheckpointedPosition readerPosition)
HiveSourceSplit(String id, Path filePath, long offset, long length, String[] hostnames, CheckpointedPosition readerPosition, HiveTablePartition hiveTablePartition)
HiveTableFileInputFormat(HiveTableInputFormat inputFormat, HiveTablePartition hiveTablePartition)
HiveTableInputSplit(int splitNumber, org.apache.hadoop.mapred.InputSplit hInputSplit, org.apache.hadoop.mapred.JobConf jobconf, HiveTablePartition hiveTablePartition)
Constructor and Description
---|
HiveTableInputFormat(org.apache.hadoop.mapred.JobConf jobConf, List<String> partitionKeys, DataType[] fieldTypes, String[] fieldNames, int[] projectedFields, Long limit, String hiveVersion, boolean useMapRedReader, List<HiveTablePartition> partitions)
Modifier and Type | Method and Description
---|---
static HiveTablePartition | HivePartitionUtils.toHiveTablePartition(List<String> partitionKeys, Properties tableProps, org.apache.hadoop.hive.metastore.api.Partition partition)
Modifier and Type | Method and Description
---|---
static List<HiveTablePartition> | HivePartitionUtils.getAllPartitions(org.apache.hadoop.mapred.JobConf jobConf, String hiveVersion, ObjectPath tablePath, CatalogTable catalogTable, HiveShim hiveShim, List<Map<String,String>> remainingPartitions) Returns all HiveTablePartitions of a Hive table, or a single HiveTablePartition if the table is not partitioned.
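A hypothetical sketch of calling `HivePartitionUtils.getAllPartitions` with the signature listed above. The database and table names are placeholders, the `JobConf` and `CatalogTable` are assumed to come from an existing `HiveCatalog`, and passing `null` for `remainingPartitions` is assumed here to mean "no pushed-down partition pruning".

```java
import java.util.List;
import java.util.Map;

import org.apache.hadoop.mapred.JobConf;

import org.apache.flink.connectors.hive.HiveTablePartition;
import org.apache.flink.connectors.hive.util.HivePartitionUtils;
import org.apache.flink.table.catalog.CatalogTable;
import org.apache.flink.table.catalog.ObjectPath;
import org.apache.flink.table.catalog.hive.client.HiveShim;
import org.apache.flink.table.catalog.hive.client.HiveShimLoader;

public class PartitionDiscoverySketch {

    // `jobConf` and `catalogTable` are assumed to be obtained from a configured
    // HiveCatalog; building them is outside the scope of this page.
    static List<HiveTablePartition> discover(
            JobConf jobConf, CatalogTable catalogTable, String hiveVersion) {
        ObjectPath tablePath = new ObjectPath("mydb", "mytable"); // hypothetical names
        HiveShim hiveShim = HiveShimLoader.loadHiveShim(hiveVersion);

        // Assumption for this sketch: a null remainingPartitions list requests
        // all partitions of the table (no partition pruning applied).
        List<Map<String, String>> remainingPartitions = null;

        return HivePartitionUtils.getAllPartitions(
                jobConf, hiveVersion, tablePath, catalogTable, hiveShim, remainingPartitions);
    }
}
```

For an unpartitioned table the returned list would contain a single `HiveTablePartition`, per the method description above.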
Copyright © 2014–2022 The Apache Software Foundation. All rights reserved.