Modifier and Type | Class and Description |
---|---|
class |
AggregateDataSet<T>
The result of
DataSet.aggregate . |
class |
CoGroupDataSet<L,R>
A specific
DataSet that results from a coGroup operation. |
class |
CrossDataSet<L,R>
A specific
DataSet that results from a cross operation. |
class |
JoinDataSet<L,R>
A specific
DataSet that results from a join operation. |
class |
PartitionSortedDataSet<T>
The result of
DataSet.sortPartition . |
Modifier and Type | Method and Description |
---|---|
<O> DataSet<O> |
CoGroupDataSet.apply(CoGroupFunction<L,R,O> coGrouper,
TypeInformation<O> evidence$5,
scala.reflect.ClassTag<O> evidence$6)
Creates a new
DataSet by passing each pair of co-grouped element lists to the given
function. |
<O> DataSet<O> |
CrossDataSet.apply(CrossFunction<L,R,O> crosser,
TypeInformation<O> evidence$3,
scala.reflect.ClassTag<O> evidence$4)
Creates a new
DataSet by passing each pair of values to the given function. |
<O> DataSet<O> |
JoinDataSet.apply(FlatJoinFunction<L,R,O> joiner,
TypeInformation<O> evidence$5,
scala.reflect.ClassTag<O> evidence$6)
Creates a new
DataSet by passing each pair of joined values to the given function. |
<O> DataSet<O> |
JoinFunctionAssigner.apply(FlatJoinFunction<L,R,O> fun,
TypeInformation<O> evidence$24,
scala.reflect.ClassTag<O> evidence$25) |
<O> DataSet<O> |
CoGroupDataSet.apply(scala.Function2<scala.collection.Iterator<L>,scala.collection.Iterator<R>,O> fun,
TypeInformation<O> evidence$1,
scala.reflect.ClassTag<O> evidence$2)
Creates a new
DataSet where the result for each pair of co-grouped element lists is the
result of the given function. |
<O> DataSet<O> |
CrossDataSet.apply(scala.Function2<L,R,O> fun,
TypeInformation<O> evidence$1,
scala.reflect.ClassTag<O> evidence$2)
Creates a new
DataSet where the result for each pair of elements is the result
of the given function. |
<O> DataSet<O> |
JoinDataSet.apply(scala.Function2<L,R,O> fun,
TypeInformation<O> evidence$1,
scala.reflect.ClassTag<O> evidence$2)
Creates a new
DataSet where the result for each pair of joined elements is the result
of the given function. |
<O> DataSet<O> |
JoinFunctionAssigner.apply(scala.Function2<L,R,O> fun,
TypeInformation<O> evidence$20,
scala.reflect.ClassTag<O> evidence$21) |
<O> DataSet<O> |
CoGroupDataSet.apply(scala.Function3<scala.collection.Iterator<L>,scala.collection.Iterator<R>,Collector<O>,scala.runtime.BoxedUnit> fun,
TypeInformation<O> evidence$3,
scala.reflect.ClassTag<O> evidence$4)
Creates a new
DataSet where the result for each pair of co-grouped element lists is the
result of the given function. |
<O> DataSet<O> |
JoinDataSet.apply(scala.Function3<L,R,Collector<O>,scala.runtime.BoxedUnit> fun,
TypeInformation<O> evidence$3,
scala.reflect.ClassTag<O> evidence$4)
Creates a new
DataSet by passing each pair of joined values to the given function. |
<O> DataSet<O> |
JoinFunctionAssigner.apply(scala.Function3<L,R,Collector<O>,scala.runtime.BoxedUnit> fun,
TypeInformation<O> evidence$22,
scala.reflect.ClassTag<O> evidence$23) |
<O> DataSet<O> |
JoinDataSet.apply(JoinFunction<L,R,O> fun,
TypeInformation<O> evidence$7,
scala.reflect.ClassTag<O> evidence$8)
Creates a new
DataSet by passing each pair of joined values to the given function. |
<O> DataSet<O> |
JoinFunctionAssigner.apply(JoinFunction<L,R,O> fun,
TypeInformation<O> evidence$26,
scala.reflect.ClassTag<O> evidence$27) |
<R> DataSet<R> |
GroupedDataSet.combineGroup(scala.Function2<scala.collection.Iterator<T>,Collector<R>,scala.runtime.BoxedUnit> fun,
TypeInformation<R> evidence$10,
scala.reflect.ClassTag<R> evidence$11)
Applies a CombineFunction on a grouped
DataSet . |
<R> DataSet<R> |
DataSet.combineGroup(scala.Function2<scala.collection.Iterator<T>,Collector<R>,scala.runtime.BoxedUnit> fun,
TypeInformation<R> evidence$26,
scala.reflect.ClassTag<R> evidence$27)
Applies a GroupCombineFunction on a non-grouped
DataSet . |
<R> DataSet<R> |
GroupedDataSet.combineGroup(GroupCombineFunction<T,R> combiner,
TypeInformation<R> evidence$12,
scala.reflect.ClassTag<R> evidence$13)
Applies a CombineFunction on a grouped
DataSet . |
<R> DataSet<R> |
DataSet.combineGroup(GroupCombineFunction<T,R> combiner,
TypeInformation<R> evidence$24,
scala.reflect.ClassTag<R> evidence$25)
Applies a GroupCombineFunction on a non-grouped
DataSet . |
<K,V> DataSet<scala.Tuple2<K,V>> |
ExecutionEnvironment.createHadoopInput(org.apache.hadoop.mapred.InputFormat<K,V> mapredInputFormat,
Class<K> key,
Class<V> value,
org.apache.hadoop.mapred.JobConf job,
TypeInformation<scala.Tuple2<K,V>> tpe)
Creates a
DataSet from the given InputFormat . |
<K,V> DataSet<scala.Tuple2<K,V>> |
ExecutionEnvironment.createHadoopInput(org.apache.hadoop.mapreduce.InputFormat<K,V> mapreduceInputFormat,
Class<K> key,
Class<V> value,
org.apache.hadoop.mapreduce.Job job,
TypeInformation<scala.Tuple2<K,V>> tpe)
Creates a
DataSet from the given InputFormat . |
<T> DataSet<T> |
ExecutionEnvironment.createInput(InputFormat<T,?> inputFormat,
scala.reflect.ClassTag<T> evidence$7,
TypeInformation<T> evidence$8)
Generic method to create an input DataSet with an
InputFormat . |
DataSet<T> |
DataSet.distinct()
Returns a distinct set of this DataSet.
|
<K> DataSet<T> |
DataSet.distinct(scala.Function1<T,K> fun,
TypeInformation<K> evidence$28)
Creates a new DataSet containing the distinct elements of this DataSet.
|
DataSet<T> |
DataSet.distinct(scala.collection.Seq<Object> fields)
Returns a distinct set of a tuple DataSet using field position keys.
|
DataSet<T> |
DataSet.distinct(String firstField,
scala.collection.Seq<String> otherFields)
Returns a distinct set of this DataSet using expression keys.
|
DataSet<T> |
DataSet.filter(FilterFunction<T> filter)
Creates a new DataSet that contains only the elements satisfying the given filter predicate.
|
DataSet<T> |
DataSet.filter(scala.Function1<T,Object> fun)
Creates a new DataSet that contains only the elements satisfying the given filter predicate.
|
DataSet<T> |
GroupedDataSet.first(int n)
Creates a new DataSet containing the first
n elements of each group of this DataSet. |
DataSet<T> |
DataSet.first(int n)
Creates a new DataSet containing the first
n elements of this DataSet. |
<R> DataSet<R> |
DataSet.flatMap(FlatMapFunction<T,R> flatMapper,
TypeInformation<R> evidence$12,
scala.reflect.ClassTag<R> evidence$13)
Creates a new DataSet by applying the given function to every element and flattening
the results.
|
<R> DataSet<R> |
DataSet.flatMap(scala.Function1<T,scala.collection.TraversableOnce<R>> fun,
TypeInformation<R> evidence$16,
scala.reflect.ClassTag<R> evidence$17)
Creates a new DataSet by applying the given function to every element and flattening
the results.
|
<R> DataSet<R> |
DataSet.flatMap(scala.Function2<T,Collector<R>,scala.runtime.BoxedUnit> fun,
TypeInformation<R> evidence$14,
scala.reflect.ClassTag<R> evidence$15)
Creates a new DataSet by applying the given function to every element and flattening
the results.
|
<T> DataSet<T> |
ExecutionEnvironment.fromCollection(scala.collection.Iterable<T> data,
scala.reflect.ClassTag<T> evidence$10,
TypeInformation<T> evidence$11)
Creates a DataSet from the given non-empty
Iterable . |
<T> DataSet<T> |
ExecutionEnvironment.fromCollection(scala.collection.Iterator<T> data,
scala.reflect.ClassTag<T> evidence$12,
TypeInformation<T> evidence$13)
Creates a DataSet from the given
Iterator . |
<T> DataSet<T> |
ExecutionEnvironment.fromElements(scala.collection.Seq<T> data,
scala.reflect.ClassTag<T> evidence$14,
TypeInformation<T> evidence$15)
Creates a new data set that contains the given elements.
|
<T> DataSet<T> |
ExecutionEnvironment.fromParallelCollection(SplittableIterator<T> iterator,
scala.reflect.ClassTag<T> evidence$16,
TypeInformation<T> evidence$17)
Creates a new data set that contains elements in the iterator.
|
DataSet<Object> |
ExecutionEnvironment.generateSequence(long from,
long to)
Creates a new data set that contains a sequence of numbers.
|
DataSet<T> |
DataSet.iterate(int maxIterations,
scala.Function1<DataSet<T>,DataSet<T>> stepFunction)
Creates a new DataSet by performing bulk iterations using the given step function.
|
<R> DataSet<T> |
DataSet.iterateDelta(DataSet<R> workset,
int maxIterations,
int[] keyFields,
boolean solutionSetUnManaged,
scala.Function2<DataSet<T>,DataSet<R>,scala.Tuple2<DataSet<T>,DataSet<R>>> stepFunction,
scala.reflect.ClassTag<R> evidence$32)
Creates a new DataSet by performing delta (or workset) iterations using the given step
function.
|
<R> DataSet<T> |
DataSet.iterateDelta(DataSet<R> workset,
int maxIterations,
int[] keyFields,
scala.Function2<DataSet<T>,DataSet<R>,scala.Tuple2<DataSet<T>,DataSet<R>>> stepFunction,
scala.reflect.ClassTag<R> evidence$31)
Creates a new DataSet by performing delta (or workset) iterations using the given step
function.
|
<R> DataSet<T> |
DataSet.iterateDelta(DataSet<R> workset,
int maxIterations,
String[] keyFields,
boolean solutionSetUnManaged,
scala.Function2<DataSet<T>,DataSet<R>,scala.Tuple2<DataSet<T>,DataSet<R>>> stepFunction,
scala.reflect.ClassTag<R> evidence$34)
Creates a new DataSet by performing delta (or workset) iterations using the given step
function.
|
<R> DataSet<T> |
DataSet.iterateDelta(DataSet<R> workset,
int maxIterations,
String[] keyFields,
scala.Function2<DataSet<T>,DataSet<R>,scala.Tuple2<DataSet<T>,DataSet<R>>> stepFunction,
scala.reflect.ClassTag<R> evidence$33)
Creates a new DataSet by performing delta (or workset) iterations using the given step
function.
|
DataSet<T> |
DataSet.iterateWithTermination(int maxIterations,
scala.Function1<DataSet<T>,scala.Tuple2<DataSet<T>,DataSet<?>>> stepFunction)
Creates a new DataSet by performing bulk iterations using the given step function.
|
DataSet<L> |
UnfinishedKeyPairOperation.leftInput() |
<R> DataSet<R> |
DataSet.map(scala.Function1<T,R> fun,
TypeInformation<R> evidence$4,
scala.reflect.ClassTag<R> evidence$5)
Creates a new DataSet by applying the given function to every element of this DataSet.
|
<R> DataSet<R> |
DataSet.map(MapFunction<T,R> mapper,
TypeInformation<R> evidence$2,
scala.reflect.ClassTag<R> evidence$3)
Creates a new DataSet by applying the given function to every element of this DataSet.
|
<R> DataSet<R> |
DataSet.mapPartition(scala.Function1<scala.collection.Iterator<T>,scala.collection.TraversableOnce<R>> fun,
TypeInformation<R> evidence$10,
scala.reflect.ClassTag<R> evidence$11)
Creates a new DataSet by applying the given function to each parallel partition of the
DataSet.
|
<R> DataSet<R> |
DataSet.mapPartition(scala.Function2<scala.collection.Iterator<T>,Collector<R>,scala.runtime.BoxedUnit> fun,
TypeInformation<R> evidence$8,
scala.reflect.ClassTag<R> evidence$9)
Creates a new DataSet by applying the given function to each parallel partition of the
DataSet.
|
<R> DataSet<R> |
DataSet.mapPartition(MapPartitionFunction<T,R> partitionMapper,
TypeInformation<R> evidence$6,
scala.reflect.ClassTag<R> evidence$7)
Creates a new DataSet by applying the given function to each parallel partition of the
DataSet.
|
DataSet<T> |
GroupedDataSet.maxBy(scala.collection.Seq<Object> fields)
Applies a special case of a reduce transformation
(maxBy) on a grouped DataSet.
The transformation consecutively calls a ReduceFunction
until only a single element remains, which is the result of the transformation. |
DataSet<T> |
DataSet.maxBy(scala.collection.Seq<Object> fields)
Selects an element with maximum value.
|
DataSet<T> |
GroupedDataSet.minBy(scala.collection.Seq<Object> fields)
Applies a special case of a reduce transformation
minBy on a grouped DataSet . |
DataSet<T> |
DataSet.minBy(scala.collection.Seq<Object> fields)
Selects an element with minimum value.
|
DataSet<T> |
DataSet.name(String name)
Sets the name of the DataSet.
|
<K> DataSet<T> |
DataSet.partitionByHash(scala.Function1<T,K> fun,
TypeInformation<K> evidence$35)
Partitions a DataSet using the specified key selector function.
|
DataSet<T> |
DataSet.partitionByHash(scala.collection.Seq<Object> fields)
Hash-partitions a DataSet on the specified tuple field positions.
|
DataSet<T> |
DataSet.partitionByHash(String firstField,
scala.collection.Seq<String> otherFields)
Hash-partitions a DataSet on the specified fields.
|
<K> DataSet<T> |
DataSet.partitionByRange(scala.Function1<T,K> fun,
TypeInformation<K> evidence$36)
Range-partitions a DataSet using the specified key selector function.
|
DataSet<T> |
DataSet.partitionByRange(scala.collection.Seq<Object> fields)
Range-partitions a DataSet on the specified tuple field positions.
|
DataSet<T> |
DataSet.partitionByRange(String firstField,
scala.collection.Seq<String> otherFields)
Range-partitions a DataSet on the specified fields.
|
<K> DataSet<T> |
DataSet.partitionCustom(Partitioner<K> partitioner,
scala.Function1<T,K> fun,
TypeInformation<K> evidence$39)
Partitions a DataSet on the key returned by the selector, using a custom partitioner.
|
<K> DataSet<T> |
DataSet.partitionCustom(Partitioner<K> partitioner,
int field,
TypeInformation<K> evidence$37)
Partitions a tuple DataSet on the specified key fields using a custom partitioner.
|
<K> DataSet<T> |
DataSet.partitionCustom(Partitioner<K> partitioner,
String field,
TypeInformation<K> evidence$38)
Partitions a POJO DataSet on the specified key fields using a custom partitioner.
|
<T> DataSet<T> |
ExecutionEnvironment.readCsvFile(String filePath,
String lineDelimiter,
String fieldDelimiter,
Character quoteCharacter,
boolean ignoreFirstLine,
String ignoreComments,
boolean lenient,
int[] includedFields,
String[] pojoFields,
scala.reflect.ClassTag<T> evidence$1,
TypeInformation<T> evidence$2)
Creates a DataSet by reading the given CSV file.
|
<T> DataSet<T> |
ExecutionEnvironment.readFile(FileInputFormat<T> inputFormat,
String filePath,
scala.reflect.ClassTag<T> evidence$5,
TypeInformation<T> evidence$6)
Creates a new DataSource by reading the specified file using the custom
FileInputFormat . |
<T> DataSet<T> |
ExecutionEnvironment.readFileOfPrimitives(String filePath,
String delimiter,
scala.reflect.ClassTag<T> evidence$3,
TypeInformation<T> evidence$4)
Creates a DataSet that represents the primitive type produced by reading the
given file in a delimited way. This method is similar to
readCsvFile with a
single field, but it produces the DataSet directly, not through a Tuple. |
<K,V> DataSet<scala.Tuple2<K,V>> |
ExecutionEnvironment.readHadoopFile(org.apache.hadoop.mapred.FileInputFormat<K,V> mapredInputFormat,
Class<K> key,
Class<V> value,
String inputPath,
org.apache.hadoop.mapred.JobConf job,
TypeInformation<scala.Tuple2<K,V>> tpe)
Creates a
DataSet from the given FileInputFormat . |
<K,V> DataSet<scala.Tuple2<K,V>> |
ExecutionEnvironment.readHadoopFile(org.apache.hadoop.mapreduce.lib.input.FileInputFormat<K,V> mapreduceInputFormat,
Class<K> key,
Class<V> value,
String inputPath,
org.apache.hadoop.mapreduce.Job job,
TypeInformation<scala.Tuple2<K,V>> tpe)
Creates a
DataSet from the given FileInputFormat . |
<K,V> DataSet<scala.Tuple2<K,V>> |
ExecutionEnvironment.readHadoopFile(org.apache.hadoop.mapred.FileInputFormat<K,V> mapredInputFormat,
Class<K> key,
Class<V> value,
String inputPath,
TypeInformation<scala.Tuple2<K,V>> tpe)
Creates a
DataSet from the given FileInputFormat . |
<K,V> DataSet<scala.Tuple2<K,V>> |
ExecutionEnvironment.readHadoopFile(org.apache.hadoop.mapreduce.lib.input.FileInputFormat<K,V> mapreduceInputFormat,
Class<K> key,
Class<V> value,
String inputPath,
TypeInformation<scala.Tuple2<K,V>> tpe)
Creates a
DataSet from the given
FileInputFormat . |
<K,V> DataSet<scala.Tuple2<K,V>> |
ExecutionEnvironment.readSequenceFile(Class<K> key,
Class<V> value,
String inputPath,
TypeInformation<scala.Tuple2<K,V>> tpe)
|
DataSet<String> |
ExecutionEnvironment.readTextFile(String filePath,
String charsetName)
Creates a DataSet of Strings produced by reading the given file line wise.
|
DataSet<StringValue> |
ExecutionEnvironment.readTextFileWithValue(String filePath,
String charsetName)
Creates a DataSet of mutable StringValue objects produced by reading the given file line wise.
|
DataSet<T> |
DataSet.rebalance()
Enforces a re-balancing of the DataSet, i.e., the DataSet is evenly distributed over all
parallel instances of the
following task.
|
DataSet<T> |
GroupedDataSet.reduce(scala.Function2<T,T,T> fun)
Creates a new
DataSet by merging the elements of each group (elements with the same key)
using an associative reduce function. |
DataSet<T> |
DataSet.reduce(scala.Function2<T,T,T> fun)
Creates a new
DataSet by merging the elements of this DataSet using an associative reduce
function. |
DataSet<T> |
GroupedDataSet.reduce(scala.Function2<T,T,T> fun,
ReduceOperatorBase.CombineHint strategy)
Special
reduce operation for explicitly telling the system what strategy to use for the
combine phase. |
DataSet<T> |
GroupedDataSet.reduce(ReduceFunction<T> reducer)
Creates a new
DataSet by merging the elements of each group (elements with the same key)
using an associative reduce function. |
DataSet<T> |
DataSet.reduce(ReduceFunction<T> reducer)
Creates a new
DataSet by merging the elements of this DataSet using an associative reduce
function. |
DataSet<T> |
GroupedDataSet.reduce(ReduceFunction<T> reducer,
ReduceOperatorBase.CombineHint strategy)
Special
reduce operation for explicitly telling the system what strategy to use for the
combine phase. |
<R> DataSet<R> |
GroupedDataSet.reduceGroup(scala.Function1<scala.collection.Iterator<T>,R> fun,
TypeInformation<R> evidence$4,
scala.reflect.ClassTag<R> evidence$5)
Creates a new
DataSet by passing for each group (elements with the same key) the list
of elements to the group reduce function. |
<R> DataSet<R> |
DataSet.reduceGroup(scala.Function1<scala.collection.Iterator<T>,R> fun,
TypeInformation<R> evidence$22,
scala.reflect.ClassTag<R> evidence$23)
Creates a new
DataSet by passing all elements in this DataSet to the group reduce function. |
<R> DataSet<R> |
GroupedDataSet.reduceGroup(scala.Function2<scala.collection.Iterator<T>,Collector<R>,scala.runtime.BoxedUnit> fun,
TypeInformation<R> evidence$6,
scala.reflect.ClassTag<R> evidence$7)
Creates a new
DataSet by passing for each group (elements with the same key) the list
of elements to the group reduce function. |
<R> DataSet<R> |
DataSet.reduceGroup(scala.Function2<scala.collection.Iterator<T>,Collector<R>,scala.runtime.BoxedUnit> fun,
TypeInformation<R> evidence$20,
scala.reflect.ClassTag<R> evidence$21)
Creates a new
DataSet by passing all elements in this DataSet to the group reduce function. |
<R> DataSet<R> |
GroupedDataSet.reduceGroup(GroupReduceFunction<T,R> reducer,
TypeInformation<R> evidence$8,
scala.reflect.ClassTag<R> evidence$9)
Creates a new
DataSet by passing for each group (elements with the same key) the list
of elements to the GroupReduceFunction . |
<R> DataSet<R> |
DataSet.reduceGroup(GroupReduceFunction<T,R> reducer,
TypeInformation<R> evidence$18,
scala.reflect.ClassTag<R> evidence$19)
Creates a new
DataSet by passing all elements in this DataSet to the group reduce function. |
DataSet<T> |
DataSet.registerAggregator(String name,
Aggregator<?> aggregator)
Registers an
Aggregator
for the iteration. |
DataSet<R> |
UnfinishedKeyPairOperation.rightInput() |
DataSet<T> |
DataSet.setParallelism(int parallelism)
Sets the parallelism of this operation.
|
<K> DataSet<T> |
PartitionSortedDataSet.sortPartition(scala.Function1<T,K> fun,
Order order,
TypeInformation<K> evidence$2) |
<K> DataSet<T> |
DataSet.sortPartition(scala.Function1<T,K> fun,
Order order,
TypeInformation<K> evidence$40)
Locally sorts the partitions of the DataSet on the extracted key in the specified order.
|
DataSet<T> |
PartitionSortedDataSet.sortPartition(int field,
Order order)
Appends the given field and order to the sort-partition operator.
|
DataSet<T> |
DataSet.sortPartition(int field,
Order order)
Locally sorts the partitions of the DataSet on the specified field in the specified order.
|
DataSet<T> |
PartitionSortedDataSet.sortPartition(String field,
Order order)
Appends the given field and order to the sort-partition operator.
|
DataSet<T> |
DataSet.sortPartition(String field,
Order order)
Locally sorts the partitions of the DataSet on the specified field in the specified order.
|
DataSet<T> |
DataSet.union(DataSet<T> other)
Creates a new DataSet containing the elements from both
this DataSet and the other
DataSet. |
<T> DataSet<T> |
ExecutionEnvironment.union(scala.collection.Seq<DataSet<T>> sets) |
DataSet<T> |
DataSet.withBroadcastSet(DataSet<?> data,
String name)
Adds a certain data set as a broadcast set to this operator.
|
DataSet<T> |
DataSet.withForwardedFields(scala.collection.Seq<String> forwardedFields) |
DataSet<T> |
DataSet.withForwardedFieldsFirst(scala.collection.Seq<String> forwardedFields) |
DataSet<T> |
DataSet.withForwardedFieldsSecond(scala.collection.Seq<String> forwardedFields) |
DataSet<T> |
DataSet.withParameters(Configuration parameters) |
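The `evidence$N` parameters that appear throughout the table above are implicit `TypeInformation`/`ClassTag` arguments; in Scala source code the compiler supplies them automatically, and user code passes only the function. A hedged sketch of a few of the transformations listed (assumes `flink-scala` on the classpath; names are illustrative):

```scala
import org.apache.flink.api.scala._

val env = ExecutionEnvironment.getExecutionEnvironment

// map, groupBy, and GroupedDataSet.reduce: a word-count style pipeline
val counts = env.fromElements("a", "b", "a")
  .map(w => (w, 1))
  .groupBy(0)
  .reduce((l, r) => (l._1, l._2 + r._2))

// JoinDataSet.apply: the function literal is passed where the table shows
// scala.Function2<L,R,O>; the TypeInformation/ClassTag evidence is implicit
val left  = env.fromElements((1, "a"))
val right = env.fromElements((1, "x"))
val joined = left.join(right).where(0).equalTo(0) { (l, r) => (l._2, r._2) }
```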
Modifier and Type | Method and Description |
---|---|
<O> UnfinishedCoGroupOperation<T,O> |
DataSet.coGroup(DataSet<O> other,
scala.reflect.ClassTag<O> evidence$30)
For each key in
this DataSet and the other DataSet, create a tuple containing a list
of elements for that key from both DataSets. |
static <L,R> CrossDataSet<L,R> |
CrossDataSet.createCrossOperator(DataSet<L> leftInput,
DataSet<R> rightInput,
CrossOperatorBase.CrossHint crossHint)
Creates a default cross operation with Tuple2 as result.
|
<L,R> CrossDataSet<L,R> |
CrossDataSet$.createCrossOperator(DataSet<L> leftInput,
DataSet<R> rightInput,
CrossOperatorBase.CrossHint crossHint)
Creates a default cross operation with Tuple2 as result.
|
<O> CrossDataSet<T,O> |
DataSet.cross(DataSet<O> other)
Creates a new DataSet by forming the cartesian product of
this DataSet and the other
DataSet. |
<O> CrossDataSet<T,O> |
DataSet.crossWithHuge(DataSet<O> other)
Special
cross operation for explicitly telling the system that the left side is assumed
to be a lot smaller than the right side of the cartesian product. |
<O> CrossDataSet<T,O> |
DataSet.crossWithTiny(DataSet<O> other)
Special
cross operation for explicitly telling the system that the right side is assumed
to be a lot smaller than the left side of the cartesian product. |
<O> UnfinishedOuterJoinOperation<T,O> |
DataSet.fullOuterJoin(DataSet<O> other)
Creates a new DataSet by performing a full outer join of
this DataSet
with the other DataSet, by combining two elements of two DataSets on
key equality. |
<O> UnfinishedOuterJoinOperation<T,O> |
DataSet.fullOuterJoin(DataSet<O> other,
JoinOperatorBase.JoinHint strategy)
Special
fullOuterJoin operation for explicitly telling the system what join strategy to
use. |
<R> DataSet<T> |
DataSet.iterateDelta(DataSet<R> workset,
int maxIterations,
int[] keyFields,
boolean solutionSetUnManaged,
scala.Function2<DataSet<T>,DataSet<R>,scala.Tuple2<DataSet<T>,DataSet<R>>> stepFunction,
scala.reflect.ClassTag<R> evidence$32)
Creates a new DataSet by performing delta (or workset) iterations using the given step
function.
|
<R> DataSet<T> |
DataSet.iterateDelta(DataSet<R> workset,
int maxIterations,
int[] keyFields,
scala.Function2<DataSet<T>,DataSet<R>,scala.Tuple2<DataSet<T>,DataSet<R>>> stepFunction,
scala.reflect.ClassTag<R> evidence$31)
Creates a new DataSet by performing delta (or workset) iterations using the given step
function.
|
<R> DataSet<T> |
DataSet.iterateDelta(DataSet<R> workset,
int maxIterations,
String[] keyFields,
boolean solutionSetUnManaged,
scala.Function2<DataSet<T>,DataSet<R>,scala.Tuple2<DataSet<T>,DataSet<R>>> stepFunction,
scala.reflect.ClassTag<R> evidence$34)
Creates a new DataSet by performing delta (or workset) iterations using the given step
function.
|
<R> DataSet<T> |
DataSet.iterateDelta(DataSet<R> workset,
int maxIterations,
String[] keyFields,
scala.Function2<DataSet<T>,DataSet<R>,scala.Tuple2<DataSet<T>,DataSet<R>>> stepFunction,
scala.reflect.ClassTag<R> evidence$33)
Creates a new DataSet by performing delta (or workset) iterations using the given step
function.
|
<O> UnfinishedJoinOperation<T,O> |
DataSet.join(DataSet<O> other)
Creates a new DataSet by joining
this DataSet with the other DataSet. |
<O> UnfinishedJoinOperation<T,O> |
DataSet.join(DataSet<O> other,
JoinOperatorBase.JoinHint strategy)
Special
join operation for explicitly telling the system what join strategy to use. |
<O> UnfinishedJoinOperation<T,O> |
DataSet.joinWithHuge(DataSet<O> other)
Special
join operation for explicitly telling the system that the left side is assumed
to be a lot smaller than the right side of the join. |
<O> UnfinishedJoinOperation<T,O> |
DataSet.joinWithTiny(DataSet<O> other)
Special
join operation for explicitly telling the system that the right side is assumed
to be a lot smaller than the left side of the join. |
<O> UnfinishedOuterJoinOperation<T,O> |
DataSet.leftOuterJoin(DataSet<O> other)
An outer join on the left side.
|
<O> UnfinishedOuterJoinOperation<T,O> |
DataSet.leftOuterJoin(DataSet<O> other,
JoinOperatorBase.JoinHint strategy)
An outer join on the left side.
|
<O> UnfinishedOuterJoinOperation<T,O> |
DataSet.rightOuterJoin(DataSet<O> other)
An outer join on the right side.
|
<O> UnfinishedOuterJoinOperation<T,O> |
DataSet.rightOuterJoin(DataSet<O> other,
JoinOperatorBase.JoinHint strategy)
An outer join on the right side.
|
DataSet<T> |
DataSet.union(DataSet<T> other)
Creates a new DataSet containing the elements from both
this DataSet and the other
DataSet. |
DataSet<T> |
DataSet.withBroadcastSet(DataSet<?> data,
String name)
Adds a certain data set as a broadcast set to this operator.
|
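The iteration and join-hint operations above can be sketched as follows (hedged; assumes `flink-scala` on the classpath, variable names are illustrative):

```scala
import org.apache.flink.api.scala._
import org.apache.flink.api.common.operators.base.JoinOperatorBase.JoinHint

val env = ExecutionEnvironment.getExecutionEnvironment

// bulk iteration: apply the step function for up to 10 supersteps
val start  = env.fromElements(0L)
val result = start.iterate(10) { ds => ds.map(_ + 1L) }

// join variants with explicit size hints for the optimizer
val big   = env.generateSequence(1L, 1000L).map(i => (i, i))
val small = env.fromElements((1L, "one"))
big.joinWithTiny(small).where(0).equalTo(0)   // right side assumed small
big.join(small, JoinHint.BROADCAST_HASH_SECOND).where(0).equalTo(0)
```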
Modifier and Type | Method and Description |
---|---|
DataSet<T> |
DataSet.iterate(int maxIterations,
scala.Function1<DataSet<T>,DataSet<T>> stepFunction)
Creates a new DataSet by performing bulk iterations using the given step function.
|
<R> DataSet<T> |
DataSet.iterateDelta(DataSet<R> workset,
int maxIterations,
int[] keyFields,
boolean solutionSetUnManaged,
scala.Function2<DataSet<T>,DataSet<R>,scala.Tuple2<DataSet<T>,DataSet<R>>> stepFunction,
scala.reflect.ClassTag<R> evidence$32)
Creates a new DataSet by performing delta (or workset) iterations using the given step
function.
|
<R> DataSet<T> |
DataSet.iterateDelta(DataSet<R> workset,
int maxIterations,
int[] keyFields,
scala.Function2<DataSet<T>,DataSet<R>,scala.Tuple2<DataSet<T>,DataSet<R>>> stepFunction,
scala.reflect.ClassTag<R> evidence$31)
Creates a new DataSet by performing delta (or workset) iterations using the given step
function.
|
<R> DataSet<T> |
DataSet.iterateDelta(DataSet<R> workset,
int maxIterations,
String[] keyFields,
boolean solutionSetUnManaged,
scala.Function2<DataSet<T>,DataSet<R>,scala.Tuple2<DataSet<T>,DataSet<R>>> stepFunction,
scala.reflect.ClassTag<R> evidence$34)
Creates a new DataSet by performing delta (or workset) iterations using the given step
function.
|
<R> DataSet<T> |
DataSet.iterateDelta(DataSet<R> workset,
int maxIterations,
String[] keyFields,
scala.Function2<DataSet<T>,DataSet<R>,scala.Tuple2<DataSet<T>,DataSet<R>>> stepFunction,
scala.reflect.ClassTag<R> evidence$33)
Creates a new DataSet by performing delta (or workset) iterations using the given step
function.
|
DataSet<T> |
DataSet.iterateWithTermination(int maxIterations,
scala.Function1<DataSet<T>,scala.Tuple2<DataSet<T>,DataSet<?>>> stepFunction)
Creates a new DataSet by performing bulk iterations using the given step function.
|
<T> DataSet<T> |
ExecutionEnvironment.union(scala.collection.Seq<DataSet<T>> sets) |
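As a hedged illustration of the iteration operators listed above, here is a minimal sketch of `iterateWithTermination` in the Scala API (assumes a Flink batch dependency on the classpath; the values and the termination threshold are made up):

```scala
import org.apache.flink.api.scala._

object IterationSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val initial: DataSet[Double] = env.fromElements(1.0, 2.0, 4.0)

    // The step function returns the next solution plus a termination
    // criterion; the iteration stops when the criterion DataSet becomes
    // empty or after 100 iterations, whichever comes first.
    val result = initial.iterateWithTermination(100) { current =>
      val next = current.map(_ / 2.0)
      val criterion = next.filter(_ > 0.01)
      (next, criterion)
    }
    result.print()
  }
}
```

`iterateDelta` follows the same shape, except the step function receives the solution set and the workset and returns a `(solutionSetDelta, nextWorkset)` pair, keyed by the given key fields.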
Constructor and Description |
---|
CoGroupDataSet(CoGroupOperator<L,R,scala.Tuple2<Object,Object>> defaultCoGroup,
DataSet<L> leftInput,
DataSet<R> rightInput,
Keys<L> leftKeys,
Keys<R> rightKeys) |
CrossDataSet(CrossOperator<L,R,scala.Tuple2<L,R>> defaultCross,
DataSet<L> leftInput,
DataSet<R> rightInput) |
GroupedDataSet(DataSet<T> set,
Keys<T> keys,
scala.reflect.ClassTag<T> evidence$1) |
JoinDataSet(JoinOperator.EquiJoin<L,R,scala.Tuple2<L,R>> defaultJoin,
DataSet<L> leftInput,
DataSet<R> rightInput,
Keys<L> leftKeys,
Keys<R> rightKeys) |
UnfinishedCoGroupOperation(DataSet<L> leftInput,
DataSet<R> rightInput,
scala.reflect.ClassTag<L> evidence$1,
scala.reflect.ClassTag<R> evidence$2) |
UnfinishedJoinOperation(DataSet<L> leftSet,
DataSet<R> rightSet,
JoinOperatorBase.JoinHint joinHint) |
UnfinishedJoinOperationBase(DataSet<L> leftSet,
DataSet<R> rightSet,
JoinOperatorBase.JoinHint joinHint,
JoinType joinType) |
UnfinishedKeyPairOperation(DataSet<L> leftInput,
DataSet<R> rightInput) |
UnfinishedOuterJoinOperation(DataSet<L> leftSet,
DataSet<R> rightSet,
JoinOperatorBase.JoinHint joinHint,
JoinType joinType) |
Modifier and Type | Method and Description |
---|---|
<R> DataSet<R> |
OnGroupedDataSet.combineGroupWith(scala.Function1<scala.collection.immutable.Stream<T>,R> fun,
TypeInformation<R> evidence$4,
scala.reflect.ClassTag<R> evidence$5)
Same as a reducing operation but acts only locally;
ideal for performing pre-aggregation before a reduction.
|
DataSet<T> |
OnDataSet.filterWith(scala.Function1<T,Object> fun)
Applies a predicate
fun to each item of the data set, keeping only those for which
the predicate holds. |
<R> DataSet<R> |
OnDataSet.flatMapWith(scala.Function1<T,scala.collection.TraversableOnce<R>> fun,
TypeInformation<R> evidence$5,
scala.reflect.ClassTag<R> evidence$6)
Applies a function
fun to each item of the data set, producing a collection of items
that will be flattened in the resulting data set. |
<R> DataSet<R> |
OnDataSet.mapPartitionWith(scala.Function1<scala.collection.immutable.Stream<T>,R> fun,
TypeInformation<R> evidence$3,
scala.reflect.ClassTag<R> evidence$4)
Applies a function
fun to a partition as a whole. |
<R> DataSet<R> |
OnDataSet.mapWith(scala.Function1<T,R> fun,
TypeInformation<R> evidence$1,
scala.reflect.ClassTag<R> evidence$2)
Applies a function
fun to each item of the data set. |
<O> DataSet<O> |
OnCrossDataSet.projecting(scala.Function2<L,R,O> fun,
TypeInformation<O> evidence$1,
scala.reflect.ClassTag<O> evidence$2)
Starting from a cross data set, uses the function
fun to project elements from
both input data sets into the resulting data set. |
<O> DataSet<O> |
OnJoinFunctionAssigner.projecting(scala.Function2<L,R,O> fun,
TypeInformation<O> evidence$1,
scala.reflect.ClassTag<O> evidence$2)
Joins the data sets, using the function
fun to project elements from both inputs into the
resulting data set. |
<O> DataSet<O> |
OnCoGroupDataSet.projecting(scala.Function2<scala.collection.immutable.Stream<L>,scala.collection.immutable.Stream<R>,O> fun,
TypeInformation<O> evidence$1,
scala.reflect.ClassTag<O> evidence$2)
Co-groups the data sets, using the function
fun to project elements from both inputs into
the resulting data set. |
<R> DataSet<R> |
OnDataSet.reduceGroupWith(scala.Function1<scala.collection.immutable.Stream<T>,R> fun,
TypeInformation<R> evidence$7,
scala.reflect.ClassTag<R> evidence$8)
Applies a reducer
fun to a grouped data set. |
<R> DataSet<R> |
OnGroupedDataSet.reduceGroupWith(scala.Function1<scala.collection.immutable.Stream<T>,R> fun,
TypeInformation<R> evidence$2,
scala.reflect.ClassTag<R> evidence$3)
Reduces the data set group-wise with a reducer
fun. |
DataSet<T> |
OnDataSet.reduceWith(scala.Function2<T,T,T> fun)
Applies a reducer
fun to the data set. |
DataSet<T> |
OnGroupedDataSet.reduceWith(scala.Function2<T,T,T> fun)
Reduces the whole data set with a reducer
fun. |
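The `...With` variants above come from the `acceptPartialFunctions` extension, which lets Scala partial functions (e.g. pattern matches that deconstruct tuples) be passed directly where plain lambdas would otherwise be required. A minimal sketch, assuming a Flink batch dependency (the sample data is made up):

```scala
import org.apache.flink.api.scala._
import org.apache.flink.api.scala.extensions._

object ExtensionsSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val pairs = env.fromElements(("a", 1), ("b", 2), ("a", 3))

    // filterWith/mapWith accept partial functions that deconstruct the tuple,
    // avoiding explicit _._1 / _._2 accessors.
    val result = pairs
      .filterWith { case (_, n) => n > 1 }
      .mapWith { case (key, n) => (key, n * 10) }
    result.print()
  }
}
```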
Constructor and Description |
---|
OnDataSet(DataSet<T> ds) |
Modifier and Type | Method and Description |
---|---|
<T> DataSet<T> |
BatchTableEnvironment.toDataSet(Table table,
TypeInformation<T> evidence$1)
Converts the given
Table into a DataSet of a specified type. |
<T> DataSet<T> |
TableConversions.toDataSet(TypeInformation<T> evidence$1)
Converts the
Table to a DataSet of the specified type. |
Modifier and Type | Method and Description |
---|---|
<T> Table |
BatchTableEnvironment.fromDataSet(DataSet<T> dataSet)
Converts the given
DataSet into a Table . |
<T> Table |
BatchTableEnvironment.fromDataSet(DataSet<T> dataSet,
scala.collection.Seq<Expression> fields)
Converts the given
DataSet into a Table with specified field names. |
<T> void |
BatchTableEnvironment.registerDataSet(String name,
DataSet<T> dataSet)
Registers the given
DataSet as table in the
TableEnvironment 's catalog. |
<T> void |
BatchTableEnvironment.registerDataSet(String name,
DataSet<T> dataSet,
scala.collection.Seq<Expression> fields)
Registers the given
DataSet as table with specified field names in the
TableEnvironment 's catalog. |
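A hedged sketch of the DataSet/Table round trip described above (the exact environment factory names vary by Flink version; `BatchTableEnvironment.create` and the field names `'id`, `'name` are assumptions for illustration):

```scala
import org.apache.flink.api.scala._
import org.apache.flink.table.api.scala._

object TableConversionSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val tEnv = BatchTableEnvironment.create(env)

    val orders = env.fromElements((1L, "alice"), (2L, "bob"))
    // Either register the DataSet under a name in the catalog...
    tEnv.registerDataSet("Orders", orders, 'id, 'name)
    // ...or convert it directly into a Table with field names.
    val table = tEnv.fromDataSet(orders, 'id, 'name)

    // Convert back to a DataSet of the original tuple type.
    val roundTrip: DataSet[(Long, String)] = tEnv.toDataSet[(Long, String)](table)
    roundTrip.print()
  }
}
```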
Constructor and Description |
---|
DataSetConversions(DataSet<T> dataSet,
TypeInformation<T> inputType) |
Modifier and Type | Method and Description |
---|---|
DataSet<KMeans.Centroid> |
KMeans$.getCentroidDataSet(ParameterTool params,
ExecutionEnvironment env) |
static DataSet<KMeans.Centroid> |
KMeans.getCentroidDataSet(ParameterTool params,
ExecutionEnvironment env) |
DataSet<KMeans.Point> |
KMeans$.getPointDataSet(ParameterTool params,
ExecutionEnvironment env) |
static DataSet<KMeans.Point> |
KMeans.getPointDataSet(ParameterTool params,
ExecutionEnvironment env) |
Modifier and Type | Method and Description |
---|---|
DataSet<scala.Tuple2<K,LongValue>> |
Graph.getDegrees()
Returns the degree of all vertices in the graph.
|
DataSet<scala.Tuple2<K,K>> |
Graph.getEdgeIds() |
DataSet<Edge<K,EV>> |
Graph.getEdges() |
DataSet<scala.Tuple3<K,K,EV>> |
Graph.getEdgesAsTuple3() |
DataSet<Triplet<K,VV,EV>> |
Graph.getTriplets() |
DataSet<K> |
Graph.getVertexIds() |
DataSet<Vertex<K,VV>> |
Graph.getVertices() |
DataSet<scala.Tuple2<K,VV>> |
Graph.getVerticesAsTuple2() |
<T> DataSet<T> |
Graph.groupReduceOnEdges(EdgesFunction<K,EV,T> edgesFunction,
EdgeDirection direction,
TypeInformation<T> evidence$97,
scala.reflect.ClassTag<T> evidence$98)
Computes an aggregate over the edges of each vertex.
|
<T> DataSet<T> |
Graph.groupReduceOnEdges(EdgesFunctionWithVertexValue<K,VV,EV,T> edgesFunction,
EdgeDirection direction,
TypeInformation<T> evidence$95,
scala.reflect.ClassTag<T> evidence$96)
Computes an aggregate over the edges of each vertex.
|
<T> DataSet<T> |
Graph.groupReduceOnNeighbors(NeighborsFunction<K,VV,EV,T> neighborsFunction,
EdgeDirection direction,
TypeInformation<T> evidence$101,
scala.reflect.ClassTag<T> evidence$102)
Computes an aggregate over the neighbors (edges and vertices) of each
vertex.
|
<T> DataSet<T> |
Graph.groupReduceOnNeighbors(NeighborsFunctionWithVertexValue<K,VV,EV,T> neighborsFunction,
EdgeDirection direction,
TypeInformation<T> evidence$99,
scala.reflect.ClassTag<T> evidence$100)
Computes an aggregate over the neighbors (edges and vertices) of each
vertex.
|
DataSet<scala.Tuple2<K,LongValue>> |
Graph.inDegrees()
Returns the in-degree of all vertices in the graph.
|
DataSet<scala.Tuple2<K,LongValue>> |
Graph.outDegrees()
Returns the out-degree of all vertices in the graph.
|
DataSet<scala.Tuple2<K,EV>> |
Graph.reduceOnEdges(ReduceEdgesFunction<EV> reduceEdgesFunction,
EdgeDirection direction)
Computes a reduce transformation over the edge values of each vertex.
|
DataSet<scala.Tuple2<K,VV>> |
Graph.reduceOnNeighbors(ReduceNeighborsFunction<VV> reduceNeighborsFunction,
EdgeDirection direction)
Computes a reduce transformation over the neighbors' vertex values of each vertex.
|
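A minimal sketch of the degree methods above in the Gelly Scala API (assumes Flink and flink-gelly-scala on the classpath; the edge data is made up):

```scala
import org.apache.flink.api.scala._
import org.apache.flink.graph.Edge
import org.apache.flink.graph.scala.Graph

object DegreesSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val edges = env.fromElements(
      new Edge[Long, Double](1L, 2L, 0.5),
      new Edge[Long, Double](1L, 3L, 1.5))
    val graph = Graph.fromDataSet(edges, env)

    // Each call returns a DataSet of (vertexId, LongValue) pairs.
    graph.inDegrees().print()
    graph.outDegrees().print()
    graph.getDegrees().print()
  }
}
```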
Modifier and Type | Method and Description |
---|---|
<K,EV> Graph<K,NullValue,EV> |
Graph$.fromDataSet(DataSet<Edge<K,EV>> edges,
ExecutionEnvironment env,
TypeInformation<K> evidence$7,
scala.reflect.ClassTag<K> evidence$8,
TypeInformation<EV> evidence$9,
scala.reflect.ClassTag<EV> evidence$10)
Creates a Graph from a DataSet of edges.
|
static <K,EV> Graph<K,NullValue,EV> |
Graph.fromDataSet(DataSet<Edge<K,EV>> edges,
ExecutionEnvironment env,
TypeInformation<K> evidence$7,
scala.reflect.ClassTag<K> evidence$8,
TypeInformation<EV> evidence$9,
scala.reflect.ClassTag<EV> evidence$10)
Creates a Graph from a DataSet of edges.
|
<K,VV,EV> Graph<K,VV,EV> |
Graph$.fromDataSet(DataSet<Edge<K,EV>> edges,
MapFunction<K,VV> vertexValueInitializer,
ExecutionEnvironment env,
TypeInformation<K> evidence$11,
scala.reflect.ClassTag<K> evidence$12,
TypeInformation<VV> evidence$13,
scala.reflect.ClassTag<VV> evidence$14,
TypeInformation<EV> evidence$15,
scala.reflect.ClassTag<EV> evidence$16)
Creates a graph from a DataSet of edges.
|
static <K,VV,EV> Graph<K,VV,EV> |
Graph.fromDataSet(DataSet<Edge<K,EV>> edges,
MapFunction<K,VV> vertexValueInitializer,
ExecutionEnvironment env,
TypeInformation<K> evidence$11,
scala.reflect.ClassTag<K> evidence$12,
TypeInformation<VV> evidence$13,
scala.reflect.ClassTag<VV> evidence$14,
TypeInformation<EV> evidence$15,
scala.reflect.ClassTag<EV> evidence$16)
Creates a graph from a DataSet of edges.
|
<K,VV,EV> Graph<K,VV,EV> |
Graph$.fromDataSet(DataSet<Vertex<K,VV>> vertices,
DataSet<Edge<K,EV>> edges,
ExecutionEnvironment env,
TypeInformation<K> evidence$1,
scala.reflect.ClassTag<K> evidence$2,
TypeInformation<VV> evidence$3,
scala.reflect.ClassTag<VV> evidence$4,
TypeInformation<EV> evidence$5,
scala.reflect.ClassTag<EV> evidence$6)
Creates a Graph from a DataSet of vertices and a DataSet of edges.
|
static <K,VV,EV> Graph<K,VV,EV> |
Graph.fromDataSet(DataSet<Vertex<K,VV>> vertices,
DataSet<Edge<K,EV>> edges,
ExecutionEnvironment env,
TypeInformation<K> evidence$1,
scala.reflect.ClassTag<K> evidence$2,
TypeInformation<VV> evidence$3,
scala.reflect.ClassTag<VV> evidence$4,
TypeInformation<EV> evidence$5,
scala.reflect.ClassTag<EV> evidence$6)
Creates a Graph from a DataSet of vertices and a DataSet of edges.
|
<K> Graph<K,NullValue,NullValue> |
Graph$.fromTuple2DataSet(DataSet<scala.Tuple2<K,K>> edges,
ExecutionEnvironment env,
TypeInformation<K> evidence$49,
scala.reflect.ClassTag<K> evidence$50)
Creates a Graph from a DataSet of Tuple2's representing the edges.
|
static <K> Graph<K,NullValue,NullValue> |
Graph.fromTuple2DataSet(DataSet<scala.Tuple2<K,K>> edges,
ExecutionEnvironment env,
TypeInformation<K> evidence$49,
scala.reflect.ClassTag<K> evidence$50)
Creates a Graph from a DataSet of Tuple2's representing the edges.
|
<K,VV> Graph<K,VV,NullValue> |
Graph$.fromTuple2DataSet(DataSet<scala.Tuple2<K,K>> edges,
MapFunction<K,VV> vertexValueInitializer,
ExecutionEnvironment env,
TypeInformation<K> evidence$51,
scala.reflect.ClassTag<K> evidence$52,
TypeInformation<VV> evidence$53,
scala.reflect.ClassTag<VV> evidence$54)
Creates a Graph from a DataSet of Tuple2's representing the edges.
|
static <K,VV> Graph<K,VV,NullValue> |
Graph.fromTuple2DataSet(DataSet<scala.Tuple2<K,K>> edges,
MapFunction<K,VV> vertexValueInitializer,
ExecutionEnvironment env,
TypeInformation<K> evidence$51,
scala.reflect.ClassTag<K> evidence$52,
TypeInformation<VV> evidence$53,
scala.reflect.ClassTag<VV> evidence$54)
Creates a Graph from a DataSet of Tuple2's representing the edges.
|
<K,VV,EV> Graph<K,VV,EV> |
Graph$.fromTupleDataSet(DataSet<scala.Tuple2<K,VV>> vertices,
DataSet<scala.Tuple3<K,K,EV>> edges,
ExecutionEnvironment env,
TypeInformation<K> evidence$33,
scala.reflect.ClassTag<K> evidence$34,
TypeInformation<VV> evidence$35,
scala.reflect.ClassTag<VV> evidence$36,
TypeInformation<EV> evidence$37,
scala.reflect.ClassTag<EV> evidence$38)
Creates a graph from DataSets of tuples for vertices and for edges.
|
static <K,VV,EV> Graph<K,VV,EV> |
Graph.fromTupleDataSet(DataSet<scala.Tuple2<K,VV>> vertices,
DataSet<scala.Tuple3<K,K,EV>> edges,
ExecutionEnvironment env,
TypeInformation<K> evidence$33,
scala.reflect.ClassTag<K> evidence$34,
TypeInformation<VV> evidence$35,
scala.reflect.ClassTag<VV> evidence$36,
TypeInformation<EV> evidence$37,
scala.reflect.ClassTag<EV> evidence$38)
Creates a graph from DataSets of tuples for vertices and for edges.
|
<K,EV> Graph<K,NullValue,EV> |
Graph$.fromTupleDataSet(DataSet<scala.Tuple3<K,K,EV>> edges,
ExecutionEnvironment env,
TypeInformation<K> evidence$39,
scala.reflect.ClassTag<K> evidence$40,
TypeInformation<EV> evidence$41,
scala.reflect.ClassTag<EV> evidence$42)
Creates a Graph from a DataSet of Tuples representing the edges.
|
static <K,EV> Graph<K,NullValue,EV> |
Graph.fromTupleDataSet(DataSet<scala.Tuple3<K,K,EV>> edges,
ExecutionEnvironment env,
TypeInformation<K> evidence$39,
scala.reflect.ClassTag<K> evidence$40,
TypeInformation<EV> evidence$41,
scala.reflect.ClassTag<EV> evidence$42)
Creates a Graph from a DataSet of Tuples representing the edges.
|
<K,VV,EV> Graph<K,VV,EV> |
Graph$.fromTupleDataSet(DataSet<scala.Tuple3<K,K,EV>> edges,
MapFunction<K,VV> vertexValueInitializer,
ExecutionEnvironment env,
TypeInformation<K> evidence$43,
scala.reflect.ClassTag<K> evidence$44,
TypeInformation<VV> evidence$45,
scala.reflect.ClassTag<VV> evidence$46,
TypeInformation<EV> evidence$47,
scala.reflect.ClassTag<EV> evidence$48)
Creates a Graph from a DataSet of Tuples representing the edges.
|
static <K,VV,EV> Graph<K,VV,EV> |
Graph.fromTupleDataSet(DataSet<scala.Tuple3<K,K,EV>> edges,
MapFunction<K,VV> vertexValueInitializer,
ExecutionEnvironment env,
TypeInformation<K> evidence$43,
scala.reflect.ClassTag<K> evidence$44,
TypeInformation<VV> evidence$45,
scala.reflect.ClassTag<VV> evidence$46,
TypeInformation<EV> evidence$47,
scala.reflect.ClassTag<EV> evidence$48)
Creates a Graph from a DataSet of Tuples representing the edges.
|
<T> Graph<K,VV,EV> |
Graph.joinWithEdges(DataSet<scala.Tuple3<K,K,T>> inputDataSet,
EdgeJoinFunction<EV,T> edgeJoinFunction,
TypeInformation<T> evidence$89)
Joins the edge DataSet with an input DataSet on the composite key of both
source and target IDs and applies a user-defined transformation on the values
of the matched records.
|
<T> Graph<K,VV,EV> |
Graph.joinWithEdges(DataSet<scala.Tuple3<K,K,T>> inputDataSet,
scala.Function2<EV,T,EV> fun,
TypeInformation<T> evidence$90)
Joins the edge DataSet with an input DataSet on the composite key of both
source and target IDs and applies a user-defined transformation on the values
of the matched records.
|
<T> Graph<K,VV,EV> |
Graph.joinWithEdgesOnSource(DataSet<scala.Tuple2<K,T>> inputDataSet,
EdgeJoinFunction<EV,T> edgeJoinFunction,
TypeInformation<T> evidence$91)
Joins the edge DataSet with an input Tuple2 DataSet and applies a user-defined transformation
on the values of the matched records.
|
<T> Graph<K,VV,EV> |
Graph.joinWithEdgesOnSource(DataSet<scala.Tuple2<K,T>> inputDataSet,
scala.Function2<EV,T,EV> fun,
TypeInformation<T> evidence$92)
Joins the edge DataSet with an input Tuple2 DataSet and applies a user-defined transformation
on the values of the matched records.
|
<T> Graph<K,VV,EV> |
Graph.joinWithEdgesOnTarget(DataSet<scala.Tuple2<K,T>> inputDataSet,
EdgeJoinFunction<EV,T> edgeJoinFunction,
TypeInformation<T> evidence$93)
Joins the edge DataSet with an input Tuple2 DataSet and applies a user-defined transformation
on the values of the matched records.
|
<T> Graph<K,VV,EV> |
Graph.joinWithEdgesOnTarget(DataSet<scala.Tuple2<K,T>> inputDataSet,
scala.Function2<EV,T,EV> fun,
TypeInformation<T> evidence$94)
Joins the edge DataSet with an input Tuple2 DataSet and applies a user-defined transformation
on the values of the matched records.
|
<T> Graph<K,VV,EV> |
Graph.joinWithVertices(DataSet<scala.Tuple2<K,T>> inputDataSet,
scala.Function2<VV,T,VV> fun,
TypeInformation<T> evidence$88)
Joins the vertex DataSet of this graph with an input Tuple2 DataSet and applies
a user-defined transformation on the values of the matched records.
|
<T> Graph<K,VV,EV> |
Graph.joinWithVertices(DataSet<scala.Tuple2<K,T>> inputDataSet,
VertexJoinFunction<VV,T> vertexJoinFunction,
TypeInformation<T> evidence$87)
Joins the vertex DataSet of this graph with an input Tuple2 DataSet and applies
a user-defined transformation on the values of the matched records.
|
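A hedged sketch of graph construction from tuple DataSets, as described by the factory methods above (assumes Flink and flink-gelly-scala on the classpath; the tuple data is made up):

```scala
import org.apache.flink.api.scala._
import org.apache.flink.graph.scala.Graph

object GraphConstructionSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // (source, target, edge value) tuples; vertices are inferred from
    // the edge endpoints, so the result is a Graph[Long, NullValue, Double].
    val edgeTuples = env.fromElements((1L, 2L, 0.5), (2L, 3L, 1.0))
    val graph = Graph.fromTupleDataSet(edgeTuples, env)

    graph.getVertices.print()
    graph.getEdgesAsTuple3().print()
  }
}
```

The `joinWith*` methods then update vertex or edge values by joining the respective DataSet with an external `(key, value)` DataSet on the vertex ID or the (source, target) composite key.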
Modifier and Type | Method and Description |
---|---|
static DataSet<LabeledVector> |
MLUtils.readLibSVM(ExecutionEnvironment env,
String filePath)
Reads a file in libSVM/SVMLight format and converts the data into a data set of
LabeledVector . |
DataSet<LabeledVector> |
MLUtils$.readLibSVM(ExecutionEnvironment env,
String filePath)
Reads a file in libSVM/SVMLight format and converts the data into a data set of
LabeledVector . |
Modifier and Type | Method and Description |
---|---|
static DataSink<String> |
MLUtils.writeLibSVM(String filePath,
DataSet<LabeledVector> labeledVectors)
Writes a
DataSet of LabeledVector to a file using the libSVM/SVMLight format. |
DataSink<String> |
MLUtils$.writeLibSVM(String filePath,
DataSet<LabeledVector> labeledVectors)
Writes a
DataSet of LabeledVector to a file using the libSVM/SVMLight format. |
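A minimal sketch of the libSVM read/write round trip above (assumes a FlinkML dependency; both file paths are placeholders):

```scala
import org.apache.flink.api.scala._
import org.apache.flink.ml.MLUtils

object LibSVMSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    // Reads a libSVM/SVMLight file into a DataSet[LabeledVector].
    val data = MLUtils.readLibSVM(env, "/path/to/train.libsvm")
    // Writes it back out in the same format.
    MLUtils.writeLibSVM("/path/to/out.libsvm", data)
    env.execute()
  }
}
```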
Modifier and Type | Method and Description |
---|---|
scala.Option<DataSet<DenseVector>> |
SVM.weightsOption()
Stores the learned weight vector after the fit operation.
|
Modifier and Type | Method and Description |
---|---|
static <T> DataSet<Block<T>> |
FlinkMLTools.block(DataSet<T> input,
int numBlocks,
scala.Option<Partitioner<Object>> partitionerOption,
TypeInformation<T> evidence$31,
scala.reflect.ClassTag<T> evidence$32)
Groups the DataSet input into numBlocks blocks.
|
<T> DataSet<Block<T>> |
FlinkMLTools$.block(DataSet<T> input,
int numBlocks,
scala.Option<Partitioner<Object>> partitionerOption,
TypeInformation<T> evidence$31,
scala.reflect.ClassTag<T> evidence$32)
Groups the DataSet input into numBlocks blocks.
|
static <T> DataSet<T> |
FlinkMLTools.persist(DataSet<T> dataset,
String path,
scala.reflect.ClassTag<T> evidence$1,
TypeInformation<T> evidence$2)
Writes a
DataSet to the specified path and returns it as a DataSource for subsequent
operations. |
<T> DataSet<T> |
FlinkMLTools$.persist(DataSet<T> dataset,
String path,
scala.reflect.ClassTag<T> evidence$1,
TypeInformation<T> evidence$2)
Writes a
DataSet to the specified path and returns it as a DataSource for subsequent
operations. |
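A hedged sketch of `persist` for a single DataSet (assumes a FlinkML dependency; the path is a placeholder):

```scala
import org.apache.flink.api.scala._
import org.apache.flink.ml.common.FlinkMLTools

object PersistSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val expensive = env.fromElements(1, 2, 3).map(_ * 2)

    // Materializes the DataSet at the given path and returns a DataSource
    // reading it back, so downstream operations no longer re-execute the
    // upstream plan.
    val cached = FlinkMLTools.persist(expensive, "/tmp/cache")
    cached.print()
  }
}
```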
Modifier and Type | Method and Description |
---|---|
static <A,B,C,D,E> scala.Tuple5<DataSet<A>,DataSet<B>,DataSet<C>,DataSet<D>,DataSet<E>> |
FlinkMLTools.persist(DataSet<A> ds1,
DataSet<B> ds2,
DataSet<C> ds3,
DataSet<D> ds4,
DataSet<E> ds5,
String path1,
String path2,
String path3,
String path4,
String path5,
scala.reflect.ClassTag<A> evidence$21,
TypeInformation<A> evidence$22,
scala.reflect.ClassTag<B> evidence$23,
TypeInformation<B> evidence$24,
scala.reflect.ClassTag<C> evidence$25,
TypeInformation<C> evidence$26,
scala.reflect.ClassTag<D> evidence$27,
TypeInformation<D> evidence$28,
scala.reflect.ClassTag<E> evidence$29,
TypeInformation<E> evidence$30)
Writes multiple
DataSet s to the specified paths and returns them as DataSources for
subsequent operations. |
<A,B,C,D,E> scala.Tuple5<DataSet<A>,DataSet<B>,DataSet<C>,DataSet<D>,DataSet<E>> |
FlinkMLTools$.persist(DataSet<A> ds1,
DataSet<B> ds2,
DataSet<C> ds3,
DataSet<D> ds4,
DataSet<E> ds5,
String path1,
String path2,
String path3,
String path4,
String path5,
scala.reflect.ClassTag<A> evidence$21,
TypeInformation<A> evidence$22,
scala.reflect.ClassTag<B> evidence$23,
TypeInformation<B> evidence$24,
scala.reflect.ClassTag<C> evidence$25,
TypeInformation<C> evidence$26,
scala.reflect.ClassTag<D> evidence$27,
TypeInformation<D> evidence$28,
scala.reflect.ClassTag<E> evidence$29,
TypeInformation<E> evidence$30)
Writes multiple
DataSet s to the specified paths and returns them as DataSources for
subsequent operations. |
static <A,B,C,D> scala.Tuple4<DataSet<A>,DataSet<B>,DataSet<C>,DataSet<D>> |
FlinkMLTools.persist(DataSet<A> ds1,
DataSet<B> ds2,
DataSet<C> ds3,
DataSet<D> ds4,
String path1,
String path2,
String path3,
String path4,
scala.reflect.ClassTag<A> evidence$13,
TypeInformation<A> evidence$14,
scala.reflect.ClassTag<B> evidence$15,
TypeInformation<B> evidence$16,
scala.reflect.ClassTag<C> evidence$17,
TypeInformation<C> evidence$18,
scala.reflect.ClassTag<D> evidence$19,
TypeInformation<D> evidence$20)
Writes multiple
DataSet s to the specified paths and returns them as DataSources for
subsequent operations. |
<A,B,C,D> scala.Tuple4<DataSet<A>,DataSet<B>,DataSet<C>,DataSet<D>> |
FlinkMLTools$.persist(DataSet<A> ds1,
DataSet<B> ds2,
DataSet<C> ds3,
DataSet<D> ds4,
String path1,
String path2,
String path3,
String path4,
scala.reflect.ClassTag<A> evidence$13,
TypeInformation<A> evidence$14,
scala.reflect.ClassTag<B> evidence$15,
TypeInformation<B> evidence$16,
scala.reflect.ClassTag<C> evidence$17,
TypeInformation<C> evidence$18,
scala.reflect.ClassTag<D> evidence$19,
TypeInformation<D> evidence$20)
Writes multiple
DataSet s to the specified paths and returns them as DataSources for
subsequent operations. |
static <A,B,C> scala.Tuple3<DataSet<A>,DataSet<B>,DataSet<C>> |
FlinkMLTools.persist(DataSet<A> ds1,
DataSet<B> ds2,
DataSet<C> ds3,
String path1,
String path2,
String path3,
scala.reflect.ClassTag<A> evidence$7,
TypeInformation<A> evidence$8,
scala.reflect.ClassTag<B> evidence$9,
TypeInformation<B> evidence$10,
scala.reflect.ClassTag<C> evidence$11,
TypeInformation<C> evidence$12)
Writes multiple DataSets to the specified paths and returns them as DataSources for subsequent operations. |
<A,B,C> scala.Tuple3<DataSet<A>,DataSet<B>,DataSet<C>> |
FlinkMLTools$.persist(DataSet<A> ds1,
DataSet<B> ds2,
DataSet<C> ds3,
String path1,
String path2,
String path3,
scala.reflect.ClassTag<A> evidence$7,
TypeInformation<A> evidence$8,
scala.reflect.ClassTag<B> evidence$9,
TypeInformation<B> evidence$10,
scala.reflect.ClassTag<C> evidence$11,
TypeInformation<C> evidence$12)
Writes multiple DataSets to the specified paths and returns them as DataSources for subsequent operations. |
static <A,B> scala.Tuple2<DataSet<A>,DataSet<B>> |
FlinkMLTools.persist(DataSet<A> ds1,
DataSet<B> ds2,
String path1,
String path2,
scala.reflect.ClassTag<A> evidence$3,
TypeInformation<A> evidence$4,
scala.reflect.ClassTag<B> evidence$5,
TypeInformation<B> evidence$6)
Writes multiple DataSets to the specified paths and returns them as DataSources for subsequent operations. |
<A,B> scala.Tuple2<DataSet<A>,DataSet<B>> |
FlinkMLTools$.persist(DataSet<A> ds1,
DataSet<B> ds2,
String path1,
String path2,
scala.reflect.ClassTag<A> evidence$3,
TypeInformation<A> evidence$4,
scala.reflect.ClassTag<B> evidence$5,
TypeInformation<B> evidence$6)
Writes multiple DataSets to the specified paths and returns them as DataSources for subsequent operations. |
Modifier and Type | Method and Description |
---|---|
static <T> DataSet<Block<T>> |
FlinkMLTools.block(DataSet<T> input,
int numBlocks,
scala.Option<Partitioner<Object>> partitionerOption,
TypeInformation<T> evidence$31,
scala.reflect.ClassTag<T> evidence$32)
Groups the DataSet input into numBlocks blocks.
|
<T> DataSet<Block<T>> |
FlinkMLTools$.block(DataSet<T> input,
int numBlocks,
scala.Option<Partitioner<Object>> partitionerOption,
TypeInformation<T> evidence$31,
scala.reflect.ClassTag<T> evidence$32)
Groups the DataSet input into numBlocks blocks.
|
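As a rough illustration (a sketch assuming the FlinkML 1.x Scala API and a local execution environment; the data is made up), `block` groups a DataSet into a fixed number of `Block`s, which blockwise algorithms such as KNN operate on:

```scala
import org.apache.flink.api.scala._
import org.apache.flink.ml.common.FlinkMLTools

object BlockExample {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    val input: DataSet[Int] = env.fromCollection(1 to 100)

    // Group the 100 elements into 4 Block[Int] instances; with no
    // partitioner given, elements are assigned to blocks by a default scheme.
    val blocked = FlinkMLTools.block(input, numBlocks = 4, partitionerOption = None)

    blocked.print()
  }
}
```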
static <A,B,C,D,E> |
FlinkMLTools.persist(DataSet<A> ds1,
DataSet<B> ds2,
DataSet<C> ds3,
DataSet<D> ds4,
DataSet<E> ds5,
String path1,
String path2,
String path3,
String path4,
String path5,
scala.reflect.ClassTag<A> evidence$21,
TypeInformation<A> evidence$22,
scala.reflect.ClassTag<B> evidence$23,
TypeInformation<B> evidence$24,
scala.reflect.ClassTag<C> evidence$25,
TypeInformation<C> evidence$26,
scala.reflect.ClassTag<D> evidence$27,
TypeInformation<D> evidence$28,
scala.reflect.ClassTag<E> evidence$29,
TypeInformation<E> evidence$30)
Writes multiple DataSets to the specified paths and returns them as DataSources for subsequent operations. |
<A,B,C,D,E> |
FlinkMLTools$.persist(DataSet<A> ds1,
DataSet<B> ds2,
DataSet<C> ds3,
DataSet<D> ds4,
DataSet<E> ds5,
String path1,
String path2,
String path3,
String path4,
String path5,
scala.reflect.ClassTag<A> evidence$21,
TypeInformation<A> evidence$22,
scala.reflect.ClassTag<B> evidence$23,
TypeInformation<B> evidence$24,
scala.reflect.ClassTag<C> evidence$25,
TypeInformation<C> evidence$26,
scala.reflect.ClassTag<D> evidence$27,
TypeInformation<D> evidence$28,
scala.reflect.ClassTag<E> evidence$29,
TypeInformation<E> evidence$30)
Writes multiple DataSets to the specified paths and returns them as DataSources for subsequent operations. |
static <A,B,C,D> scala.Tuple4<DataSet<A>,DataSet<B>,DataSet<C>,DataSet<D>> |
FlinkMLTools.persist(DataSet<A> ds1,
DataSet<B> ds2,
DataSet<C> ds3,
DataSet<D> ds4,
String path1,
String path2,
String path3,
String path4,
scala.reflect.ClassTag<A> evidence$13,
TypeInformation<A> evidence$14,
scala.reflect.ClassTag<B> evidence$15,
TypeInformation<B> evidence$16,
scala.reflect.ClassTag<C> evidence$17,
TypeInformation<C> evidence$18,
scala.reflect.ClassTag<D> evidence$19,
TypeInformation<D> evidence$20)
Writes multiple DataSets to the specified paths and returns them as DataSources for subsequent operations. |
<A,B,C,D> scala.Tuple4<DataSet<A>,DataSet<B>,DataSet<C>,DataSet<D>> |
FlinkMLTools$.persist(DataSet<A> ds1,
DataSet<B> ds2,
DataSet<C> ds3,
DataSet<D> ds4,
String path1,
String path2,
String path3,
String path4,
scala.reflect.ClassTag<A> evidence$13,
TypeInformation<A> evidence$14,
scala.reflect.ClassTag<B> evidence$15,
TypeInformation<B> evidence$16,
scala.reflect.ClassTag<C> evidence$17,
TypeInformation<C> evidence$18,
scala.reflect.ClassTag<D> evidence$19,
TypeInformation<D> evidence$20)
Writes multiple DataSets to the specified paths and returns them as DataSources for subsequent operations. |
static <A,B,C> scala.Tuple3<DataSet<A>,DataSet<B>,DataSet<C>> |
FlinkMLTools.persist(DataSet<A> ds1,
DataSet<B> ds2,
DataSet<C> ds3,
String path1,
String path2,
String path3,
scala.reflect.ClassTag<A> evidence$7,
TypeInformation<A> evidence$8,
scala.reflect.ClassTag<B> evidence$9,
TypeInformation<B> evidence$10,
scala.reflect.ClassTag<C> evidence$11,
TypeInformation<C> evidence$12)
Writes multiple DataSets to the specified paths and returns them as DataSources for subsequent operations. |
<A,B,C> scala.Tuple3<DataSet<A>,DataSet<B>,DataSet<C>> |
FlinkMLTools$.persist(DataSet<A> ds1,
DataSet<B> ds2,
DataSet<C> ds3,
String path1,
String path2,
String path3,
scala.reflect.ClassTag<A> evidence$7,
TypeInformation<A> evidence$8,
scala.reflect.ClassTag<B> evidence$9,
TypeInformation<B> evidence$10,
scala.reflect.ClassTag<C> evidence$11,
TypeInformation<C> evidence$12)
Writes multiple DataSets to the specified paths and returns them as DataSources for subsequent operations. |
static <A,B> scala.Tuple2<DataSet<A>,DataSet<B>> |
FlinkMLTools.persist(DataSet<A> ds1,
DataSet<B> ds2,
String path1,
String path2,
scala.reflect.ClassTag<A> evidence$3,
TypeInformation<A> evidence$4,
scala.reflect.ClassTag<B> evidence$5,
TypeInformation<B> evidence$6)
Writes multiple DataSets to the specified paths and returns them as DataSources for subsequent operations. |
<A,B> scala.Tuple2<DataSet<A>,DataSet<B>> |
FlinkMLTools$.persist(DataSet<A> ds1,
DataSet<B> ds2,
String path1,
String path2,
scala.reflect.ClassTag<A> evidence$3,
TypeInformation<A> evidence$4,
scala.reflect.ClassTag<B> evidence$5,
TypeInformation<B> evidence$6)
Writes multiple DataSets to the specified paths and returns them as DataSources for subsequent operations. |
static <T> DataSet<T> |
FlinkMLTools.persist(DataSet<T> dataset,
String path,
scala.reflect.ClassTag<T> evidence$1,
TypeInformation<T> evidence$2)
Writes a DataSet to the specified path and returns it as a DataSource for subsequent operations. |
<T> DataSet<T> |
FlinkMLTools$.persist(DataSet<T> dataset,
String path,
scala.reflect.ClassTag<T> evidence$1,
TypeInformation<T> evidence$2)
Writes a DataSet to the specified path and returns it as a DataSource for subsequent operations. |
Modifier and Type | Method and Description |
---|---|
DataSet<IndexedRow> |
DistributedRowMatrix.data() |
Modifier and Type | Method and Description |
---|---|
DistributedRowMatrix |
DistributedRowMatrix$.fromCOO(DataSet<scala.Tuple3<Object,Object,Object>> data,
int numRows,
int numCols,
boolean isSorted)
Builds a
DistributedRowMatrix from a DataSet in COO (coordinate) format. |
static DistributedRowMatrix |
DistributedRowMatrix.fromCOO(DataSet<scala.Tuple3<Object,Object,Object>> data,
int numRows,
int numCols,
boolean isSorted)
Builds a
DistributedRowMatrix from a DataSet in COO (coordinate) format. |
Constructor and Description |
---|
DistributedRowMatrix(DataSet<IndexedRow> data,
int numRows,
int numCols) |
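As a sketch (FlinkML 1.x Scala API assumed, entries made up), `fromCOO` builds the matrix from `(rowIndex, columnIndex, value)` coordinate entries:

```scala
import org.apache.flink.api.scala._
import org.apache.flink.ml.math.distributed.DistributedRowMatrix

object CooExample {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // (rowIndex, columnIndex, value) entries of a sparse 3x3 matrix
    val entries = env.fromElements((0, 1, 3.0), (1, 0, 5.0), (2, 2, 1.0))

    val matrix = DistributedRowMatrix.fromCOO(entries, numRows = 3, numCols = 3)

    // The matrix is stored as a DataSet of IndexedRow
    matrix.data.print()
  }
}
```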
Modifier and Type | Method and Description |
---|---|
scala.Option<DataSet<Block<Vector>>> |
KNN.trainingSet() |
Modifier and Type | Method and Description |
---|---|
DataSet<WeightVector> |
Solver.createInitialWeightsDS(scala.Option<DataSet<WeightVector>> initialWeights,
DataSet<LabeledVector> data)
Creates the initial weights vector, returning a DataSet with a single WeightVector element.
|
DataSet<WeightVector> |
Solver.createInitialWeightVector(DataSet<Object> dimensionDS)
Creates a DataSet with one zero vector.
|
abstract DataSet<WeightVector> |
Solver.optimize(DataSet<LabeledVector> data,
scala.Option<DataSet<WeightVector>> initialWeights)
Provides a solution for the given optimization problem
|
DataSet<WeightVector> |
GradientDescent.optimize(DataSet<LabeledVector> data,
scala.Option<DataSet<WeightVector>> initialWeights)
Provides a solution for the given optimization problem
|
DataSet<WeightVector> |
GradientDescent.optimizeWithConvergenceCriterion(DataSet<LabeledVector> dataPoints,
DataSet<WeightVector> initialWeightsDS,
int numberOfIterations,
double regularizationConstant,
double learningRate,
double convergenceThreshold,
LossFunction lossFunction,
LearningRateMethod.LearningRateMethodTrait learningRateMethod) |
DataSet<WeightVector> |
GradientDescent.optimizeWithoutConvergenceCriterion(DataSet<LabeledVector> data,
DataSet<WeightVector> initialWeightsDS,
int numberOfIterations,
double regularizationConstant,
double learningRate,
LossFunction lossFunction,
LearningRateMethod.LearningRateMethodTrait optimizationMethod) |
Modifier and Type | Method and Description |
---|---|
DataSet<WeightVector> |
Solver.createInitialWeightsDS(scala.Option<DataSet<WeightVector>> initialWeights,
DataSet<LabeledVector> data)
Creates the initial weights vector, returning a DataSet with a single WeightVector element.
|
abstract DataSet<WeightVector> |
Solver.optimize(DataSet<LabeledVector> data,
scala.Option<DataSet<WeightVector>> initialWeights)
Provides a solution for the given optimization problem
|
DataSet<WeightVector> |
GradientDescent.optimize(DataSet<LabeledVector> data,
scala.Option<DataSet<WeightVector>> initialWeights)
Provides a solution for the given optimization problem
|
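A hedged sketch of the `optimize` entry point (FlinkML 1.x Scala API assumed; setter names and loss-function composition may differ across versions; the data is made up):

```scala
import org.apache.flink.api.scala._
import org.apache.flink.ml.common.{LabeledVector, WeightVector}
import org.apache.flink.ml.math.DenseVector
import org.apache.flink.ml.optimization._

object SgdExample {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    val data: DataSet[LabeledVector] = env.fromElements(
      LabeledVector(1.0, DenseVector(1.0, 2.0)),
      LabeledVector(2.0, DenseVector(2.0, 4.0)))

    val sgd = GradientDescent()
      .setIterations(10)
      .setStepsize(0.1)
      .setLossFunction(GenericLossFunction(SquaredLoss, LinearPrediction))

    // No initial weights supplied: optimize falls back to a zero
    // WeightVector created via createInitialWeightsDS.
    val weights: DataSet[WeightVector] = sgd.optimize(data, None)
    weights.print()
  }
}
```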
Modifier and Type | Method and Description |
---|---|
<Testing,PredictionValue> |
Predictor.evaluate(DataSet<Testing> testing,
ParameterMap evaluateParameters,
EvaluateDataSetOperation<Self,Testing,PredictionValue> evaluator)
Evaluates the testing data by computing the prediction value and returning a pair of true
label value and prediction value.
|
DataSet<scala.Tuple2<Prediction,Prediction>> |
EvaluateDataSetOperation.evaluateDataSet(Instance instance,
ParameterMap evaluateParameters,
DataSet<Testing> testing) |
DataSet<Model> |
TransformOperation.getModel(Instance instance,
ParameterMap transformParemters)
Retrieves the model of the
Transformer for which this operation has been defined. |
DataSet<Model> |
PredictOperation.getModel(Instance instance,
ParameterMap predictParameters)
Defines how to retrieve the model of the type for which this operation was defined
|
<Testing,Prediction> |
Predictor.predict(DataSet<Testing> testing,
ParameterMap predictParameters,
PredictDataSetOperation<Self,Testing,Prediction> predictor)
Predicts testing data according to the learned model.
|
DataSet<Prediction> |
PredictDataSetOperation.predictDataSet(Self instance,
ParameterMap predictParameters,
DataSet<Testing> input)
Calculates the predictions for all elements in the
DataSet input |
<Input,Output> |
Transformer.transform(DataSet<Input> input,
ParameterMap transformParameters,
TransformDataSetOperation<Self,Input,Output> transformOperation)
Transform operation which transforms an input
DataSet of type I into an output DataSet
of type O. |
DataSet<Output> |
TransformDataSetOperation.transformDataSet(Instance instance,
ParameterMap transformParameters,
DataSet<Input> input) |
Modifier and Type | Method and Description |
---|---|
<Testing,PredictionValue> |
Predictor.evaluate(DataSet<Testing> testing,
ParameterMap evaluateParameters,
EvaluateDataSetOperation<Self,Testing,PredictionValue> evaluator)
Evaluates the testing data by computing the prediction value and returning a pair of true
label value and prediction value.
|
DataSet<scala.Tuple2<Prediction,Prediction>> |
EvaluateDataSetOperation.evaluateDataSet(Instance instance,
ParameterMap evaluateParameters,
DataSet<Testing> testing) |
<Training> void |
Estimator.fit(DataSet<Training> training,
ParameterMap fitParameters,
FitOperation<Self,Training> fitOperation)
Fits the estimator to the given input data.
|
void |
FitOperation.fit(Self instance,
ParameterMap fitParameters,
DataSet<Training> input) |
<Testing,Prediction> |
Predictor.predict(DataSet<Testing> testing,
ParameterMap predictParameters,
PredictDataSetOperation<Self,Testing,Prediction> predictor)
Predicts testing data according to the learned model.
|
DataSet<Prediction> |
PredictDataSetOperation.predictDataSet(Self instance,
ParameterMap predictParameters,
DataSet<Testing> input)
Calculates the predictions for all elements in the
DataSet input |
<Input,Output> |
Transformer.transform(DataSet<Input> input,
ParameterMap transformParameters,
TransformDataSetOperation<Self,Input,Output> transformOperation)
Transform operation which transforms an input
DataSet of type I into an output DataSet
of type O. |
DataSet<Output> |
TransformDataSetOperation.transformDataSet(Instance instance,
ParameterMap transformParameters,
DataSet<Input> input) |
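The `fit`/`predict`/`transform` entries above are the pipeline hooks; a minimal sketch of how they are typically used (FlinkML 1.x Scala API assumed; `StandardScaler` and `MultipleLinearRegression` stand in for any Transformer and Predictor, and the data is made up):

```scala
import org.apache.flink.api.scala._
import org.apache.flink.ml.common.LabeledVector
import org.apache.flink.ml.math.DenseVector
import org.apache.flink.ml.preprocessing.StandardScaler
import org.apache.flink.ml.regression.MultipleLinearRegression

object PipelineExample {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    val training = env.fromElements(
      LabeledVector(1.0, DenseVector(1.0)),
      LabeledVector(2.0, DenseVector(2.0)))

    // Chain a Transformer with a Predictor; fit and predict dispatch to
    // the FitOperation / PredictDataSetOperation implicits listed above.
    val pipeline = StandardScaler().chainPredictor(MultipleLinearRegression())

    pipeline.fit(training)
    val predictions = pipeline.predict(training.map(_.vector))
    predictions.print()
  }
}
```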
Modifier and Type | Method and Description |
---|---|
DataSet<scala.Tuple2<breeze.linalg.Vector<Object>,breeze.linalg.Vector<Object>>> |
StandardScaler.StandardScalerTransformOperation.getModel(StandardScaler instance,
ParameterMap transformParameters) |
DataSet<T> |
Splitter.TrainTestHoldoutDataSet.holdout() |
<T> DataSet<T>[] |
Splitter$.multiRandomSplit(DataSet<T> input,
double[] fracArray,
long seed,
TypeInformation<T> evidence$7,
scala.reflect.ClassTag<T> evidence$8)
Split a DataSet by the probability fraction of each element of a vector.
|
static <T> DataSet<T>[] |
Splitter.multiRandomSplit(DataSet<T> input,
double[] fracArray,
long seed,
TypeInformation<T> evidence$7,
scala.reflect.ClassTag<T> evidence$8)
Split a DataSet by the probability fraction of each element of a vector.
|
<T> DataSet<T>[] |
Splitter$.randomSplit(DataSet<T> input,
double fraction,
boolean precise,
long seed,
TypeInformation<T> evidence$5,
scala.reflect.ClassTag<T> evidence$6)
Split a DataSet by the probability fraction of each element.
|
static <T> DataSet<T>[] |
Splitter.randomSplit(DataSet<T> input,
double fraction,
boolean precise,
long seed,
TypeInformation<T> evidence$5,
scala.reflect.ClassTag<T> evidence$6)
Split a DataSet by the probability fraction of each element.
|
DataSet<T> |
Splitter.TrainTestDataSet.testing() |
DataSet<T> |
Splitter.TrainTestHoldoutDataSet.testing() |
DataSet<T> |
Splitter.TrainTestDataSet.training() |
DataSet<T> |
Splitter.TrainTestHoldoutDataSet.training() |
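The `training()`/`testing()` accessors above belong to the result of a split; as a sketch (FlinkML 1.x Scala API assumed, data made up), `trainTestSplit` wraps `randomSplit` and returns such a `TrainTestDataSet`:

```scala
import org.apache.flink.api.scala._
import org.apache.flink.ml.preprocessing.Splitter

object SplitExample {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val input: DataSet[Int] = env.fromCollection(1 to 100)

    // Roughly 60/40 train/test split; with precise = false each element
    // is sampled independently with probability 0.6.
    val dataset = Splitter.trainTestSplit(input, fraction = 0.6, precise = false)

    dataset.training.print()
    dataset.testing.print()
  }
}
```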
Modifier and Type | Method and Description |
---|---|
scala.Option<DataSet<scala.Tuple2<breeze.linalg.Vector<Object>,breeze.linalg.Vector<Object>>>> |
MinMaxScaler.metricsOption() |
scala.Option<DataSet<scala.Tuple2<breeze.linalg.Vector<Object>,breeze.linalg.Vector<Object>>>> |
StandardScaler.metricsOption() |
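`metricsOption` holds the per-feature statistics (mean and standard deviation as Breeze vectors) computed during `fit`. A minimal sketch of the usual scaler pipeline, with hypothetical data:

```scala
import org.apache.flink.api.scala._
import org.apache.flink.ml.math.DenseVector
import org.apache.flink.ml.preprocessing.StandardScaler

val env = ExecutionEnvironment.getExecutionEnvironment
// Hypothetical feature vectors.
val features = env.fromElements(DenseVector(1.0, 2.0), DenseVector(3.0, 4.0))

val scaler = StandardScaler().setMean(0.0).setStd(1.0)
scaler.fit(features)           // populates metricsOption
val scaled = scaler.transform(features)
```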
Modifier and Type | Method and Description |
---|---|
<T> Splitter.TrainTestDataSet<T>[] |
Splitter$.kFoldSplit(DataSet<T> input,
int kFolds,
long seed,
TypeInformation<T> evidence$9,
scala.reflect.ClassTag<T> evidence$10)
Splits a DataSet into an array of TrainTestDataSets.
|
static <T> Splitter.TrainTestDataSet<T>[] |
Splitter.kFoldSplit(DataSet<T> input,
int kFolds,
long seed,
TypeInformation<T> evidence$9,
scala.reflect.ClassTag<T> evidence$10)
Splits a DataSet into an array of TrainTestDataSets.
|
<T> DataSet<T>[] |
Splitter$.multiRandomSplit(DataSet<T> input,
double[] fracArray,
long seed,
TypeInformation<T> evidence$7,
scala.reflect.ClassTag<T> evidence$8)
Splits a DataSet into multiple DataSets according to the given vector of probability fractions.
|
static <T> DataSet<T>[] |
Splitter.multiRandomSplit(DataSet<T> input,
double[] fracArray,
long seed,
TypeInformation<T> evidence$7,
scala.reflect.ClassTag<T> evidence$8)
Splits a DataSet into multiple DataSets according to the given vector of probability fractions.
|
<T> DataSet<T>[] |
Splitter$.randomSplit(DataSet<T> input,
double fraction,
boolean precise,
long seed,
TypeInformation<T> evidence$5,
scala.reflect.ClassTag<T> evidence$6)
Splits a DataSet into two DataSets according to the given probability fraction.
|
static <T> DataSet<T>[] |
Splitter.randomSplit(DataSet<T> input,
double fraction,
boolean precise,
long seed,
TypeInformation<T> evidence$5,
scala.reflect.ClassTag<T> evidence$6)
Splits a DataSet into two DataSets according to the given probability fraction.
|
<T> Splitter.TrainTestHoldoutDataSet<T> |
Splitter$.trainTestHoldoutSplit(DataSet<T> input,
scala.Tuple3<Object,Object,Object> fracTuple,
long seed,
TypeInformation<T> evidence$13,
scala.reflect.ClassTag<T> evidence$14)
A wrapper for multiRandomSplit that yields a TrainTestHoldoutDataSet.
|
static <T> Splitter.TrainTestHoldoutDataSet<T> |
Splitter.trainTestHoldoutSplit(DataSet<T> input,
scala.Tuple3<Object,Object,Object> fracTuple,
long seed,
TypeInformation<T> evidence$13,
scala.reflect.ClassTag<T> evidence$14)
A wrapper for multiRandomSplit that yields a TrainTestHoldoutDataSet.
|
<T> Splitter.TrainTestDataSet<T> |
Splitter$.trainTestSplit(DataSet<T> input,
double fraction,
boolean precise,
long seed,
TypeInformation<T> evidence$11,
scala.reflect.ClassTag<T> evidence$12)
A wrapper for randomSplit that yields a TrainTestDataSet.
|
static <T> Splitter.TrainTestDataSet<T> |
Splitter.trainTestSplit(DataSet<T> input,
double fraction,
boolean precise,
long seed,
TypeInformation<T> evidence$11,
scala.reflect.ClassTag<T> evidence$12)
A wrapper for randomSplit that yields a TrainTestDataSet.
|
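The wrapper methods above can be sketched as follows; the environment setup, `data`, and parameter values are hypothetical, and the implicit evidence parameters are again supplied by the compiler:

```scala
import org.apache.flink.api.scala._
import org.apache.flink.ml.preprocessing.Splitter

val env = ExecutionEnvironment.getExecutionEnvironment
val data: DataSet[Double] = env.fromCollection((1 to 100).map(_.toDouble))

// trainTestSplit wraps randomSplit in a TrainTestDataSet.
val tt = Splitter.trainTestSplit(data, 0.75, precise = false, seed = 11L)
val (train, test) = (tt.training, tt.testing)

// trainTestHoldoutSplit wraps multiRandomSplit; the tuple gives the fractions.
val tth = Splitter.trainTestHoldoutSplit(data, (0.6, 0.2, 0.2), seed = 11L)
val holdout = tth.holdout

// kFoldSplit yields one TrainTestDataSet per fold.
val folds = Splitter.kFoldSplit(data, 5, seed = 11L)
```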
Constructor and Description |
---|
TrainTestDataSet(DataSet<T> training,
DataSet<T> testing,
TypeInformation<T> evidence$1,
scala.reflect.ClassTag<T> evidence$2) |
TrainTestDataSet(DataSet<T> training,
DataSet<T> testing,
TypeInformation<T> evidence$1,
scala.reflect.ClassTag<T> evidence$2) |
TrainTestHoldoutDataSet(DataSet<T> training,
DataSet<T> testing,
DataSet<T> holdout,
TypeInformation<T> evidence$3,
scala.reflect.ClassTag<T> evidence$4) |
TrainTestHoldoutDataSet(DataSet<T> training,
DataSet<T> testing,
DataSet<T> holdout,
TypeInformation<T> evidence$3,
scala.reflect.ClassTag<T> evidence$4) |
TrainTestHoldoutDataSet(DataSet<T> training,
DataSet<T> testing,
DataSet<T> holdout,
TypeInformation<T> evidence$3,
scala.reflect.ClassTag<T> evidence$4) |
Modifier and Type | Method and Description |
---|---|
static DataSet<scala.Tuple2<Object,ALS.InBlockInformation>> |
ALS.createInBlockInformation(DataSet<scala.Tuple2<Object,ALS.Rating>> ratings,
DataSet<scala.Tuple2<Object,int[]>> usersPerBlock,
ALS.BlockIDGenerator blockIDGenerator)
Creates the incoming block information
|
DataSet<scala.Tuple2<Object,ALS.InBlockInformation>> |
ALS$.createInBlockInformation(DataSet<scala.Tuple2<Object,ALS.Rating>> ratings,
DataSet<scala.Tuple2<Object,int[]>> usersPerBlock,
ALS.BlockIDGenerator blockIDGenerator)
Creates the incoming block information
|
static DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>> |
ALS.createOutBlockInformation(DataSet<scala.Tuple2<Object,ALS.Rating>> ratings,
DataSet<scala.Tuple2<Object,int[]>> usersPerBlock,
int itemBlocks,
ALS.BlockIDGenerator blockIDGenerator)
Creates the outgoing block information
|
DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>> |
ALS$.createOutBlockInformation(DataSet<scala.Tuple2<Object,ALS.Rating>> ratings,
DataSet<scala.Tuple2<Object,int[]>> usersPerBlock,
int itemBlocks,
ALS.BlockIDGenerator blockIDGenerator)
Creates the outgoing block information
|
static DataSet<scala.Tuple2<Object,int[]>> |
ALS.createUsersPerBlock(DataSet<scala.Tuple2<Object,ALS.Rating>> ratings)
Calculates the user IDs of each user block in ascending order.
|
DataSet<scala.Tuple2<Object,int[]>> |
ALS$.createUsersPerBlock(DataSet<scala.Tuple2<Object,ALS.Rating>> ratings)
Calculates the user IDs of each user block in ascending order.
|
DataSet<Object> |
ALS.empiricalRisk(DataSet<scala.Tuple3<Object,Object,Object>> labeledData,
ParameterMap riskParameters)
Empirical risk of the trained model (matrix factorization).
|
static DataSet<ALS.Factors> |
ALS.generateRandomMatrix(DataSet<Object> users,
int factors,
long seed) |
DataSet<ALS.Factors> |
ALS$.generateRandomMatrix(DataSet<Object> users,
int factors,
long seed) |
DataSet<ALS.Factors> |
ALS.Factorization.itemFactors() |
DataSet<scala.Tuple2<Object,double[][]>> |
ALS.BlockedFactorization.itemFactors() |
static DataSet<ALS.Factors> |
ALS.unblock(DataSet<scala.Tuple2<Object,double[][]>> users,
DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>> outInfo,
ALS.BlockIDPartitioner blockIDPartitioner)
Unblocks the blocked user and item matrix representation so that it is a DataSet of
column vectors.
|
DataSet<ALS.Factors> |
ALS$.unblock(DataSet<scala.Tuple2<Object,double[][]>> users,
DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>> outInfo,
ALS.BlockIDPartitioner blockIDPartitioner)
Unblocks the blocked user and item matrix representation so that it is a DataSet of
column vectors.
|
static DataSet<scala.Tuple2<Object,double[][]>> |
ALS.updateFactors(int numUserBlocks,
DataSet<scala.Tuple2<Object,double[][]>> items,
DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>> itemOut,
DataSet<scala.Tuple2<Object,ALS.InBlockInformation>> userIn,
int factors,
double lambda,
Partitioner<Object> blockIDPartitioner)
Calculates a single half step of the ALS optimization.
|
DataSet<scala.Tuple2<Object,double[][]>> |
ALS$.updateFactors(int numUserBlocks,
DataSet<scala.Tuple2<Object,double[][]>> items,
DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>> itemOut,
DataSet<scala.Tuple2<Object,ALS.InBlockInformation>> userIn,
int factors,
double lambda,
Partitioner<Object> blockIDPartitioner)
Calculates a single half step of the ALS optimization.
|
DataSet<ALS.Factors> |
ALS.Factorization.userFactors() |
DataSet<scala.Tuple2<Object,double[][]>> |
ALS.BlockedFactorization.userFactors() |
Modifier and Type | Method and Description |
---|---|
static scala.Tuple2<DataSet<scala.Tuple2<Object,ALS.InBlockInformation>>,DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>>> |
ALS.createBlockInformation(int userBlocks,
int itemBlocks,
DataSet<scala.Tuple2<Object,ALS.Rating>> ratings,
ALS.BlockIDPartitioner blockIDPartitioner)
Creates the meta information needed to route the item and user vectors to the respective user
and item blocks.
|
static scala.Tuple2<DataSet<scala.Tuple2<Object,ALS.InBlockInformation>>,DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>>> |
ALS.createBlockInformation(int userBlocks,
int itemBlocks,
DataSet<scala.Tuple2<Object,ALS.Rating>> ratings,
ALS.BlockIDPartitioner blockIDPartitioner)
Creates the meta information needed to route the item and user vectors to the respective user
and item blocks.
|
scala.Tuple2<DataSet<scala.Tuple2<Object,ALS.InBlockInformation>>,DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>>> |
ALS$.createBlockInformation(int userBlocks,
int itemBlocks,
DataSet<scala.Tuple2<Object,ALS.Rating>> ratings,
ALS.BlockIDPartitioner blockIDPartitioner)
Creates the meta information needed to route the item and user vectors to the respective user
and item blocks.
|
scala.Tuple2<DataSet<scala.Tuple2<Object,ALS.InBlockInformation>>,DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>>> |
ALS$.createBlockInformation(int userBlocks,
int itemBlocks,
DataSet<scala.Tuple2<Object,ALS.Rating>> ratings,
ALS.BlockIDPartitioner blockIDPartitioner)
Creates the meta information needed to route the item and user vectors to the respective user
and item blocks.
|
scala.Option<scala.Tuple2<DataSet<ALS.Factors>,DataSet<ALS.Factors>>> |
ALS.factorsOption() |
scala.Option<scala.Tuple2<DataSet<ALS.Factors>,DataSet<ALS.Factors>>> |
ALS.factorsOption() |
Modifier and Type | Method and Description |
---|---|
static scala.Tuple2<DataSet<scala.Tuple2<Object,ALS.InBlockInformation>>,DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>>> |
ALS.createBlockInformation(int userBlocks,
int itemBlocks,
DataSet<scala.Tuple2<Object,ALS.Rating>> ratings,
ALS.BlockIDPartitioner blockIDPartitioner)
Creates the meta information needed to route the item and user vectors to the respective user
and item blocks.
|
scala.Tuple2<DataSet<scala.Tuple2<Object,ALS.InBlockInformation>>,DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>>> |
ALS$.createBlockInformation(int userBlocks,
int itemBlocks,
DataSet<scala.Tuple2<Object,ALS.Rating>> ratings,
ALS.BlockIDPartitioner blockIDPartitioner)
Creates the meta information needed to route the item and user vectors to the respective user
and item blocks.
|
static DataSet<scala.Tuple2<Object,ALS.InBlockInformation>> |
ALS.createInBlockInformation(DataSet<scala.Tuple2<Object,ALS.Rating>> ratings,
DataSet<scala.Tuple2<Object,int[]>> usersPerBlock,
ALS.BlockIDGenerator blockIDGenerator)
Creates the incoming block information
|
static DataSet<scala.Tuple2<Object,ALS.InBlockInformation>> |
ALS.createInBlockInformation(DataSet<scala.Tuple2<Object,ALS.Rating>> ratings,
DataSet<scala.Tuple2<Object,int[]>> usersPerBlock,
ALS.BlockIDGenerator blockIDGenerator)
Creates the incoming block information
|
DataSet<scala.Tuple2<Object,ALS.InBlockInformation>> |
ALS$.createInBlockInformation(DataSet<scala.Tuple2<Object,ALS.Rating>> ratings,
DataSet<scala.Tuple2<Object,int[]>> usersPerBlock,
ALS.BlockIDGenerator blockIDGenerator)
Creates the incoming block information
|
DataSet<scala.Tuple2<Object,ALS.InBlockInformation>> |
ALS$.createInBlockInformation(DataSet<scala.Tuple2<Object,ALS.Rating>> ratings,
DataSet<scala.Tuple2<Object,int[]>> usersPerBlock,
ALS.BlockIDGenerator blockIDGenerator)
Creates the incoming block information
|
static DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>> |
ALS.createOutBlockInformation(DataSet<scala.Tuple2<Object,ALS.Rating>> ratings,
DataSet<scala.Tuple2<Object,int[]>> usersPerBlock,
int itemBlocks,
ALS.BlockIDGenerator blockIDGenerator)
Creates the outgoing block information
|
static DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>> |
ALS.createOutBlockInformation(DataSet<scala.Tuple2<Object,ALS.Rating>> ratings,
DataSet<scala.Tuple2<Object,int[]>> usersPerBlock,
int itemBlocks,
ALS.BlockIDGenerator blockIDGenerator)
Creates the outgoing block information
|
DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>> |
ALS$.createOutBlockInformation(DataSet<scala.Tuple2<Object,ALS.Rating>> ratings,
DataSet<scala.Tuple2<Object,int[]>> usersPerBlock,
int itemBlocks,
ALS.BlockIDGenerator blockIDGenerator)
Creates the outgoing block information
|
DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>> |
ALS$.createOutBlockInformation(DataSet<scala.Tuple2<Object,ALS.Rating>> ratings,
DataSet<scala.Tuple2<Object,int[]>> usersPerBlock,
int itemBlocks,
ALS.BlockIDGenerator blockIDGenerator)
Creates the outgoing block information
|
static DataSet<scala.Tuple2<Object,int[]>> |
ALS.createUsersPerBlock(DataSet<scala.Tuple2<Object,ALS.Rating>> ratings)
Calculates the user IDs of each user block in ascending order.
|
DataSet<scala.Tuple2<Object,int[]>> |
ALS$.createUsersPerBlock(DataSet<scala.Tuple2<Object,ALS.Rating>> ratings)
Calculates the user IDs of each user block in ascending order.
|
DataSet<Object> |
ALS.empiricalRisk(DataSet<scala.Tuple3<Object,Object,Object>> labeledData,
ParameterMap riskParameters)
Empirical risk of the trained model (matrix factorization).
|
static DataSet<ALS.Factors> |
ALS.generateRandomMatrix(DataSet<Object> users,
int factors,
long seed) |
DataSet<ALS.Factors> |
ALS$.generateRandomMatrix(DataSet<Object> users,
int factors,
long seed) |
static DataSet<ALS.Factors> |
ALS.unblock(DataSet<scala.Tuple2<Object,double[][]>> users,
DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>> outInfo,
ALS.BlockIDPartitioner blockIDPartitioner)
Unblocks the blocked user and item matrix representation so that it is a DataSet of
column vectors.
|
static DataSet<ALS.Factors> |
ALS.unblock(DataSet<scala.Tuple2<Object,double[][]>> users,
DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>> outInfo,
ALS.BlockIDPartitioner blockIDPartitioner)
Unblocks the blocked user and item matrix representation so that it is a DataSet of
column vectors.
|
DataSet<ALS.Factors> |
ALS$.unblock(DataSet<scala.Tuple2<Object,double[][]>> users,
DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>> outInfo,
ALS.BlockIDPartitioner blockIDPartitioner)
Unblocks the blocked user and item matrix representation so that it is a DataSet of
column vectors.
|
DataSet<ALS.Factors> |
ALS$.unblock(DataSet<scala.Tuple2<Object,double[][]>> users,
DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>> outInfo,
ALS.BlockIDPartitioner blockIDPartitioner)
Unblocks the blocked user and item matrix representation so that it is a DataSet of
column vectors.
|
static DataSet<scala.Tuple2<Object,double[][]>> |
ALS.updateFactors(int numUserBlocks,
DataSet<scala.Tuple2<Object,double[][]>> items,
DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>> itemOut,
DataSet<scala.Tuple2<Object,ALS.InBlockInformation>> userIn,
int factors,
double lambda,
Partitioner<Object> blockIDPartitioner)
Calculates a single half step of the ALS optimization.
|
static DataSet<scala.Tuple2<Object,double[][]>> |
ALS.updateFactors(int numUserBlocks,
DataSet<scala.Tuple2<Object,double[][]>> items,
DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>> itemOut,
DataSet<scala.Tuple2<Object,ALS.InBlockInformation>> userIn,
int factors,
double lambda,
Partitioner<Object> blockIDPartitioner)
Calculates a single half step of the ALS optimization.
|
static DataSet<scala.Tuple2<Object,double[][]>> |
ALS.updateFactors(int numUserBlocks,
DataSet<scala.Tuple2<Object,double[][]>> items,
DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>> itemOut,
DataSet<scala.Tuple2<Object,ALS.InBlockInformation>> userIn,
int factors,
double lambda,
Partitioner<Object> blockIDPartitioner)
Calculates a single half step of the ALS optimization.
|
DataSet<scala.Tuple2<Object,double[][]>> |
ALS$.updateFactors(int numUserBlocks,
DataSet<scala.Tuple2<Object,double[][]>> items,
DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>> itemOut,
DataSet<scala.Tuple2<Object,ALS.InBlockInformation>> userIn,
int factors,
double lambda,
Partitioner<Object> blockIDPartitioner)
Calculates a single half step of the ALS optimization.
|
DataSet<scala.Tuple2<Object,double[][]>> |
ALS$.updateFactors(int numUserBlocks,
DataSet<scala.Tuple2<Object,double[][]>> items,
DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>> itemOut,
DataSet<scala.Tuple2<Object,ALS.InBlockInformation>> userIn,
int factors,
double lambda,
Partitioner<Object> blockIDPartitioner)
Calculates a single half step of the ALS optimization.
|
DataSet<scala.Tuple2<Object,double[][]>> |
ALS$.updateFactors(int numUserBlocks,
DataSet<scala.Tuple2<Object,double[][]>> items,
DataSet<scala.Tuple2<Object,ALS.OutBlockInformation>> itemOut,
DataSet<scala.Tuple2<Object,ALS.InBlockInformation>> userIn,
int factors,
double lambda,
Partitioner<Object> blockIDPartitioner)
Calculates a single half step of the ALS optimization.
|
Constructor and Description |
---|
BlockedFactorization(DataSet<scala.Tuple2<Object,double[][]>> userFactors,
DataSet<scala.Tuple2<Object,double[][]>> itemFactors) |
BlockedFactorization(DataSet<scala.Tuple2<Object,double[][]>> userFactors,
DataSet<scala.Tuple2<Object,double[][]>> itemFactors) |
Factorization(DataSet<ALS.Factors> userFactors,
DataSet<ALS.Factors> itemFactors) |
Factorization(DataSet<ALS.Factors> userFactors,
DataSet<ALS.Factors> itemFactors) |
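The blocking and factor-update methods above are internal machinery of ALS; users typically go through the fit/predict pipeline instead. A minimal sketch, with hypothetical ratings data and parameter values:

```scala
import org.apache.flink.api.scala._
import org.apache.flink.ml.recommendation.ALS

val env = ExecutionEnvironment.getExecutionEnvironment
// Hypothetical (userID, itemID, rating) triples.
val ratings: DataSet[(Int, Int, Double)] =
  env.fromElements((1, 1, 5.0), (1, 2, 1.0), (2, 1, 4.0))

val als = ALS()
  .setNumFactors(10)
  .setIterations(10)
  .setLambda(0.1)

als.fit(ratings)  // trains the user/item factors held in factorsOption
val predictions = als.predict(ratings.map(r => (r._1, r._2)))
```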
Modifier and Type | Method and Description |
---|---|
DataSet<Object> |
MultipleLinearRegression.squaredResidualSum(DataSet<LabeledVector> input) |
Modifier and Type | Method and Description |
---|---|
scala.Option<DataSet<WeightVector>> |
MultipleLinearRegression.weightsOption() |
Modifier and Type | Method and Description |
---|---|
DataSet<Object> |
MultipleLinearRegression.squaredResidualSum(DataSet<LabeledVector> input) |
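`squaredResidualSum` evaluates a fitted model on labeled data. A minimal sketch, with hypothetical training data and parameter values:

```scala
import org.apache.flink.api.scala._
import org.apache.flink.ml.common.LabeledVector
import org.apache.flink.ml.math.DenseVector
import org.apache.flink.ml.regression.MultipleLinearRegression

val env = ExecutionEnvironment.getExecutionEnvironment
// Hypothetical training data: label plus feature vector.
val training = env.fromElements(
  LabeledVector(1.0, DenseVector(1.0, 2.0)),
  LabeledVector(2.0, DenseVector(2.0, 4.0)))

val mlr = MultipleLinearRegression()
  .setIterations(10)
  .setStepsize(0.5)

mlr.fit(training)  // populates weightsOption
// Sum of squared residuals of the model over the input.
val srs: DataSet[Double] = mlr.squaredResidualSum(training)
```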
Copyright © 2014–2017 The Apache Software Foundation. All rights reserved.