The aggregate function in Spark
When I got to Spark's RDD actions, one function cost me quite a bit of head-scratching: the aggregate function, which, as its name says, aggregates values.
Let's first look at how the official Spark documentation describes this function:
Aggregate the elements of each partition, and then the results for all the partitions, using given combine functions and a neutral “zero value”. This function can return a different result type, U, than the type of this RDD, T. Thus, we need one operation for merging a T into an U and one operation for merging two U’s, as in scala.TraversableOnce. Both of these functions are allowed to modify and return their first argument instead of creating a new U to avoid memory allocation.
zeroValue: the initial value for the accumulated result of each partition for the seqOp operator, and also the initial value for the combine results from different partitions for the combOp operator - this will typically be the neutral element (e.g. Nil for list concatenation or 0 for summation)
seqOp: an operator used to accumulate results within a partition
combOp: an associative operator used to combine results from different partitions
In other words: aggregate first aggregates the elements within each partition, then combines the per-partition results, using the given combine functions and an initial "zeroValue". It can return a result of a type U different from the RDD's element type T, so we need one operation that merges a T into a U and another that merges two U's, as in scala.TraversableOnce. Both functions are allowed to modify and return their first argument instead of creating a new U, which avoids memory allocation.
zeroValue: the initial value, used both as the seed for each partition's accumulation by seqOp and as the seed for combining partition results with combOp
seqOp: accumulates results within a single partition
combOp: combines the results from different partitions
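For reference, this is how the method is declared in the Scala RDD API, where T is the RDD's element type and U the accumulator type:

```scala
def aggregate[U: ClassTag](zeroValue: U)(seqOp: (U, T) => U, combOp: (U, U) => U): U
```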
The prose description is not very intuitive, so let's work through an example: computing the average of a dataset.
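A minimal Scala sketch of this example, assuming an already-initialized SparkContext named sc:

```scala
// Distribute the numbers 1 to 4 across partitions as an RDD.
val input = sc.parallelize(List(1, 2, 3, 4))

// The accumulator is a (sum, count) pair, starting from the zeroValue (0, 0).
val result = input.aggregate((0, 0))(
  // seqOp: fold one element into a partition-local (sum, count)
  (acc, value) => (acc._1 + value, acc._2 + 1),
  // combOp: merge the (sum, count) pairs from different partitions
  (acc1, acc2) => (acc1._1 + acc2._1, acc1._2 + acc2._2)
)
```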
First we define an RDD called input containing the four numbers 1 through 4; result then records the outcome of running aggregate on input.
The initial value passed to aggregate is (0, 0): in this example the first component accumulates the running sum and the second the element count.
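To finish the example: if the data is split across two partitions, say [1, 2] and [3, 4], seqOp folds them into (3, 2) and (7, 2), and combOp, seeded with (0, 0) again, merges these into (10, 4). The average then follows:

```scala
// result == (10, 4): the sum of 1..4 and the number of elements
val avg = result._1.toDouble / result._2
println(avg) // 2.5
```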
Author: halface
Last updated: 2020-05-14