1. What is an RDD?
A Resilient Distributed Dataset (RDD), the basic abstraction in Spark. Represents an immutable, partitioned collection of elements that can be operated on in parallel.
An RDD is a resilient, distributed dataset and the basic abstraction in Spark. It is immutable and consists of multiple partitions (which may be spread across several machines and stored in memory, on disk, and so on), and it can be operated on in parallel.
Resilient: fault-tolerant during distributed computation.
Immutable: once created, it cannot be changed.
The RDD source code is as follows (https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/rdd/RDD.scala):
abstract class RDD[T: ClassTag](
    @transient private var _sc: SparkContext,
    @transient private var deps: Seq[Dependency[_]]
  ) extends Serializable with Logging {
  .....
}
Interpretation:
1) Abstract class: an RDD cannot be used directly; it has to be implemented by subclasses, and in practice you simply work with those subclasses.
2) Serializable: in a distributed computing framework, how well the serialization framework performs directly affects the performance of the framework as a whole.
3) Logging: used for log output. Since Spark 2.0 the Logging trait is no longer exposed for user code, so you need to write your own if you want one.
4) T: a type parameter (generic), so an RDD can hold elements of any type.
5) SparkContext: the constructor takes the SparkContext that the RDD belongs to, the entry point to Spark functionality.
6) @transient: marks fields that are excluded from serialization (the SparkContext is not shipped along with a serialized RDD).
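To make the abstract class concrete, here is a minimal sketch of a custom RDD subclass (the names RangeRDD and RangePartition and the numeric-range logic are illustrative, not part of Spark); only getPartitions and compute need to be supplied:
import org.apache.spark.{Partition, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD

// Illustrative partition type: remembers which slice of the range it owns.
class RangePartition(override val index: Int, val start: Int, val end: Int) extends Partition

// A minimal RDD with no parent RDDs, so the dependency list is Nil.
class RangeRDD(sc: SparkContext, from: Int, to: Int, numSlices: Int) extends RDD[Int](sc, Nil) {

  // Property 1: the list of partitions; partition.index must match its position in the array.
  override protected def getPartitions: Array[Partition] = {
    val step = math.max(1, (to - from) / numSlices)
    (0 until numSlices).map { i =>
      val start = from + i * step
      val end   = if (i == numSlices - 1) to else start + step
      new RangePartition(i, start, end): Partition
    }.toArray
  }

  // Property 2: compute one partition and return an iterator over its elements.
  override def compute(split: Partition, context: TaskContext): Iterator[Int] = {
    val p = split.asInstanceOf[RangePartition]
    (p.start until p.end).iterator
  }
}
With that in place, new RangeRDD(sc, 0, 100, 4).collect() behaves like any other RDD: Spark calls compute once per partition, in parallel.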
2. The five key properties of an RDD
1)A list of partitions
An RDD is made up of many partitions; when Spark runs a computation, each partition is processed by one task, so the number of partitions determines the number of tasks (see the sketch after this list).
2)A function for computing each split
Computing an RDD amounts to computing each of its splits (partitions).
3)A list of dependencies on other RDDs
RDDs have dependencies on one another, so their lineage can be traced back.
4)Optionally, a Partitioner for key-value RDDs (e.g. to say that the RDD is hash-partitioned)
If the RDD holds key-value data, you can pass in a custom Partitioner to repartition it, for example by the hash of the key.
5)Optionally, a list of preferred locations to compute each split on (e.g. block locations for an HDFS file)
Compute each split in the best possible location, i.e. data locality.
When computing a split, it is best to run the task on the machine where the split's data resides, which avoids moving the data; a split may have several replicas, so there can be more than one preferred location.
Wherever the data lives, the job should preferably be scheduled onto that machine to cut down on disk I/O and network transfer; that is how you really shorten job run time (the barrel principle: a job takes as long as its slowest task) and improve performance.
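As a quick sketch of how properties 1 and 5 look from user code (assuming a local SparkContext; the application name and the numbers are made up):
import org.apache.spark.{SparkConf, SparkContext}

object InspectRDD {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("inspect-rdd").setMaster("local[2]"))

    // Property 1: one task per partition, so 4 partitions means 4 tasks per stage.
    val nums = sc.parallelize(1 to 100, 4)
    println(nums.getNumPartitions)   // 4

    // Property 5: preferred locations per partition. Empty for an in-memory collection,
    // but an HDFS-backed RDD reports the block locations of each split.
    nums.partitions.foreach(p => println(nums.preferredLocations(p)))

    sc.stop()
  }
}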
3. How the five properties show up in the source code
/**
* :: DeveloperApi ::
* Implemented by subclasses to compute a given partition.
*/
@DeveloperApi
def compute(split: Partition, context: TaskContext): Iterator[T]
(Property 2) compute necessarily takes a Partition as input, because computing an RDD amounts to computing each of its partitions.
/**
* Implemented by subclasses to return the set of partitions in this RDD. This method will only
* be called once, so it is safe to implement a time-consuming computation in it.
*
* The partitions in this array must satisfy the following property:
* `rdd.partitions.zipWithIndex.forall { case (partition, index) => partition.index == index }`
*/
protected def getPartitions: Array[Partition]
(Property 1) getPartitions necessarily returns an array of Partition objects.
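The invariant quoted in the comment above (partition.index equals the partition's position in the array) can be checked directly on any RDD; a small sketch assuming an existing SparkContext sc:
val rdd = sc.parallelize(1 to 10, 3)
println(rdd.partitions.zipWithIndex.forall { case (p, i) => p.index == i })   // true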
/**
* Implemented by subclasses to return how this RDD depends on parent RDDs. This method will only
* be called once, so it is safe to implement a time-consuming computation in it.
*/
protected def getDependencies: Seq[Dependency[_]] = deps
(Property 3) RDDs have dependencies on one another, returned by getDependencies.
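Seen from the user side, every RDD exposes its parents through dependencies, and toDebugString prints the whole lineage (a sketch assuming an existing SparkContext sc, e.g. in spark-shell; the sample data is made up):
val words  = sc.parallelize(Seq("a", "b", "a", "c"))
val pairs  = words.map(w => (w, 1))      // narrow (one-to-one) dependency on words
val counts = pairs.reduceByKey(_ + _)    // shuffle (wide) dependency on pairs

println(pairs.dependencies)    // a OneToOneDependency
println(counts.dependencies)   // a ShuffleDependency
println(counts.toDebugString)  // prints the full lineage back to the parallelized collection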
/**
* Optionally overridden by subclasses to specify placement preferences.
*/
protected def getPreferredLocations(split: Partition): Seq[String] = Nil
(Property 5) getPreferredLocations returns the preferred locations for computing a given split, i.e. data locality.
/** Optionally overridden by subclasses to specify how they are partitioned. */
@transient val partitioner: Option[Partitioner] = None
(Property 4) partitioner is the optional Partitioner for key-value RDDs (see the sketch below).
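As a final sketch (assuming an existing SparkContext sc; the data and the partition count 4 are arbitrary), a key-value RDD only gets a partitioner once you repartition it by key:
import org.apache.spark.HashPartitioner

val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
println(pairs.partitioner)           // None: no partitioner yet

val hashed = pairs.partitionBy(new HashPartitioner(4))
println(hashed.partitioner)          // Some(HashPartitioner)
println(hashed.getNumPartitions)     // 4
Because hashed carries a partitioner, later key-based operations on it, such as reduceByKey with the same partitioner, can avoid another shuffle.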