Spark Programming Model Explained

Counting People by District

  1. Extract the fourth field; what follows is then a standard word-count job;
  2. Full code:
package io.github.sparktrain

import org.apache.spark.{SparkConf, SparkContext}

/**
  * Created by sunyonggang on 16-4-12.
  */
object PersonCount {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("PersonCountByDistrict").setMaster("local")
    val sc = new SparkContext(conf)

    // Key each record by its fourth field (the district), then count per district
    // with the classic word-count pattern: map to (key, 1), reduceByKey(_ + _).
    val raw = sc.textFile("/home/sunyonggang/sparkdata/users.txt")
    val rawMapReduce = raw.map(line => splitGet3(line)).map(a3 => (a3, 1)).reduceByKey(_ + _)
    rawMapReduce.saveAsTextFile("/home/sunyonggang/sparkdata/resultPerson")

    sc.stop()
  }

  // Split on commas and return index 3, i.e. the fourth field;
  // assumes every line has at least four comma-separated fields.
  def splitGet3(line: String): String = {
    val a = line.split(",")
    a(3)
  }
}

  3. Chinese characters can require special encoding handling, but I didn't run into any issues here; a sketch of one approach is below.
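
If the input were not UTF-8 (say, GBK-encoded Chinese), sc.textFile would garble it, since it always decodes bytes as UTF-8. A minimal sketch of one workaround, assuming a hypothetical GBK-encoded users_gbk.txt, is to drop down to hadoopFile and decode the raw bytes yourself:

import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapred.TextInputFormat

// Read raw Text records and decode them as GBK instead of the default UTF-8.
// users_gbk.txt is a hypothetical GBK-encoded input file.
val gbkLines = sc.hadoopFile[LongWritable, Text, TextInputFormat](
    "/home/sunyonggang/sparkdata/users_gbk.txt")
  .map { case (_, text) => new String(text.getBytes, 0, text.getLength, "GBK") }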

Group by the first three digits of the phone number (the third field), count the size of each group, and sort by the three-digit prefix

  1. Extract the third field, take a substring of its first three digits, then run a word count followed by sortByKey;
  2. Full code:
package io.github.sparktrain

import org.apache.spark.{SparkConf, SparkContext}

/**
  * Created by sunyonggang on 16-4-12.
  */
object PhoneGroup {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("PhoneGroupSorted").setMaster("local")
    val sc = new SparkContext(conf)
    // sc.addJar("/home/sunyonggang/IdeaProjects/SparkTest/target/scala-2.10/sparktest_2.10-1.0.jar")

    // Key each record by the first three digits of the phone number, count per
    // prefix, then sort the (prefix, count) pairs lexicographically by prefix.
    val raw = sc.textFile("/home/sunyonggang/sparkdata/users.txt")
    val rmr = raw.map(line => splitLineWithPhone(line)).map(a => (a, 1)).reduceByKey(_ + _).sortByKey()
    rmr.saveAsTextFile("/home/sunyonggang/sparkdata/resultPhone")

    sc.stop()
  }

  // Split on commas and take the first three characters of index 2
  // (the third field, the phone number).
  def splitLineWithPhone(line: String): String = {
    val a = line.split(",")
    a(2).substring(0, 3)
  }
}

  3. No special issues to handle; a variant that sorts by count instead of by prefix is sketched below.
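
Because every prefix here is exactly three digits, the lexicographic order from sortByKey coincides with numeric order. If you instead wanted the groups ordered by size, a minimal sketch (reusing raw and splitLineWithPhone from above) would swap sortByKey for sortBy on the count:

// Hypothetical variant: order the (prefix, count) pairs by count, descending,
// instead of lexicographically by prefix.
val byCount = raw.map(splitLineWithPhone)
  .map(prefix => (prefix, 1))
  .reduceByKey(_ + _)
  .sortBy(_._2, ascending = false)
byCount.saveAsTextFile("/home/sunyonggang/sparkdata/resultPhoneByCount")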

Merged Results

  1. Merge the results and print them out (one way to combine the two outputs is sketched after the listing):
(龙山县,12445)
(永顺县,12146)
(花垣县,10453)
(保靖县,7258)
(吉首市,22435)
(凤凰县,10548)
(泸溪县,7102)
(古丈县,3721)
(134,5364)
(135,8059)
(136,1902)
(137,12438)
(139,7744)
(147,2921)
(150,11100)
(151,9072)
(152,8147)
(157,712)
(158,7192)
(159,4850)
(182,1740)
(183,1)
(187,4735)
(188,131)
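
For reference, a minimal sketch of producing the combined listing above (assuming both jobs have already written their output directories) is to read the two results back and union them:

// Read back both result directories and print every (key, count) record.
val person = sc.textFile("/home/sunyonggang/sparkdata/resultPerson")
val phone = sc.textFile("/home/sunyonggang/sparkdata/resultPhone")
person.union(phone).collect().foreach(println)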

Submitting the Code and Runtime Environment

  1. Code as above
  2. Run output:
sunyonggang@gg01:~$ spark-submit --class io.github.sparktrain.PersonCount /home/sunyonggang/IdeaProjects/SparkTest/target/scala-2.10/sparktest_2.10-1.0.jar 
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/04/18 07:41:01 INFO SparkContext: Running Spark version 1.4.0
16/04/18 07:41:01 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/04/18 07:41:02 INFO SecurityManager: Changing view acls to: sunyonggang
16/04/18 07:41:02 INFO SecurityManager: Changing modify acls to: sunyonggang
16/04/18 07:41:02 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(sunyonggang); users with modify permissions: Set(sunyonggang)
16/04/18 07:41:02 INFO Slf4jLogger: Slf4jLogger started
16/04/18 07:41:02 INFO Remoting: Starting remoting
16/04/18 07:41:02 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.199.150:58325]
16/04/18 07:41:02 INFO Utils: Successfully started service 'sparkDriver' on port 58325.
16/04/18 07:41:03 INFO SparkEnv: Registering MapOutputTracker
16/04/18 07:41:03 INFO SparkEnv: Registering BlockManagerMaster
16/04/18 07:41:03 INFO DiskBlockManager: Created local directory at /tmp/spark-d67e031f-2814-44ff-9d61-5c1236b97014/blockmgr-230dc1be-f663-4df9-bed4-20b5687a0925
16/04/18 07:41:03 INFO MemoryStore: MemoryStore started with capacity 267.3 MB
16/04/18 07:41:03 INFO HttpFileServer: HTTP File server directory is /tmp/spark-d67e031f-2814-44ff-9d61-5c1236b97014/httpd-120951ce-1a2c-4aa0-b459-0fe29a319045
16/04/18 07:41:03 INFO HttpServer: Starting HTTP Server
16/04/18 07:41:03 INFO Utils: Successfully started service 'HTTP file server' on port 39301.
16/04/18 07:41:03 INFO SparkEnv: Registering OutputCommitCoordinator
16/04/18 07:41:04 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/04/18 07:41:04 INFO SparkUI: Started SparkUI at http://192.168.199.150:4040
16/04/18 07:41:04 INFO SparkContext: Added JAR file:/home/sunyonggang/IdeaProjects/SparkTest/target/scala-2.10/sparktest_2.10-1.0.jar at http://192.168.199.150:39301/jars/sparktest_2.10-1.0.jar with timestamp 1460936464170
16/04/18 07:41:04 INFO Executor: Starting executor ID driver on host localhost
16/04/18 07:41:05 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 43564.
16/04/18 07:41:05 INFO NettyBlockTransferService: Server created on 43564
16/04/18 07:41:05 INFO BlockManagerMaster: Trying to register BlockManager
16/04/18 07:41:05 INFO BlockManagerMasterEndpoint: Registering block manager localhost:43564 with 267.3 MB RAM, BlockManagerId(driver, localhost, 43564)
16/04/18 07:41:05 INFO BlockManagerMaster: Registered BlockManager
16/04/18 07:41:06 INFO MemoryStore: ensureFreeSpace(157248) called with curMem=0, maxMem=280248975
16/04/18 07:41:06 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 153.6 KB, free 267.1 MB)
16/04/18 07:41:06 INFO MemoryStore: ensureFreeSpace(14257) called with curMem=157248, maxMem=280248975
16/04/18 07:41:06 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 13.9 KB, free 267.1 MB)
16/04/18 07:41:06 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:43564 (size: 13.9 KB, free: 267.3 MB)
16/04/18 07:41:06 INFO SparkContext: Created broadcast 0 from textFile at PersonCount.scala:17
16/04/18 07:41:06 INFO FileInputFormat: Total input paths to process : 1
16/04/18 07:41:07 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
16/04/18 07:41:07 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
16/04/18 07:41:07 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
16/04/18 07:41:07 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
16/04/18 07:41:07 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
16/04/18 07:41:07 INFO SparkContext: Starting job: saveAsTextFile at PersonCount.scala:19
16/04/18 07:41:07 INFO DAGScheduler: Registering RDD 3 (map at PersonCount.scala:18)
16/04/18 07:41:07 INFO DAGScheduler: Got job 0 (saveAsTextFile at PersonCount.scala:19) with 1 output partitions (allowLocal=false)
16/04/18 07:41:07 INFO DAGScheduler: Final stage: ResultStage 1(saveAsTextFile at PersonCount.scala:19)
16/04/18 07:41:07 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
16/04/18 07:41:07 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
16/04/18 07:41:07 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at map at PersonCount.scala:18), which has no missing parents
16/04/18 07:41:07 INFO MemoryStore: ensureFreeSpace(3968) called with curMem=171505, maxMem=280248975
16/04/18 07:41:07 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.9 KB, free 267.1 MB)
16/04/18 07:41:07 INFO MemoryStore: ensureFreeSpace(2281) called with curMem=175473, maxMem=280248975
16/04/18 07:41:07 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.2 KB, free 267.1 MB)
16/04/18 07:41:07 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:43564 (size: 2.2 KB, free: 267.3 MB)
16/04/18 07:41:07 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:874
16/04/18 07:41:07 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at map at PersonCount.scala:18)
16/04/18 07:41:07 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
16/04/18 07:41:07 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 1470 bytes)
16/04/18 07:41:07 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
16/04/18 07:41:07 INFO Executor: Fetching http://192.168.199.150:39301/jars/sparktest_2.10-1.0.jar with timestamp 1460936464170
16/04/18 07:41:08 INFO Utils: Fetching http://192.168.199.150:39301/jars/sparktest_2.10-1.0.jar to /tmp/spark-d67e031f-2814-44ff-9d61-5c1236b97014/userFiles-d96c1324-1abf-46df-b152-5c1d2c4058cf/fetchFileTemp5786246817305865613.tmp
16/04/18 07:41:08 INFO Executor: Adding file:/tmp/spark-d67e031f-2814-44ff-9d61-5c1236b97014/userFiles-d96c1324-1abf-46df-b152-5c1d2c4058cf/sparktest_2.10-1.0.jar to class loader
16/04/18 07:41:08 INFO HadoopRDD: Input split: file:/home/sunyonggang/sparkdata/users.txt:0+5793569
16/04/18 07:41:09 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 2001 bytes result sent to driver
16/04/18 07:41:09 INFO DAGScheduler: ShuffleMapStage 0 (map at PersonCount.scala:18) finished in 1.467 s
16/04/18 07:41:09 INFO DAGScheduler: looking for newly runnable stages
16/04/18 07:41:09 INFO DAGScheduler: running: Set()
16/04/18 07:41:09 INFO DAGScheduler: waiting: Set(ResultStage 1)
16/04/18 07:41:09 INFO DAGScheduler: failed: Set()
16/04/18 07:41:09 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1400 ms on localhost (1/1)
16/04/18 07:41:09 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
16/04/18 07:41:09 INFO DAGScheduler: Missing parents for ResultStage 1: List()
16/04/18 07:41:09 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[5] at saveAsTextFile at PersonCount.scala:19), which is now runnable
16/04/18 07:41:09 INFO MemoryStore: ensureFreeSpace(127560) called with curMem=177754, maxMem=280248975
16/04/18 07:41:09 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 124.6 KB, free 267.0 MB)
16/04/18 07:41:09 INFO MemoryStore: ensureFreeSpace(42784) called with curMem=305314, maxMem=280248975
16/04/18 07:41:09 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 41.8 KB, free 266.9 MB)
16/04/18 07:41:09 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:43564 (size: 41.8 KB, free: 267.2 MB)
16/04/18 07:41:09 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:874
16/04/18 07:41:09 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[5] at saveAsTextFile at PersonCount.scala:19)
16/04/18 07:41:09 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
16/04/18 07:41:09 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, PROCESS_LOCAL, 1231 bytes)
16/04/18 07:41:09 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
16/04/18 07:41:09 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
16/04/18 07:41:09 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 4 ms
16/04/18 07:41:09 INFO FileOutputCommitter: Saved output of task 'attempt_201604180741_0001_m_000000_1' to file:/home/sunyonggang/sparkdata/resultPerson/_temporary/0/task_201604180741_0001_m_000000
16/04/18 07:41:09 INFO SparkHadoopMapRedUtil: attempt_201604180741_0001_m_000000_1: Committed
16/04/18 07:41:09 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 1828 bytes result sent to driver
16/04/18 07:41:10 INFO DAGScheduler: ResultStage 1 (saveAsTextFile at PersonCount.scala:19) finished in 0.373 s
16/04/18 07:41:10 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 403 ms on localhost (1/1)
16/04/18 07:41:10 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
16/04/18 07:41:10 INFO DAGScheduler: Job 0 finished: saveAsTextFile at PersonCount.scala:19, took 2.532049 s
16/04/18 07:41:10 INFO SparkContext: Invoking stop() from shutdown hook
16/04/18 07:41:10 INFO SparkUI: Stopped Spark web UI at http://192.168.199.150:4040
16/04/18 07:41:10 INFO DAGScheduler: Stopping DAGScheduler
16/04/18 07:41:10 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/04/18 07:41:10 INFO Utils: path = /tmp/spark-d67e031f-2814-44ff-9d61-5c1236b97014/blockmgr-230dc1be-f663-4df9-bed4-20b5687a0925, already present as root for deletion.
16/04/18 07:41:10 INFO MemoryStore: MemoryStore cleared
16/04/18 07:41:10 INFO BlockManager: BlockManager stopped
16/04/18 07:41:10 INFO BlockManagerMaster: BlockManagerMaster stopped
16/04/18 07:41:10 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/04/18 07:41:10 INFO SparkContext: Successfully stopped SparkContext
16/04/18 07:41:10 INFO Utils: Shutdown hook called
16/04/18 07:41:10 INFO Utils: Deleting directory /tmp/spark-d67e031f-2814-44ff-9d61-5c1236b97014

Problems

  1. The runs above were all single-node runs within the cluster;
  2. Running across the whole cluster fails.

After changing the master to setMaster("spark://gg01:7077"), the run produced:

 java.lang.ClassNotFoundException: io.github.sparktrain.PhoneGroup$$anonfun$2

I don't know yet how to solve this problem.
Update: the failure only occurs when reading the input from the local filesystem; if the file is put on HDFS and read from there, it works fine.
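
The missing class is one of the anonymous functions compiled into the application jar, so every executor needs that jar on its classpath. A hedged sketch of shipping it explicitly through SparkConf (spark-submit with the jar argument, as used above, should also register it):

// Ship the application jar to the executors explicitly. This is a sketch,
// not verified against this cluster; the path is the jar built above.
val conf = new SparkConf()
  .setAppName("PhoneGroupSorted")
  .setMaster("spark://gg01:7077")
  .setJars(Seq("/home/sunyonggang/IdeaProjects/SparkTest/target/scala-2.10/sparktest_2.10-1.0.jar"))

Separately, a local path like /home/sunyonggang/sparkdata/users.txt is opened by the executors, not the driver, so with a cluster master every worker would need the file at that same local path; that is consistent with the observation that moving the input to HDFS makes the problem disappear.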
