Spark English Word Analysis Example

1. Given the following file, testdata.txt:

At a high level

every Spark application consists of a driver program that runs the user’s main function and executes various parallel operations on a cluster

The main abstraction Spark provides is a resilient distributed dataset (RDD)

which is a collection of elements partitioned across the nodes of the cluster that can be operated on in parallel

RDDs are created by starting with a file in the Hadoop file system (or any other Hadoop-supported file system)

or an existing Scala collection in the driver program

and transforming it

Users may also ask Spark to persist an RDD in memory

allowing it to be reused efficiently across parallel operations. Finally

RDDs automatically recover from node failures

Complete the following tasks:

(1) Filter out the lines that contain "Spark" and count them

(2) Output the number of words in the line with the most words

(3) Count the number of lines containing "a" and the number of lines containing "b"

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

object work02 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("location").setMaster("local[*]")
    val sc = new SparkContext(conf)
    // Load the data
    val user1: RDD[String] = sc.textFile("E://aaa//testdata.txt", 1)
    // 1. Count the lines that contain "Spark"
    val result = user1.map((lines: String) => if (lines.contains("Spark"))
      ("Spark", 1) else (" ", 0)).filter(_.equals(("Spark", 1)))
    val result1 = result.reduceByKey(_ + _)
    result1.foreach(println)
    // 2. Number of words in the line with the most words
    val result2 = user1.map((lines: String) => {
      val str = lines.split(" ").toList
      (str.length, lines)
    })
    // Sort by word count in descending order and take the first entry
    val result3 = result2.sortByKey(ascending = false).take(1)
    result3.foreach(println)
    // 3. Count the lines containing "a" and the lines containing "b"
    val result4 = user1.map((lines: String) => if (lines.contains("a"))
      ("a", 1) else (" ", 0)).filter(_.equals(("a", 1)))
    val result5 = result4.reduceByKey(_ + _)
    result5.foreach(println)

    val result6 = user1.map((lines: String) => if (lines.contains("b"))
      ("b", 1) else (" ", 0)).filter(_.equals(("b", 1)))
    val result7 = result6.reduceByKey(_ + _)
    result7.foreach(println)

    sc.stop()
  }

}
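
For reference, the same three requirements can also be met with a few direct RDD actions (filter plus count, and max over the per-line word counts). The sketch below is a minimal alternative under the same assumptions as the solution above (same local input path); the object name Work02Concise and the appName "wordStats" are chosen only for illustration.

import org.apache.spark.{SparkConf, SparkContext}

object Work02Concise {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("wordStats").setMaster("local[*]"))
    // Same input file as in the solution above
    val lines = sc.textFile("E://aaa//testdata.txt")

    // (1) number of lines that contain "Spark"
    val sparkLines = lines.filter(_.contains("Spark")).count()
    // (2) word count of the line with the most words
    val maxWords = lines.map(_.split(" ").length).max()
    // (3) number of lines containing "a" and number containing "b"
    val aLines = lines.filter(_.contains("a")).count()
    val bLines = lines.filter(_.contains("b")).count()

    println(s"Spark lines: $sparkLines, max words: $maxWords, 'a' lines: $aLines, 'b' lines: $bLines")
    sc.stop()
  }
}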