I am new to Spark/Scala programming. I was able to set up the project using Maven and run the sample word count program.
I have two questions, for both running in a Spark environment and running locally on Windows:
1. How does the Scala program identify the input?
2. How do I write the output to a text file?
Here is my code:
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD.rddToPairRDDFunctions

object WordCount {
  def main(args: Array[String]) = {
    // Start the Spark context
    val conf = new SparkConf()
      .setAppName("WordCount")
      .setMaster("local")
    val sc = new SparkContext(conf)

    // Read some example file to a test RDD
    val textFile = sc.textFile("file:/home/root1/Avinash/data.txt")
    val counts = textFile.flatMap(line => line.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.foreach(println)
    counts.collect()
    counts.saveAsTextFile("file:/home/root1/Avinash/output")
  }
}
When I place the file at file:/home/root1/Avinash/data.txt and try to run it, it doesn't work. Only when I place data.txt in /home/root1/softs/spark-1.6.1/bin or inside the project folder in the workspace does it try to pick up the input.
Similarly, when I try to write the output using counts.saveAsTextFile("file:/home/root1/Avinash/output"), nothing is written; instead it throws this error:
Exception in thread "main" java.io.IOException: No FileSystem for scheme: D.
Please help me resolve this!
You are supposed to use file:/// for local files. Here is an example:
val textFile = sc.textFile("file:///home/root1/Avinash/data.txt")
val counts = textFile.flatMap(line => line.split(" "))
  .map(word => (word, 1))
  .reduceByKey(_ + _).cache()

counts.foreach(println)
//counts.collect()
counts.saveAsTextFile("file:///home/root1/Avinash/output")
Use cache() to avoid recomputing the RDD every time you run an action on it, which matters if the file is big.
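One more thing worth knowing: saveAsTextFile writes a directory containing part-00000 style files, not a single text file. If you want one file for a small result, a minimal sketch (the output_single path below is just an illustration, not a path from the original post):
// Funnel everything through one partition so a single part file is written.
// Only do this for small outputs; coalesce(1) removes parallelism for the write.
counts.coalesce(1)
  .saveAsTextFile("file:///home/root1/Avinash/output_single")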
Related
I have many text files in a local directory. I want a Spark program to read all the files and store them in a database. For the moment, trying to read the files using a text file stream is not working.
import org.apache.spark._
import org.apache.spark.streaming._
import org.apache.spark.streaming.dstream.DStream

/**
 * Main Program
 */
object SparkMain extends App {

  // Create a SparkContext to initialize Spark
  val sparkConf: SparkConf =
    new SparkConf()
      .setMaster("local")
      .setAppName("TestProgram")

  // Create a Spark streaming context with a 2-second batch interval
  val ssc: StreamingContext =
    new StreamingContext(sparkConf, Seconds(2))

  // Create text file stream
  val sourceDir: String = "D:\\tmpDir"
  val stream: DStream[String] = ssc.textFileStream(sourceDir)

  case class TextLine(line: String)

  val lineRdd: DStream[TextLine] = stream.map(TextLine)
  lineRdd.foreachRDD(rdd => {
    rdd.foreach(println)
  })

  // Start the computation
  ssc.start()

  // Wait for the computation to terminate
  ssc.awaitTermination()
}
Input:
//1.txt
Hello World
Nothing is printed when the stream runs. What is wrong with it?
textFileStream does not read files that are already present in the directory. Start the program and then create a new file or move a file in from another directory. The following program is a simple word count for a text file stream:
val sourceDir: String = "path to streaming directory"
val stream: DStream[String] = streamingContext.textFileStream(sourceDir)

case class TextLine(line: String)

val lineRdd: DStream[TextLine] = stream.map(TextLine)
lineRdd.foreachRDD(rdd => {
  val words = rdd.flatMap(textLine => textLine.line.split(" "))
  val pairs = words.map(word => (word, 1))
  val wordCounts = pairs.reduceByKey(_ + _)
  println("=====================")
  wordCounts.foreach(println)
  println("=====================" + rdd.count())
})
The output should be something like this:
+++++++++++++++++++++++
=====================0
+++++++++++++++++++++++
(are,1)
(you,1)
(how,1)
(hello,1)
(doing,1)
=====================5
+++++++++++++++++++++++
=====================0
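If you also need to process the files that were already sitting in the directory before the stream started, one common workaround (a sketch, not part of the original answer; the path is the question's directory written as a file URI) is to do a one-off batch read with the underlying SparkContext before calling start():
// Handle pre-existing files with a plain batch read, since textFileStream
// only picks up files created after the streaming context starts.
val sc = streamingContext.sparkContext
val existing = sc.textFile("file:///D:/tmpDir")
val existingCounts = existing.flatMap(_.split(" "))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
existingCounts.foreach(println)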
I hope this helps!
This question already has answers here:
Scala project won't compile in Eclipse; "Could not find the main class."
My Scala program:
import org.apache.spark._
import org.apache.spark.SparkContext._

object WordCount {
  def main(args: Array[String]) {
    val inputFile = args(0)
    val outputFile = args(1)
    val conf = new SparkConf().setAppName("wordCount")
    // Create a Scala Spark Context.
    val sc = new SparkContext(conf)
    // Load our input data.
    val input = sc.textFile(inputFile)
    // Split up into words.
    val words = input.flatMap(line => line.split(" "))
    // Transform into word and count.
    val counts = words.map(word => (word, 1)).reduceByKey { case (x, y) => x + y }
    // Save the word count back out to a text file, causing evaluation.
    counts.saveAsTextFile(outputFile)
  }
}
The error I am getting:
Error: Could not find or load main class WordCount
You have not set the master property.
val conf = new SparkConf().setAppName("wordCount").setMaster("local[*]")
I think it is probably something with the classpath not being set up correctly.
If you're using IntelliJ, marking the directory as a source root could do the trick.
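As a quick sanity check, also make sure the fully qualified name you launch matches the object declaration; a minimal sketch (the package name com.example is hypothetical):
package com.example

object WordCount {
  def main(args: Array[String]): Unit = {
    // If this prints, the class was found on the classpath; launch it as
    // "com.example.WordCount" (or just "WordCount" when there is no package clause).
    println("WordCount main found")
  }
}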
I'm trying to implement a simple WordCount in Scala + Spark. Here is my code:
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

object FirstObject {
  def main(args: Array[String]) {
    val input = "/Data/input"
    val conf = new SparkConf().setAppName("Simple Application")
      .setMaster("spark://192.168.1.162:7077")
    val sparkContext = new SparkContext(conf)
    val text = sparkContext.textFile(input).cache()
    val wordCounts = text.flatMap(line => line.split(" "))
      .map(word => (word, 1))
      .reduceByKey((a, b) => a + b)
      .sortByKey()
    wordCounts.saveAsTextFile("/Data/output")
  }
}
This job runs for 54 s and in the end does nothing; it does not write any output to /Data/output.
Also, if I replace saveAsTextFile with foreach(println), it produces the desired output.
You should check your user's rights on the /Data/output folder.
Your specific user needs write permission on that folder.
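A quick way to check this from Scala before running the job (a sketch; the path comes from the question, and note that with a standalone master the executors on the worker machines do the actual writing, so the same check has to hold there too):
import java.nio.file.{Files, Paths}

// The parent directory must exist and be writable by the user running the job;
// saveAsTextFile will also fail if /Data/output already exists.
val outputParent = Paths.get("/Data")
println(s"exists: ${Files.exists(outputParent)}, writable: ${Files.isWritable(outputParent)}")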
I have an implementation of WordCount that I submit to an Apache Spark cluster.
I was wondering, if tasks are launched on executors that have two cores, will they run concurrently on those two cores?
I've seen this question, but I'm not sure whether or not I can apply the answer to my case.
import org.apache.spark._
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext._

object WordCount {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("WordCount")
    val spark = new SparkContext(conf)
    val filename = if (args(0).length > 0) args(0) else "hdfs://x.x.x.x:60070/tortue/wordcount"
    val textFile = spark.textFile(filename)
    val counts = textFile.flatMap(line => line.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
    counts.saveAsTextFile("hdfs://x.x.x.x:60070/tortue/wcresults")
    spark.stop()
  }
}
It depends on how many cores Spark is configured to use on the executors; spark.executor.cores is the relevant parameter, and it is documented at http://spark.apache.org/docs/latest/configuration.html.
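For example, the setting can be passed when the SparkConf is built (a sketch; the value 2 simply mirrors the two-core executors mentioned in the question):
val conf = new SparkConf()
  .setAppName("WordCount")
  // Each executor may run up to this many tasks concurrently, one per core.
  .set("spark.executor.cores", "2")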
I get the following error
Exception in thread "main" java.io.IOException: Not a file: hdfs://quickstart.cloudera:8020/user/cloudera/linkage/out1
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:320)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:180)
when launching the following command
spark-submit --class spark00.DataAnalysis1 --master local sproject1.jar linkage linkage/out1
The last two arguments (linkage and linkage/out1) are HDFS directories; the first contains several CSV files, and the second doesn't exist yet, as I assume it will be created automatically.
The following code has been tested successfully in the REPL (Spark 1.1, Scala 2.10.4), except of course for the saveAsTextFile() part. I followed the step-by-step method explained in O'Reilly's "Advanced Analytics with Spark" book.
Since it worked in the REPL, I wanted to transpose it into a JAR file using Eclipse Juno, with the following code.
package spark00

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object DataAnalysis1 {
  case class MatchData(id1: Int, id2: Int, scores: Array[Double], matched: Boolean)

  def isHeader(line: String) = line.contains("id_1")

  def toDouble(s: String) = {
    if ("?".equals(s)) Double.NaN else s.toDouble
  }

  def parse(line: String) = {
    val pieces = line.split(",")
    val id1 = pieces(0).toInt
    val id2 = pieces(1).toInt
    val scores = pieces.slice(2, 11).map(toDouble)
    val matched = pieces(11).toBoolean
    MatchData(id1, id2, scores, matched)
  }

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local").setAppName("DataAnalysis1")
    val sc = new SparkContext(conf)
    // Load our input data.
    val rawblocks = sc.textFile(args(0))
    // CLEAN-UP
    // a. calling !isHeader(): suppress header
    val noheader = rawblocks.filter(!isHeader(_))
    // b. calling parse(): setting feature types and renaming headers
    val parsed = noheader.map(line => parse(line))
    // EXPORT CLEAN FILE
    parsed.coalesce(1, true).saveAsTextFile(args(1))
  }
}
As you can see, args(0) should be the "linkage" directory, and args(1) is the output HDFS directory linkage/out1 from my spark-submit command above.
I've also tried the last line without coalesce(1, true).
Here's the RDD type reported for parsed in the REPL:
parsed: org.apache.spark.rdd.RDD[(Int, Int, Array[Double], Boolean)] = MappedRDD[3] at map at <console>:34
Thank you in advance for your support.
Nov 20th: I'm adding this simple WordCount code, which works well when run with a spark-submit command in the same way as the code above. So my question becomes: why did saveAsTextFile() work for this code and not for the other one?
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

object SpWordCount {
  def main(args: Array[String]) {
    // Create a Scala Spark Context.
    val conf = new SparkConf().setMaster("local").setAppName("wordCount")
    val sc = new SparkContext(conf)
    // Load our input data.
    val input = sc.textFile(args(0))
    // Split it up into words.
    val words = input.flatMap(line => line.split(" "))
    // Transform into word and count.
    val counts = words.map(word => (word, 1)).reduceByKey { case (x, y) => x + y }
    // Save the word count back out to a text file, causing evaluation.
    counts.saveAsTextFile(args(1))
  }
}