pyspark: write to files after aggregation with reduceByKey

My code looks like this:
sc = SparkContext("local", "App Name")
eventRDD = sc.textFile("file:///home/cloudera/Desktop/python/event16.csv")
outRDDExt = eventRDD.filter(lambda s: "Topic" in s).map(lambda s: s.split('|'))
outRDDExt2 = outRDDExt.keyBy(lambda x: (x[1],x[2][:-19]))
outRDDExt3 = outRDDExt2.mapValues(lambda x: 1)
outRDDExt4 = outRDDExt3.reduceByKey(lambda x,y: x + y)
outRDDExt4.saveAsTextFile("file:///home/cloudera/Desktop/python/outDir1")
The current output file looks like this:
((u'Topic', u'2017/05/08'), 15)
What I want in my file is this:
u'Topic', u'2017/05/08', 15
How do I get the above output (i.e., get rid of the tuples etc. from my current output)?

You can manually expand the tuple and join all the elements into a string:
outRDDExt4\
    .map(lambda row: ",".join([row[0][0], row[0][1], str(row[1])]))\
    .saveAsTextFile("file:///home/cloudera/Desktop/python/outDir1")
This writes lines like Topic,2017/05/08,15 instead of the tuple representation.

Related

How to set type of dataset when applying transformations and how to implement transformations without using spark.sql.functions._?

I am a beginner in Scala and have been working on the following problem:
Example dataset, named given_dataset, with player number and points scored:
|player_no|points|
1 25.0
1 20.0
1 21.0
2 15.0
2 18.0
3 24.0
3 25.0
3 29.0
Problem 1:
I have a dataset and need to calculate the total points scored, the average points per game, and the number of games played. I am unable to explicitly set the data type to "double", "int", or "float" when I apply the transformations. (Perhaps this is because they are untyped transformations?) Would anyone be able to help with this and correct my error?
No data type specified (but code is able to run)
val total_points_dataset = given_dataset.groupBy($"player_no").sum("points").orderBy("player_no")
val games_played_dataset = given_dataset.groupBy($"player_no").count().orderBy("player_no")
val avg_points_dataset = given_dataset.groupBy($"player_no").avg("points").orderBy("player_no")
Please note that I would like to retain the player number as I plan to merge total_points_dataset, games_played_dataset, and avg_points_dataset together.
Data type specified, but code crashes!
val total_points_dataset = given_dataset.groupBy($"player_no").sum("points").as[Double].orderBy("player_no")
val games_played_dataset = given_dataset.groupBy($"player_no").count().as[Int].orderBy("player_no")
val avg_points_dataset = given_dataset.groupBy($"player_no").avg("points").as[Double].orderBy("player_no")
Problem 2:
I would like to implement the above without using the spark.sql.functions library, e.g. through functions such as map, groupByKey, etc. If possible, could anyone provide an example and point me in the right direction?
If you don't want to use import org.apache.spark.sql.types.{FloatType, IntegerType, StructType}, then you have to cast either at the time of reading or with as[(Int, Double)] on the dataset. Below is an example that does it while reading your dataset from a CSV file:
/** A function that splits a line of input into (player_no, points) tuples. */
def parseLine(line: String): (Int, Float) = {
  // Split by commas
  val fields = line.split(",")
  // Extract the player_no and points fields, and convert to integer & float
  val player_no = fields(0).toInt
  val points = fields(1).toFloat
  // Create a tuple that is our result
  (player_no, points)
}
And then read as below:
val sc = new SparkContext("local[*]", "StackOverflow75354293")
val lines = sc.textFile("data/stackoverflowdata-noheader.csv")
val dataset = lines.map(parseLine)
val total_points_dataset2 = dataset.reduceByKey((x, y) => x + y)
val total_points_dataset2_sorted = total_points_dataset2.sortByKey(ascending = true)
total_points_dataset2_sorted.foreach(println)
val games_played_dataset2 = dataset.countByKey().toList.sorted
games_played_dataset2.foreach(println)
val avg_points_dataset2 =
  dataset
    .mapValues(x => (x, 1))
    .reduceByKey((x, y) => (x._1 + y._1, x._2 + y._2))
    .mapValues(x => x._1 / x._2)
    .sortByKey(ascending = true)
avg_points_dataset2.collect().foreach(println)
I tried running it locally both ways and both work fine; we can check the output below:
(3,78.0)
(1,66.0)
(2,33.0)
(1,3)
(2,2)
(3,3)
(1,22.0)
(2,16.5)
(3,26.0)
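If you prefer to stay on the typed Dataset API instead of dropping to RDDs, groupByKey and mapGroups can compute all three aggregates in one pass. This is only a sketch, assuming given_dataset has exactly the two columns player_no (Int) and points (Double) and that the SparkSession is in scope as spark:
import spark.implicits._

val statsDataset = given_dataset
  .as[(Int, Double)]
  .groupByKey { case (playerNo, _) => playerNo }
  .mapGroups { (playerNo, rows) =>
    // rows is an Iterator of (player_no, points) pairs for one player
    val points = rows.map(_._2).toList
    val total = points.sum
    (playerNo, total, points.size, total / points.size)
  }

// Each output row is (player_no, total_points, games_played, avg_points)
statsDataset.show()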
Regarding "Problem 1" try
val total_points_dataset = given_dataset.groupBy($"player_no").sum("points").as[(Int, Double)].orderBy("player_no")
val games_played_dataset = given_dataset.groupBy($"player_no").count().as[(Int, Long)].orderBy("player_no")
val avg_points_dataset = given_dataset.groupBy($"player_no").avg("points").as[(Int, Double)].orderBy("player_no")
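Since you plan to merge total_points_dataset, games_played_dataset, and avg_points_dataset, one way is to rename the aggregate columns and join on player_no. This is just a sketch, assuming the default aggregate column names produced above:
val merged = total_points_dataset.toDF("player_no", "total_points")
  .join(games_played_dataset.toDF("player_no", "games_played"), "player_no")
  .join(avg_points_dataset.toDF("player_no", "avg_points"), "player_no")
  .orderBy("player_no")

merged.show()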

How to use Scala format and substitution interpolation together?

I am new to Scala and Spark and have a requirement where I want to use both formatting and substitution in a single println statement.
Here is the code:
val results = minTempRdd.collect()
for (result <- results.sorted) {
  val station = result._1
  val temp = result._2
  println(f" StId $station Temp $temp%.2f F")
}
where results holds the collected contents of an RDD with structure (stationId, temperature).
Now I want to convert this code into a one-liner. I tried the following code:
val results = minTempRdd.collect()
results.foreach(x => println(" stId "+x._1+" temp = "+x._2))
It works fine, but I am not able to format the second value of the tuple here.
Any suggestions on how to achieve this?
The first way is to use curly braces inside the interpolation, which allow passing arbitrary expressions instead of just variable names:
println(f" StId ${result._1} Temp ${result._2}%.2fF")
The second way is to unpack the tuple:
for ((station, temp) <- results.sorted)
println(f" StId $station Temp $temp%.2fF")
Or:
results.sorted.foreach { case (station, temp) =>
  println(f" StId $station Temp $temp%.2fF")
}

replace multiple occurrence of duplicate string in Scala with empty

I have a string as
something,'' something,nothing_something,op nothing_something,'' cat,cat
I want to achieve my output as
'' something,op nothing_something,cat
Is there any way to achieve it?
If I understand your requirement correctly, here's one approach with the following steps:
1. Split the input string by "," and create a list of indexed CSVs, then convert it to a Map
2. Generate 2-combinations of the indexed CSVs
3. Check each indexed-CSV pair and capture the index of any CSV that is contained within the other CSV
4. Since the CSVs corresponding to the captured indexes are contained within some other CSV, removing these indexes leaves the indexes of the CSVs we want to keep
5. Use the remaining indexes to look up CSVs from the CSV Map and concatenate them back into a string
Here is sample code applying to a string with slightly more general comma-separated values:
val str = "cats,a cat,cat,there is a cat,my cat,cats,cat"
val csvIdxList = (Stream from 1).zip(str.split(",")).toList
val csvMap = csvIdxList.toMap
val csvPairs = csvIdxList.combinations(2).toList
val csvContainedIdx = csvPairs.collect {
  case List(x, y) if x._2.contains(y._2) => y._1
  case List(x, y) if y._2.contains(x._2) => x._1
}.distinct
// csvContainedIdx: List[Int] = List(3, 6, 7, 2)
val csvToKeepIdx = (1 to csvIdxList.size) diff csvContainedIdx
// csvToKeepIdx: scala.collection.immutable.IndexedSeq[Int] = Vector(1, 4, 5)
val strDeduped = csvToKeepIdx.map( csvMap.getOrElse(_, "") ).mkString(",")
// strDeduped: String = cats,there is a cat,my cat
Applying the above to your sample string something,'' something,nothing_something,op nothing_something would yield the expected result:
strDeduped: String = '' something,op nothing_something
First create an Array by splitting the given String on commas, then use filter and mkString as below:
s.split(",").filter(_.contains(' ')).mkString(",")
In Scala REPL:
scala> val s = "something,'' something,nothing_something,op nothing_something"
s: String = something,'' something,nothing_something,op nothing_something
scala> s.split(",").filter(_.contains(' ')).mkString(",")
res27: String = '' something,op nothing_something
As per Leo C's comment, I tested it as below with another String:
scala> val s = "something,'' something anything anything anything anything,nothing_something,op op op nothing_something"
s: String = something,'' something anything anything anything anything,nothing_something,op op op nothing_something
scala> s.split(",").filter(_.contains(' ')).mkString(",")
res43: String = '' something anything anything anything anything,op op op nothing_something

appending data to hbase table from a spark rdd using scala

I am trying to add data to an HBase table. I have done the following so far:
def convert(a: Int, s: String): Tuple2[ImmutableBytesWritable, Put] = {
  val p = new Put(a.toString.getBytes())
  p.add(Bytes.toBytes("ColumnFamily"), Bytes.toBytes("col_2"), s.toString.getBytes()) // a.toString.getBytes())
  println("the value of a is: " + a)
  new Tuple2[ImmutableBytesWritable, Put](new ImmutableBytesWritable(Bytes.toBytes(a)), p)
}
new PairRDDFunctions(newrddtohbaseLambda.map(x => convert(x, randomstring))).saveAsHadoopDataset(jobConfig)
newrddtohbaseLambda is this:
val x = 12
val y = 15
val z = 25
val newarray = Array(x,y,z)
val newrddtohbaseLambda = sc.parallelize(newarray)
"randomstring" is this
val randomstring = "abc, xyz, dfg"
Now, what this does is add abc,xyz,dfg to rows 12, 15 and 25 after deleting the values already present in those rows. I want the existing value to be kept and abc,xyz,dfg appended to it instead of replacing it. How can I get this done? Any help would be appreciated.
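A Put on an existing row and column simply writes a new value for that cell, so reads return the new value in place of the old one. If the goal is to append to the existing cell value rather than replace it, one option is HBase's Append mutation issued through the client API. The following is only a sketch under assumptions not stated in the question (a table named "my_table", hbase-site.xml on the classpath, and an HBase 2.x client):
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{Append, ConnectionFactory}
import org.apache.hadoop.hbase.util.Bytes

// Append randomstring to whatever is already stored in ColumnFamily:col_2,
// instead of overwriting it with a Put.
newrddtohbaseLambda.foreachPartition { partition =>
  val connection = ConnectionFactory.createConnection(HBaseConfiguration.create())
  val table = connection.getTable(TableName.valueOf("my_table")) // hypothetical table name
  partition.foreach { a =>
    val append = new Append(a.toString.getBytes())
    // addColumn is the HBase 2.x method; 1.x clients use add(family, qualifier, value)
    append.addColumn(Bytes.toBytes("ColumnFamily"), Bytes.toBytes("col_2"), Bytes.toBytes(randomstring))
    table.append(append)
  }
  table.close()
  connection.close()
}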

How to convert values from a file to a Map in spark-scala?

I have my values in a file as comma-separated. Now, I want this data to be converted into key-value pairs (a Map). I know that we can split the values and store them in an Array like below.
val prop_file = sc.textFile("/prop_file.txt")
prop_file.map(_.split(",").map(s => Array(s)))
Is there any way to store the data as a Map in spark-scala?
Assuming that each line of your file contains a key and a value separated by a comma:
A,1
B,2
C,3
Something like this can be done:
val file = sc.textFile("/prop_file.txt")
val words = file.flatMap(x => createDataMap(x))
And here is your function - createDataMap
def createDataMap(data: String): Map[String, String] = {
  // Split the line on the comma and build a single key/value pair
  val array = data.split(",")
  Map(array(0) -> array(1))
}
Next, for retrieving the keys/values from the RDD, you can use the following operations:
// This will print all elements of the RDD
words.foreach(f => println(f))
// Or you can filter the elements too
words.filter(f => f._1.equals("A"))
Sumit, I have used the below code to retrieve the value for a particular key and it worked.
val words = file.flatMap(x => createDataMap(x)).collectAsMap
val valueofA = words("A")
print(valueofA)
This gives me 1 as a result
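Putting the pieces together, here is a compact end-to-end sketch of the same idea, assuming prop_file.txt holds one key,value pair per line as in the sample above:
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("PropsToMap").setMaster("local[*]"))

// Read the file, split each line on the comma, and collect the pairs into a Map on the driver
val props: scala.collection.Map[String, String] = sc.textFile("/prop_file.txt")
  .map(_.split(","))
  .map(a => a(0) -> a(1))
  .collectAsMap()

println(props.getOrElse("A", "not found")) // prints 1 for the sample data above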