Add columns in RDD - scala

I am trying to add multiple columns (Int values) to find the highest and lowest selling genre based on global sales.
Format of the table:
Name, Platform, Year, Genre, Publisher, NA_Sales, EU_Sales, JP_Sales, Other_Sales
Example data set:
(Formula) Global Sales = NA_Sales + EU_Sales + JP_Sales
Example output:
Highest selling Genre: Shooter Global Sale (in millions): 27.57
Lowest selling Genre: Strategy Global Sale (in millions): 0.23
val vgdataLines = sc.textFile("hdfs:///user/ashhall1616/bdc_data/t1/vgsales-small.csv")
val vgdata = vgdataLines.map(_.split(";"))
val GlobalSales = vgdata.map(r => (r(3), r(5) + r(6) + r(7))).reduceByKey(_+_)
What I am trying to do here is use reduceByKey to sum NA_Sales + EU_Sales + JP_Sales into one value per Genre. I created GlobalSales with Genre and total sales, but r(5) + r(6) + r(7) concatenates the values as strings:
Array[String] = Array(6.855.091.87, 9.034.280.13, 5.895.043.12, 9.673.730.11, 4.42.773.96, 0.180.140, 000.37, 0.20.070, 0.140.320.22, 0.140.110, 0.090.010.15
, 0.020.020.22, 0.140.110, 0.10.130, 0.140.110, 0.110.030, 0.130.020, 0.090.030, 0.060.040, 0.1200)

Using the data from this Stack Overflow question here (I believe both questions use the same dataset):
After splitting the data on ;, you get an Array[String], so adding the elements while building the tuple concatenates the numbers as strings. You can convert these strings to Double while creating the tuple.
Code
val data =
"""Gran Turismo 3: A-Spec;PS2;2001;Racing;Sony Computer Entertainment;6.85;5.09;1.87;1.16
|Call of Duty: Modern Warfare 3;X360;2011;Shooter;Activision;9.03;4.28;0.13;1.32
|Pokemon Yellow: Special Pikachu Edition;GB;1998;Role-Playing;Nintendo;5.89;5.04;3.12;0.59
|Call of Duty: Black Ops;X360;2010;Shooter;Activision;9.67;3.73;0.11;1.13
|Pokemon HeartGold/Pokemon SoulSilver;DS;2009;Action;Nintendo;4.4;2.77;3.96;0.77
|High Heat Major League Baseball 2003;PS2;2002;Sports;3DO;0.18;0.14;0;0.05
|Panzer Dragoon;SAT;1995;Shooter;Sega;0;0;0.37;0
|Corvette;GBA;2003;Racing;TDK Mediactive;0.2;0.07;0;0.01""".stripMargin
val vgdataLines = spark.sparkContext.makeRDD(data.split("\n").toSeq)
val vgdata = vgdataLines.map(_.split(";"))
val GlobalSales = vgdata.map(r => (r(3), r(5).toDouble + r(6).toDouble + r(7).toDouble)).reduceByKey(_+_)
GlobalSales.foreach(println)
Output-
(Shooter,27.32)
(Role-Playing,14.05)
(Sports,0.32)
(Action,11.129999999999999)
(Racing,14.079999999999998)
Update-1, as requested in the comments:
println("### min-max ###")
val minSale = GlobalSales.min()(Ordering.by(_._2))
val maxSale = GlobalSales.max()(Ordering.by(_._2))
println(s"Highest selling Genre: '${maxSale._1}' Global Sale (in millions): '${maxSale._2}'.")
println(s"Lowest selling Genre: '${minSale._1}' Global Sale (in millions): '${minSale._2}'.")
Output-
### min-max ###
Highest selling Genre: 'Shooter' Global Sale (in millions): '27.32'.
Lowest selling Genre: 'Sports' Global Sale (in millions): '0.32'.
Some explanation:
GlobalSales is an RDD[(String, Double)]. When you call max or min on tuples, the default ordering compares the elements in sequence, i.e. the first value and then the second. In your use case you want the max of the second element of the tuple (the global sales, in millions), so we override the default tuple ordering with Ordering.by(_._2).
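As a side note, here is a minimal sketch (my addition, not part of the original answer) that gets the same two extremes with takeOrdered and top; it also generalises to the top-N genres:
// Sketch only: reuses the GlobalSales RDD from above
val lowestSale = GlobalSales.takeOrdered(1)(Ordering.by(_._2)).head // smallest total
val highestSale = GlobalSales.top(1)(Ordering.by(_._2)).head        // largest total
println(s"Highest selling Genre: ${highestSale._1} Global Sale (in millions): ${highestSale._2}")
println(s"Lowest selling Genre: ${lowestSale._1} Global Sale (in millions): ${lowestSale._2}")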

Related

Spark Exponential Moving Average

I have a dataframe of timeseries pricing data, with an ID, Date and Price.
I need to compute the Exponential Moving Average for the Price Column, and add it as a new column to the dataframe.
I have been using Spark's window functions before, and it looked like a fit for this use case, but given the formula for the EMA:
EMA: {Price - EMA(previous day)} x multiplier + EMA(previous day)
where
multiplier = (2 / (Time periods + 1)) //let's assume Time period is 10 days for now
I got a bit confused as to how I can access the previously computed value in the column while actually windowing over the column.
With a simple moving average, it's simple, since all you need to do is compute a new column while averaging the elements in the window:
val window = Window.partitionBy("ID").orderBy("Date").rowsBetween(-windowSize, Window.currentRow)
dataFrame.withColumn("SMA", avg(col("Price")).over(window))
But it seems that with EMA it's a bit more complicated, since at every step I need the previously computed value.
I have also looked at Weighted moving average in Pyspark but I need an approach for Spark/Scala, and for a 10 or 30 days EMA.
Any ideas?
In the end, I've analysed how the exponential moving average is implemented in pandas dataframes. Besides the recursive formula which I described above, and which is difficult to implement in any SQL or window function (because it's recursive), there is another one, which is detailed on their issue tracker:
y[t] = (x[t] + (1-a)*x[t-1] + (1-a)^2*x[t-2] + ... + (1-a)^n*x[t-n]) /
((1-a)^0 + (1-a)^1 + (1-a)^2 + ... + (1-a)^n).
Given this, and with additional Spark implementation help from here, I ended up with the following implementation, which is roughly equivalent to doing pandas_dataframe.ewm(span=window_size).mean().
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, collect_list, lit, row_number, udf}
import scala.math.pow

def exponentialMovingAverage(dataFrame: DataFrame, partitionColumn: String, orderColumn: String, column: String, windowSize: Int): DataFrame = {
  val window = Window.partitionBy(partitionColumn)
  val exponentialMovingAveragePrefix = "_EMA_"
  // Weighted-sum form of the EMA over the rows seen so far in the partition
  val emaUDF = udf((rowNumber: Int, columnPartitionValues: Seq[Double]) => {
    val alpha = 2.0 / (windowSize + 1)
    val adjustedWeights = (0 until rowNumber + 1).foldLeft(new Array[Double](rowNumber + 1)) { (accumulator, index) =>
      accumulator(index) = pow(1 - alpha, rowNumber - index); accumulator
    }
    (adjustedWeights, columnPartitionValues.slice(0, rowNumber + 1)).zipped.map(_ * _).sum / adjustedWeights.sum
  })
  dataFrame.withColumn("row_nr", row_number().over(window.orderBy(orderColumn)) - lit(1))
    .withColumn(s"$column$exponentialMovingAveragePrefix$windowSize", emaUDF(col("row_nr"), collect_list(column).over(window)))
    .drop("row_nr")
}
(I am presuming the type of the column for which I need to compute the exponential moving average is Double.)
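A hypothetical usage sketch (the prices DataFrame name and the 10-day span are assumptions, not from the original answer), assuming prices has the ID, Date and Price columns described in the question:
// prices is an assumed DataFrame with columns ID, Date, Price
val withEma = exponentialMovingAverage(prices, "ID", "Date", "Price", windowSize = 10)
withEma.select("ID", "Date", "Price", "Price_EMA_10").show() // output column name built by the function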
I hope this helps others.

spark-graphx finding the most active user?

I have a graph of this form:
  _ 3 _
 /'   '\
(1)    (1)
 /       \
1--(2)--->2
I want to find the most active user (the one who follows the most; here it's user 1, who follows user 2 twice and user 3 once).
My graph is of this form Graph[Int,Int]
val edges = Array(Edge(1,10,1), Edge(10,1,1), Edge(11,1,1), Edge(1,11,1), Edge(1,12,1))
val vertices = Array((12L,12), (10L,10), (11L,11), (1L,1))
val graph = Graph(sc.parallelize(vertices),sc.parallelize(edges),0)
My idea is to group the edges by srcId, count using the iterator, and then sort, but I have trouble using the iterator; the types are quite complex:
graph.edges.groupBy(_.dstId).collect() has type:
Array[(org.apache.spark.graphx.VertexId,Iterable[org.apache.spark.graphx.Edge[Int]])]
Any ideas ?
Your idea of grouping by srcId is good, since you are looking for the relation follows and not is followed by (your example uses dstId by the way)
val group = graph.edges.groupBy(_.srcId)
group now contains the edges going out of each vertex. We can take the sum of the attributes to get the total number of times the user follows any user.
val followCount = group.map {
  case (vertex, edges) => (vertex, edges.map(_.attr).sum)
}.collect
Which produces
Array((10,1), (11,1), (1,3))
Now if you want to extract the user who follows the most, you can simply sort in descending order and take the head of the list, which gives the most active user.
val mostActiveUser = followCount.sortBy(- _._2).head
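An alternative sketch (my addition, reusing the graph value defined above): keep the counting on the vertices with aggregateMessages instead of grouping the edge RDD, and only bring back the maximum:
// Sum the outgoing edge attributes per source vertex
val followCounts = graph.aggregateMessages[Int](
  ctx => ctx.sendToSrc(ctx.attr), // each edge contributes its weight to its source vertex
  _ + _                           // merge contributions per vertex
)
val mostActive = followCounts.max()(Ordering.by(_._2)) // (VertexId, total follows)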

How can I split the data based on time in netcdf through SciSpark?

val data: RDD[SciTensor] = ... // loaded elsewhere
data.map(y => {
  val time = y("time")
})
How do we get the units of time and the precip long name in SciSpark?
Below are the ncdump results:
time:units = "hours since 1800-01-01 00:00:0.0".
float precip(time, lat, lon) ;
precip:long_name = "Average Monthly Rate of Precipitation"
Thank you for asking the question.
SciSpark is working towards preserving units.
To answer your original question about splitting by time: looking at your variable, you can definitely split by time if the time dimension is greater than 1. Otherwise, you don't really need to.
The precip variable array has dimensions time, lat, and lon.
If you want to split by each time epoch, you can access the sub-array at each time index like so:
val array = y("time")()
val time1 = array(0)
val time2 = array(1)
val time3 = array(2)
etc.
If you want to extract the sub-arrays in each time dimension and have the RDD be a collection of these sub-arrays, you can do that like so:
data.map(y => {
  val timeArray = y("time")()
  val timeLength = timeArray.shape(0)
  (0 until timeLength).map(i => timeArray(i))
})
This will give you an RDD of type RDD[Iterable[AbstractTensor]]
The Iterable[AbstractTensor] corresponds to the original array which you have split by time.
You can go further and call a flatMap to get an RDD of type RDD[AbstractTensor] like so:
data.flatMap(y => {
  val timeArray = y("time")()
  val timeLength = timeArray.shape(0)
  (0 until timeLength).map(i => timeArray(i))
})
Make sure you are using the latest version of SciSpark.
Some of the indexing functionality is recently introduced.

partial Distance Based RDA - Centroids vanished from Plot

I am trying to fit a partial db-RDA with field.ID as Condition to correct for the repeated-measurements character of the samples. However, including Condition(field.ID) leads to the disappearance of the centroids of the main factor of interest from the plot (left plot below).
The Design: 12 fields have been sampled for species data in two consecutive years, repeatedly. Additionally every year 3 samples from reference fields have been sampled. These three fields have been changed in the second year, due to unavailability of the former fields.
Additionally some environmental variables have been sampled (Nitrogen, Soil moisture, Temperature). Every field has an identifier (field.ID).
Using field.ID as Condition seems to erroneously remove the F1 factor. However, using sampling campaign (SC) as Condition does not. Is the latter the right way to correct for repeated measurements in a partial db-RDA?
library(vegan)
set.seed(1234)
df.exp <- data.frame(field.ID = factor(c(1:12, 13, 14, 15, 1:12, 16, 17, 18)),
                     SC = factor(rep(c(1, 2), each = 15)),
                     F1 = factor(rep(rep(c("A", "B", "C", "D", "E"), each = 3), 2)),
                     Nitrogen = rnorm(30, mean = 0.16, sd = 0.07),
                     Temp = rnorm(30, mean = 13.5, sd = 3.9),
                     Moist = rnorm(30, mean = 19.4, sd = 5.8))
df.rsp <- data.frame(Spec1 = rpois(30, 5),
                     Spec2 = rpois(30, 1),
                     Spec3 = rpois(30, 4.5),
                     Spec4 = rpois(30, 3),
                     Spec5 = rpois(30, 7),
                     Spec6 = rpois(30, 7),
                     Spec7 = rpois(30, 5))
data <- cbind(df.exp, df.rsp)
dbRDA <- capscale(df.rsp ~ F1 + Nitrogen + Temp + Moist + Condition(SC), df.exp); ordiplot(dbRDA)
dbRDA <- capscale(df.rsp ~ F1 + Nitrogen + Temp + Moist + Condition(field.ID), df.exp); ordiplot(dbRDA)
You partial out the variation due to ID and then you try to explain a variable aliased to this ID, but it was already partialled out. The key line in the printed output was this:
Some constraints were aliased because they were collinear (redundant)
And indeed, when you ask for details, you get
> alias(dbRDA, names=TRUE)
[1] "F1B" "F1C" "F1D" "F1E"
These F1 dummy variables were constant within ID, which was already partialled out, so nothing was left to explain.

How to groupBy groupBy?

I need to map through a List[(A,B,C)] to produce an HTML report. Specifically, a
List[(Schedule, GameResult, Team)]
Schedule contains a gameDate property that I need to group on to get a
Map[JodaTime, List[(Schedule, GameResult, Team)]]
which I use to display gameDate table row headers. Easy enough:
val data = repo.games.findAllByDate(fooDate).groupBy(_._1.gameDate)
Now the tricky bit (for me) is, how to further refine the grouping in order to enable mapping through the game results as pairs? To clarify, each GameResult consists of a team's "version" of the game (i.e. score, location, etc.), sharing a common Schedule gameID with the opponent team.
Basically, I need to display a game result outcome on one row as:
3 London Dragons vs. Paris Frogs 2
Grouping on gameDate lets me do something like:
data.map{ case (date, games) =>
  // game date row headers
  <tr><td>{date.toString("MMMM dd, yyyy")}</td></tr>
  // print out game result data rows
  games.map{ case (schedule, result, team) =>
    ...
    // BUT (result, team) slice is ungrouped, need grouped by Schedule gameID
  }
}
In the old version of the existing application (PHP) I used to
for($x = 0; $x < $this->gameCnt; $x = $x + 2) {...}
but I'd prefer to refer to variable names and not the come-back-later-wtf-is-that-inducing:
games._._2(rowCnt).total games._._3(rowCnt).name games._._1(rowCnt).location games._._2(rowCnt+1).total games._._3(rowCnt+1).name
Maybe zip, or doubling up with for (t1 <- data; t2 <- data) yield (?), or something else entirely will do the trick. Regardless, there's a concise solution, it's just not coming to me right now...
Maybe I'm misunderstanding your requirements, but it seems to me that all you need is an additional groupBy:
repo.games.findAllByDate(fooDate).groupBy(_._1.gameDate).mapValues(_.groupBy(_._1.gameID))
The result will be of type:
Map[JodaTime, Map[GameId, List[(Schedule,GameResult,Team)]]]
(where GameId is the return type of Schedule.gameId)
Update: if you want the results as pairs, then pattern matching is your friend, as shown by Arjan. This would give us:
val byDate = repo.games.findAllByDate(fooDate).groupBy(_._1.gameDate)
val data = byDate.mapValues(_.groupBy(_._1.gameID).mapValues{ case List((sa, ra, ta), (sb, rb, tb)) => (sa, (ta, ra), (tb, rb)) })
This time the result is of type:
Map[JodaTime, Iterable[ (Schedule,(Team,GameResult),(Team,GameResult))]]
Note that this will throw a MatchError if there are not exactly 2 entries with the same gameId. In real code you will definitely want to check for this case.
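For example, a defensive sketch (my variation on the code above, reusing byDate) that simply drops gameIds that do not have exactly two entries:
val safeData = byDate.mapValues(_.groupBy(_._1.gameID).collect {
  // keep only games that really have two team-side entries
  case (id, List((sa, ra, ta), (sb, rb, tb))) => id -> (sa, (ta, ra), (tb, rb))
})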
OK, a solution from Régis Jean-Gilles:
val data = repo.games.findAllByDate(fooDate).groupBy(_._1.gameDate).mapValues(_.groupBy(_._1.gameID))
You said it was not correct; maybe you just didn't use it the right way?
Every List in the result is a pair of games with the same GameId.
You could produce HTML like this:
data.map{ case (date, games) =>
  // game date row headers
  <tr><td>{date.toString("MMMM dd, yyyy")}</td></tr>
  // print out game result data rows
  games.map{ case (gameId, List((scheduleA, resultA, teamA), (scheduleB, resultB, teamB))) =>
    ...
  }
}
And since you don't need the gameId, you can return just the paired games:
val data = repo.games.findAllByDate(fooDate).groupBy(_._1.gameDate).mapValues(_.groupBy(_._1.gameID).values)
The type of the result is now:
Map[JodaTime, Iterable[List[(Schedule,GameResult,Team)]]]
Every list is again a pair of two games with the same GameId.