Spark and Scala: Apply a function to each element of an RDD

I have a VertexRDD[(VertexId, Long)] structured as follows:
(533, 1)
(571, 2)
(590, 0)
...
where each element is composed of the vertex id (533, 571, 590, etc.) and its number of outgoing edges (1, 2, 0, etc.).
I want to apply a function to each element of this RDD. This function must perform a comparison between the number of outgoing edges and 4 thresholds.
If the number of outgoing edges is less than or equal to one of the 4 thresholds, then the corresponding vertex id must be inserted into an Array (or another similar data structure), so that at the end I obtain 4 data structures, each containing the ids of the vertices that satisfy the comparison with the corresponding threshold.
I need the ids that satisfy the comparison with the same threshold to be accumulated in the same data structure. How can I parallelize and implement this approach with Spark and Scala?
My code:
val usersGraphQuery = "MATCH (u1:Utente)-[p:PIU_SA_DI]->(u2:Utente) RETURN id(u1), id(u2), type(p)"
val usersGraph = neo.rels(usersGraphQuery).loadGraph[Any, Any]
val numUserGraphNodes = usersGraph.vertices.count
val numUserGraphEdges = usersGraph.edges.count
val maxNumOutDegreeEdgesPerNode = numUserGraphNodes - 1
// get id and number of outgoing edges of each node from the graph
// except those that have 0 outgoing edges (default behavior of the outDegrees API)
var userNodesOutDegreesRdd: VertexRDD[Int] = usersGraph.outDegrees
/* userNodesOutDegreesRdd.foreach(println)
* Now you can see
* (533, 1)
* (571, 2)
*/
// I also get ids of nodes with zero outgoing edges
var fixedGraph: Graph[Any, Any] = usersGraph.outerJoinVertices(userNodesOutDegreesRdd)( (vid: Any, defaultOutDegrees: Any, outDegOpt: Option[Any]) => outDegOpt.getOrElse(0L) )
var completeUserNodesOutDregreesRdd = fixedGraph.vertices
/* completeUserNodesOutDregreesRdd.foreach(println)
* Now you can see
* (533, 1)
* (571, 2)
* (590, 0) <--
*/
// 4 thresholds that identify the 4 clusters of User nodes based on the number of their outgoing edges
var soglia25: Double = (maxNumOutDegreeEdgesPerNode.toDouble/100)*25
var soglia50: Double = (maxNumOutDegreeEdgesPerNode.toDouble/100)*50
var soglia75: Double = (maxNumOutDegreeEdgesPerNode.toDouble/100)*75
var soglia100: Double = maxNumOutDegreeEdgesPerNode
println("soglie: "+soglia25+", "+soglia50+", "+soglia75+", "+soglia100)
// containers of individual clusters
var lowSAUsers = new ListBuffer[(Long, Any)]()
var mediumLowSAUsers = new ListBuffer[(Long, Any)]()
var mediumHighSAUsers = new ListBuffer[(Long, Any)]()
var highSAUsers = new ListBuffer[(Long, Any)]()
// overall container of the 4 clusters
var clustersContainer = new ListBuffer[ (String, ListBuffer[(Long, Any)]) ]()
// I WANT TO PARALLELIZE FROM HERE -----------------------------------------------
// from RDD to Array
var completeUserNodesOutDregreesArray = completeUserNodesOutDregreesRdd.take(numUserGraphNodes.toInt)
// analyze each User node and assign it to the cluster it belongs to
for(i<-0 to numUserGraphNodes.toInt-1) {
// compare the number of outgoing edges (converted to a string)
// with the thresholds to determine which class the corresponding User node belongs to
if( (completeUserNodesOutDregreesArray(i)._2).toString().toLong <= soglia25 ) {
println("ok soglia25 ")
lowSAUsers += completeUserNodesOutDregreesArray(i)
}else if( (completeUserNodesOutDregreesArray(i)._2).toString().toLong <= soglia50 ){
println("ok soglia50 ")
mediumLowSAUsers += completeUserNodesOutDregreesArray(i)
}else if( (completeUserNodesOutDregreesArray(i)._2).toString().toLong <= soglia75 ){
println("ok soglia75 ")
mediumHighSAUsers += completeUserNodesOutDregreesArray(i)
}else if( (completeUserNodesOutDregreesArray(i)._2).toString().toLong <= soglia100 ){
println("ok soglia100 ")
highSAUsers += completeUserNodesOutDregreesArray(i)
}
}
// I put each cluster in the final container
clustersContainer += Tuple2("lowSAUsers", lowSAUsers)
clustersContainer += Tuple2("mediumLowSAUsers", mediumLowSAUsers)
clustersContainer += Tuple2("mediumHighSAUsers", mediumHighSAUsers)
clustersContainer += Tuple2("highSAUsers", highSAUsers)
/* clustersContainer.foreach(println)
* Now you can see
* (lowSAUsers,ListBuffer((590,0)))
* (mediumLowSAUsers,ListBuffer((533,1)))
* (mediumHighSAUsers,ListBuffer())
* (highSAUsers,ListBuffer((571,2)))
*/
// ---------------------------------------------------------------------

How about you create an array of tuples representing the different bins:
val bins = Seq(0, soglia25, soglia50, soglia75, soglia100).sliding(2)
.map(seq => (seq(0), seq(1))).toArray
Then, for each element of your RDD, you find the corresponding bin, make it the key, convert the id to a Seq and reduce by key:
def getBin(bins: Array[(Double, Double)], value: Int): Int = {
  // both bounds inclusive, so a value of 0 (and values equal to a threshold) fall into the lower bin
  bins.indexWhere { case (low, high) => low <= value && value <= high }
}
userNodesOutDegreesRdd.map {
case (id, value) => (getBin(bins, value), Seq(id))
}.reduceByKey(_ ++ _)
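To get back the four named containers from the question, the binned result can then be collected on the driver, since there are only four keys. A minimal sketch, reusing bins and getBin from above (to also include the zero-out-degree vertices you would run the same map over completeUserNodesOutDregreesRdd after converting its Any values back to numbers):
val clusterNames = Array("lowSAUsers", "mediumLowSAUsers", "mediumHighSAUsers", "highSAUsers")
val clustersById = userNodesOutDegreesRdd.map {
  case (id, outDeg) => (getBin(bins, outDeg), Seq(id))
}.reduceByKey(_ ++ _)
 .collectAsMap() // at most 4 keys, so this is safe to bring back to the driver
clusterNames.zipWithIndex.foreach { case (name, bin) =>
  println(s"$name -> ${clustersById.getOrElse(bin, Seq.empty)}")
}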

Related

How to create a DataFrame using a string consisting of key-value pairs?

I'm getting logs from a firewall in CEF format as a string which looks like this:
ABC|XYZ|F123|1.0|DSE|DSE|4|externalId=e705265d0d9e4d4fcb218b cn2=329160 cn1=3053998 dhost=SRV2019 duser=admin msg=Process accessed NTDS fname=ntdsutil.exe filePath=\\Device\\HarddiskVolume2\\Windows\\System32 cs5="C:\\Windows\\system32\\ntdsutil.exe" "ac i ntds" ifm "create full ntdstest3" q q fileHash=80c8b68240a95 dntdom=adminDomain cn3=13311 rt=1610948650000 tactic=Credential Access technique=Credential Dumping objective=Gain Access patternDisposition=Detection. outcome=0
How can I create a DataFrame from this kind of string, where the key-value pairs are separated by =?
My objective is to infer the schema from this string dynamically, i.e. extract the keys from the left side of the = operator and create a schema from them.
What I have been doing currently is pretty lame (IMHO) and not very dynamic in approach, because the number of key-value pairs can change per type of log.
val a: String = "ABC|XYZ|F123|1.0|DSE|DCE|4|externalId=e705265d0d9e4d4fcb218b cn2=329160 cn1=3053998 dhost=SRV2019 duser=admin msg=Process accessed NTDS fname=ntdsutil.exe filePath=\\Device\\HarddiskVolume2\\Windows\\System32 cs5="C:\\Windows\\system32\\ntdsutil.exe" "ac i ntds" ifm "create full ntdstest3" q q fileHash=80c8b68240a95 dntdom=adminDomain cn3=13311 rt=1610948650000 tactic=Credential Access technique=Credential Dumping objective=Gain Access patternDisposition=Detection. outcome=0"
val ttype: String = "DCE"
type parseReturn = (String,String,List[String],Int)
def cefParser(a: String, ttype: String): parseReturn = {
val firstPart = a.split("\\|")
var pD = new ListBuffer[String]()
var listSize: Int = 0
if (firstPart.size == 8 && firstPart(4) == ttype) {
pD += firstPart(0)
pD += firstPart(1)
pD += firstPart(2)
pD += firstPart(3)
pD += firstPart(4)
pD += firstPart(5)
pD += firstPart(6)
val secondPart = parseSecondPart(firstPart(7), ttype)
pD ++= secondPart
listSize = pD.toList.length
(firstPart(2), ttype, pD.toList, listSize)
} else {
val temp: List[String] = List(a)
(firstPart(2), "IRRELEVANT", temp, temp.length)
}
}
The method parseSecondPart is:
def parseSecondPart(m:String, ttype:String): ListBuffer[String] = ttype match {
case auditActivity.ttype=>parseAuditEvent(m)
Another function that just replaces some text in the logs:
def parseAuditEvent(msg: String): ListBuffer[String] = {
val updated_msg = msg.replace("cat=", "metadata_event_type=")
.replace("destinationtranslatedaddress=", "event_user_ip=")
.replace("duser=", "event_user_id=")
.replace("deviceprocessname=", "event_service_name=")
.replace("cn3=", "metadata_offset=")
.replace("outcome=", "event_success=")
.replace("devicecustomdate1=", "event_utc_timestamp=")
.replace("rt=", "metadata_event_creation_time=")
parseEvent(updated_msg)
}
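As an aside, the same renames could be kept in a Map and folded over, which avoids the long replace chain; this is only a sketch, not part of the original code:
val fieldRenames = Map(
  "cat=" -> "metadata_event_type=",
  "destinationtranslatedaddress=" -> "event_user_ip=",
  "duser=" -> "event_user_id=",
  "deviceprocessname=" -> "event_service_name=",
  "cn3=" -> "metadata_offset=",
  "outcome=" -> "event_success=",
  "devicecustomdate1=" -> "event_utc_timestamp=",
  "rt=" -> "metadata_event_creation_time="
)
def parseAuditEvent(msg: String): ListBuffer[String] =
  parseEvent(fieldRenames.foldLeft(msg) { case (m, (from, to)) => m.replace(from, to) })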
Final function to get only the values:
def parseEvent(msg: String): ListBuffer[String] = {
val newMsg = msg.replace("\\=", "$_equal_$")
val pD = new ListBuffer[String]()
val splitData = newMsg.split("=")
val mSize = splitData.size
for (i <- 1 until mSize) {
if(i < mSize-1) {
val a = splitData(i).split(" ")
val b = a.size-1
val c = a.slice(0,b).mkString(" ")
pD += c.replace("$_equal_$","=")
} else if(i == mSize-1) {
val a = splitData(i).replace("$_equal_$","=")
pD += a
} else {
logExceptions(newMsg)
}
}
pD
}
The return value contains a ListBuffer[String] at the 3rd position, from which I create a DataFrame as follows:
val df = ss.sqlContext
.createDataFrame(tempRDD.filter(x => x._1 != "IRRELEVANT")
.map(x => Row.fromSeq(x._3)), schema)
People of Stack Overflow, I really need your help in improving my code, both in performance and approach.
Any kind of help and/or suggestions will be highly appreciated.
Thanks in advance.
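One possible direction, not a full answer but a sketch under strong assumptions (the key=value section is everything after the seventh pipe, no value ever contains a " word=" token of its own, and ss is the SparkSession used in the question), is to pull the pairs out with a look-ahead regex and build the schema from the keys:
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StringType, StructField, StructType}
// Everything after the 7th '|' is the key=value section.
val kvSection = a.split("\\|", 8).last
// A value runs until the next " key=" token (or the end of the string),
// so values containing spaces stay intact.
val kvPattern = "(\\w+)=(.*?)(?=\\s+\\w+=|$)".r
val pairs = kvPattern.findAllMatchIn(kvSection)
  .map(m => m.group(1) -> m.group(2).trim)
  .toSeq
// The schema is derived from the keys, so it adapts to logs with different fields.
val schema = StructType(pairs.map { case (k, _) => StructField(k, StringType, nullable = true) })
val df = ss.createDataFrame(ss.sparkContext.parallelize(Seq(Row.fromSeq(pairs.map(_._2)))), schema)
df.show(truncate = false)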

How to find Sum at Each partition in Spark

I have created a class and used that class to create an RDD. I want to calculate the sum of LoudnessRate (a member of the class) at each partition. This sum will later be used to calculate the mean LoudnessRate at each partition.
I have tried the following code, but it does not calculate the sum and returns 0.0.
My code is
object sparkBAT {
def main(args: Array[String]): Unit = {
val numPartitions = 3
val N = 50
val d = 5
val MinVal = -10
val MaxVal = 10
val conf = new SparkConf().setMaster(locally("local")).setAppName("spark Sum")
val sc = new SparkContext(conf)
val ba = List.fill(N)(new BAT(d, MinVal, MaxVal))
val rdd = sc.parallelize(ba, numPartitions)
var arrSum =Array.fill(numPartitions)(0.0) // Declare Array that will hold sum for each Partition
rdd.mapPartitionsWithIndex((k,iterator) => iterator.map(x => arrSum(k) += x.LoudnessRate)).collect()
arrSum foreach println
}
}
class BAT (dim:Int, min:Double, max:Double) extends Serializable {
val random = new Random()
var position : List[Double] = List.fill(dim) (random.nextDouble() * (max-min)+min )
var velocity :List[Double] = List.fill(dim)( math.random)
var PulseRate : Double = 0.1
var LoudnessRate :Double = 0.95
var frequency :Double = math.random
var fitness :Double = math.random
var BestPosition :List[Double] = List.fill(dim)(math.random)
var BestFitness :Double = math.random
}
Changing my comment to an answer as requested. Original comment:
You are modifying arrSum in the executor JVMs and printing its values in the driver JVM. You can map the iterators to singleton iterators and use collect to move the values to the driver. Also, don't use iterator.map for side effects; iterator.foreach is meant for that.
And here is a sample snippet showing how to do it. First, create an RDD with two partitions, 0 -> 1,2,3 and 1 -> 4,5. Naturally you would not need this in actual code, but since the sc.parallelize behaviour changes depending on the environment, this always creates the same partitioning and keeps the example reproducible:
object DemoPartitioner extends Partitioner {
override def numPartitions: Int = 2
override def getPartition(key: Any): Int = key match {
case num: Int => num
}
}
val rdd = sc
.parallelize(Seq((0, 1), (0, 2), (0, 3), (1, 4), (1, 5)))
.partitionBy(DemoPartitioner)
.map(_._2)
And then the actual trick:
val sumsByPartition = rdd.mapPartitionsWithIndex {
case (partitionNum, it) => Iterator.single(partitionNum -> it.sum)
}.collect().toMap
println(sumsByPartition)
Outputs:
Map(0 -> 6, 1 -> 9)
The problem is that you're using arrSum (a regular collection) that is declared in your driver and updated in the executors. Whenever you do that, you need to use accumulators.
This should help.
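A minimal sketch of that accumulator idea, reusing numPartitions, sc and rdd from the question and assuming Spark 2.x for sc.doubleAccumulator:
import org.apache.spark.TaskContext
// One named DoubleAccumulator per partition, registered on the driver.
val sumAccs = Array.tabulate(numPartitions)(i => sc.doubleAccumulator(s"sumPartition$i"))
// foreachPartition is used for the side effect; the accumulators are updated on the
// executors and their values become visible on the driver once the action completes.
rdd.foreachPartition { it =>
  val idx = TaskContext.getPartitionId()
  it.foreach(bat => sumAccs(idx).add(bat.LoudnessRate))
}
sumAccs.zipWithIndex.foreach { case (acc, i) => println(s"partition $i sum = ${acc.value}") }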

Evaluating multiple filters in a single pass

I have the RDD below, and I need to perform a series of filters on the same dataset to derive different counters and aggregates.
Is there a way I can apply these filters and compute the aggregates in a single pass, avoiding having Spark go over the same dataset multiple times?
val res = df.rdd.map(row => {
// ............... Generate data here for each row.......
})
res.persist(StorageLevel.MEMORY_AND_DISK)
val all = res.count()
val stats1 = res.filter(row => row.getInt(1) > 0)
val stats1Count = stats1.count()
val stats1Agg = stats1.map(r => r.getInt(1)).mean()
val stats2 = res.filter(row => row.getInt(2) > 0)
val stats2Count = stats2.count()
val stats2Agg = stats2.map(r => r.getInt(2)).mean()
You can use aggregate:
case class Stats(count: Int = 0, sum: Int = 0) {
  def mean: Double = sum.toDouble / count
  def +(s: Stats): Stats = Stats(count + s.count, sum + s.sum)
  def add(n: Int): Stats = if (n > 0) Stats(count + 1, sum + n) else this
}
val (stats1, stats2) = res.aggregate((Stats(), Stats()))(
  { case ((s1, s2), row) => (s1.add(row.getInt(1)), s2.add(row.getInt(2))) },
  { case ((a1, a2), (b1, b2)) => (a1 + b1, a2 + b2) }
)
val (stats1Count, stats1Agg, stats2Count, stats2Agg) = (stats1.count, stats1.mean, stats2.count, stats2.mean)
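The aggregate above already makes a single pass over res. If the generated data can stay in DataFrame form instead, a similar single-pass result can be had with conditional aggregation; this is only a sketch, and generated, c1 and c2 are placeholder names for whatever the row-generation step produces:
import org.apache.spark.sql.functions.{avg, col, count, lit, when}
// when(...) without otherwise(...) yields null for non-matching rows,
// and count/avg simply ignore nulls, so one job computes every counter and mean.
val summary = generated.agg(
  count(lit(1)).as("all"),
  count(when(col("c1") > 0, 1)).as("stats1Count"),
  avg(when(col("c1") > 0, col("c1"))).as("stats1Agg"),
  count(when(col("c2") > 0, 1)).as("stats2Count"),
  avg(when(col("c2") > 0, col("c2"))).as("stats2Agg")
)
summary.show()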

Scala: Haar Wavelet Transform

I am trying to implement the Haar Wavelet Transform in Scala. I am using this Python code for reference: Github Link to Python implementation of HWT
I am also giving my Scala version here. I am new to Scala, so forgive me for the not-so-good code.
/**
* Created by vipul vaibhaw on 1/11/2017.
*/
import scala.collection.mutable.{ListBuffer, MutableList,ArrayBuffer}
object HaarWavelet {
def main(args: Array[String]): Unit = {
var samples = ListBuffer(
ListBuffer(1,4),
ListBuffer(6,1),
ListBuffer(0,2,4,6,7,7,7,7),
ListBuffer(1,2,3,4),
ListBuffer(7,5,1,6,3,0,2,4),
ListBuffer(3,2,3,7,5,5,1,1,0,2,5,1,2,0,1,2,0,2,1,0,0,2,1,2,0,2,1,0,0,2,1,2)
)
for (i <- 0 to samples.length){
var ubound = samples(i).max+1
var length = samples(i).length
var deltas1 = encode(samples(i), ubound)
var deltas = deltas1._1
var avg = deltas1._2
println( "Input: %s, boundary = %s, length = %s" format(samples(i), ubound, length))
println( "Haar output:%s, average = %s" format(deltas, avg))
println("Decoded: %s" format(decode(deltas, avg, ubound)))
}
}
def wrap(value:Int, ubound:Int):Int = {
(value+ubound)%ubound
}
def encode(lst1:ListBuffer[Int], ubound:Int):(ListBuffer[Int],Int)={
//var lst = ListBuffer[Int]()
//lst1.foreach(x=>lst+=x)
var lst = lst1
var deltas = new ListBuffer[Int]()
var avg = 0
while (lst.length>=2) {
var avgs = new ListBuffer[Int]()
while (lst.nonEmpty) {
// getting first two element from the list and removing them
val a = lst.head
lst -= 1 // removing index 0 element from the list
val b = lst.head
lst -= 1 // removing index 0 element from the list
if (a<=b) {
avg = (a + b)/2
}
else{
avg = (a+b+ubound)/2
}
var delta = wrap(b-a,ubound)
avgs += avg
deltas += delta
}
lst = avgs
}
(deltas, avg%ubound)
}
def decode(deltas:ListBuffer[Int],avg:Int,ubound:Int):ListBuffer[Int]={
var avgs = ListBuffer[Int](avg)
var l = 1
while(deltas.nonEmpty){
for(i <- 0 to l ){
val delta = deltas.last
deltas -= -1
val avg = avgs.last
avgs -= -1
val a = wrap(math.ceil(avg-delta/2.0).toInt,ubound)
val b = wrap(math.ceil(avg+delta/2.0).toInt,ubound)
}
l*=2
}
avgs
}
def is_pow2(n:Int):Boolean={
(n & -n) == n
}
}
But the code gets stuck at "var deltas1 = encode(samples(i), ubound)" and doesn't produce any output. How can I improve my implementation? Thanks in advance!
Your error is on this line:
lst -= 1 // removing index 0 element from the list.
This doesn't remove index 0 from the list. It removes the element 1 (if it exists). This means that the list never becomes empty. The while-loop while (lst.nonEmpty) will therefore never terminate.
To remove the first element of the list, simply use lst.remove(0).
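For completeness, a corrected encode along those lines might look like this; it is only a sketch that keeps the original structure, with wrap and the imports as in the question. Note that decode has the same kind of problem: deltas -= -1 and avgs -= -1 remove the value -1 rather than the last element (deltas.remove(deltas.length - 1) is what is intended), and the outer loop for (i <- 0 to samples.length) runs one index too far (use samples.indices or 0 until samples.length).
def encode(lst1: ListBuffer[Int], ubound: Int): (ListBuffer[Int], Int) = {
  var lst = lst1
  val deltas = new ListBuffer[Int]()
  var avg = 0
  while (lst.length >= 2) {
    val avgs = new ListBuffer[Int]()
    while (lst.nonEmpty) {
      val a = lst.remove(0) // take and drop the element at index 0
      val b = lst.remove(0) // take and drop the next element (now at index 0)
      avg = if (a <= b) (a + b) / 2 else (a + b + ubound) / 2
      avgs += avg
      deltas += wrap(b - a, ubound)
    }
    lst = avgs
  }
  (deltas, avg % ubound)
}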

Task not serializable Flink

I am trying to do the PageRank basic example in Flink with a little bit of modification (only in reading the input file; everything else is the same). I am getting a Task not serializable error, and below is part of the output:
at org.apache.flink.api.scala.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:179)
at org.apache.flink.api.scala.ClosureCleaner$.clean(ClosureCleaner.scala:171)
Below is my code
object hpdb {
def main(args: Array[String]) {
val env = ExecutionEnvironment.getExecutionEnvironment
val maxIterations = 10000
val DAMPENING_FACTOR: Double = 0.85
val EPSILON: Double = 0.0001
val outpath = "/home/vinoth/bigdata/assign10/pagerank.csv"
val links = env.readCsvFile[Tuple2[Long,Long]]("/home/vinoth/bigdata/assign10/ppi.csv",
fieldDelimiter = "\t", includedFields = Array(1,4)).as('sourceId,'targetId).toDataSet[Link]//source and target
val pages = env.readCsvFile[Tuple1[Long]]("/home/vinoth/bigdata/assign10/ppi.csv",
fieldDelimiter = "\t", includedFields = Array(1)).as('pageId).toDataSet[Id]//Pageid
val noOfPages = pages.count()
val pagesWithRanks = pages.map(p => Page(p.pageId, 1.0 / noOfPages))
val adjacencyLists = links
// initialize lists ._1 is the source id and ._2 is the traget id
.map(e => AdjacencyList(e.sourceId, Array(e.targetId)))
// concatenate lists
.groupBy("sourceId").reduce {
(l1, l2) => AdjacencyList(l1.sourceId, l1.targetIds ++ l2.targetIds)
}
// start iteration
val finalRanks = pagesWithRanks.iterateWithTermination(maxIterations) {
// **//the output shows error here**
currentRanks =>
val newRanks = currentRanks
// distribute ranks to target pages
.join(adjacencyLists).where("pageId").equalTo("sourceId") {
(page, adjacent, out: Collector[Page]) =>
for (targetId <- adjacent.targetIds) {
out.collect(Page(targetId, page.rank / adjacent.targetIds.length))
}
}
// collect ranks and sum them up
.groupBy("pageId").aggregate(SUM, "rank")
// apply dampening factor
//**//the output shows error here**
.map { p =>
Page(p.pageId, (p.rank * DAMPENING_FACTOR) + ((1 - DAMPENING_FACTOR) / pages.count()))
}
// terminate if no rank update was significant
val termination = currentRanks.join(newRanks).where("pageId").equalTo("pageId") {
(current, next, out: Collector[Int]) =>
// check for significant update
if (math.abs(current.rank - next.rank) > EPSILON) out.collect(1)
}
(newRanks, termination)
}
val result = finalRanks
// emit result
result.writeAsCsv(outpath, "\n", " ")
env.execute()
}
}
Any help in the right direction is highly appreciated. Thank you.
The problem is that you reference the DataSet pages from within a MapFunction. This is not possible, since a DataSet is only the logical representation of a data flow and cannot be accessed at runtime.
What you have to do to solve this problem is to compute val pagesCount = pages.count once, outside the map, and refer to this pagesCount variable in your MapFunction.
What pages.count actually does is trigger the execution of the data flow graph so that the number of elements in pages can be counted. The result is then returned to your program.
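In the code from the question that amounts to reusing the noOfPages value that is already computed before the iteration, instead of calling pages.count() inside the map. Only the dampening step changes; a sketch of that map, which slots into the iterateWithTermination block as before:
// apply dampening factor, closing over the plain Long noOfPages instead of the DataSet
.map { p =>
  Page(p.pageId, (p.rank * DAMPENING_FACTOR) + ((1 - DAMPENING_FACTOR) / noOfPages))
}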