How to make Pattern and alert for the following - scala

I have this code which gives a locationID and temp, and I want a pattern that emits an alert whenever temp > TEMPERATURE_THRESHOLD.
I've tried:
val pattern1: Pattern[Event, _] = Pattern.begin[Event]("first")
  .subtype(Event.getClass)
  .where( (evt -> evt.getTemp()) >= TEMPERATURE_THRESHOLD)
  .within(Time.seconds(5))

val patternStream: PatternStream[Event] = CEP.pattern(f, pattern1)

val alerts: DataStream[String] = patternStream.flatSelect(
  (in: Map[String, String], out: Collector[String]) => {
    var first: String = in.get("first")
    if (first >= TEMPERATURE_THRESHOLD) {
      out.collect("Temperature above danger zone")
    }
  }
)
This is the code for which alert is to be made :-
case class Event(locationID: String, temp: Double)

val TEMPERATURE_THRESHOLD: Double = 50.00

val see: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment

val properties = new Properties()
properties.setProperty("zookeeper.connect", "localhost:2181")
properties.setProperty("bootstrap.servers", "localhost:9092")

val src = see.addSource(new FlinkKafkaConsumer010[ObjectNode]("broadcast",
  new JSONKeyValueDeserializationSchema(false), properties))

var ask = src.map { r => r.get("value") }

var data = ask.map { v =>
  val loc = v.get("locationID").asInstanceOf[String]
  val temperature = v.get("temp").asDouble()
  (loc, temperature)
}

// data.print()

var f = data.keyBy(v => v._2)

f.print()

see.execute()
The compiler complains that pattern is overloaded, and the same for flatSelect.

CEP is for detecting patterns in sequences of events, which doesn't really fit this particular problem. Finding events where temp > THRESHOLD doesn't require pattern matching -- a simple filter or flatMap will do the job. E.g., you could have a flatMap that transforms every event whose temp is too high into an alert and ignores all other events, as sketched below.
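A minimal, hedged sketch of that approach, assuming the Kafka source has already been mapped into the Event case class from the question (that mapping step is not shown here) and that the Flink Scala DataStream imports are in scope:
// Turn every over-threshold Event into an alert string.
// Assumes events: DataStream[Event] and the TEMPERATURE_THRESHOLD from the question.
val alerts: DataStream[String] = events
  .filter(_.temp > TEMPERATURE_THRESHOLD)
  .map(e => s"Temperature above danger zone at ${e.locationID}: ${e.temp}")

alerts.print()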
As for your CEP-based solution, I don't understand what you are saying is going wrong with it. But a couple of things look wrong. The within clause won't do anything: since your patterns are only one event long, they don't have any duration. And you are keying the stream by the temperature, which seems odd.

Related

Reading multiple files with akka streams in scala

I'm trying to read multiple files with akka streams and put the result in a list.
I can read one file with no problem. The return type is Future[Seq[String]]. The problem is that processing the sequence inside the Future must go inside an onComplete{}.
I'm trying the following code, but obviously it will not work. The list acc outside of the onComplete is empty, but it holds values inside the onComplete. I understand the problem, but I don't know how to approach this.
// works fine
def readStream(path: String, date: String): Future[Seq[String]] = {
  implicit val system = ActorSystem("Sys")
  val settings = ActorMaterializerSettings(system)
  implicit val materializer = ActorMaterializer(settings)

  val result: Future[Seq[String]] =
    FileIO.fromPath(Paths.get(path + "transactions_" + date + ".data"))
      .via(Framing.delimiter(ByteString("\n"), 256, true))
      .map(_.utf8String)
      .toMat(Sink.seq)(Keep.right)
      .run()

  var aa: List[scala.Array[String]] = Nil
  result.onComplete(x => {
    aa = x.get.map(line => line.split('|')).toList
  })
  result
}
// this won't work
def concatFiles(path: String, date: String, numberOfDays: Int): List[scala.Array[String]] = {
  val formatter = DateTimeFormatter.ofPattern("yyyyMMdd")
  val formattedDate = LocalDate.parse(date, formatter)
  var acc = List[scala.Array[String]]()
  for (a <- 0 to numberOfDays) {
    val date = formattedDate.minusDays(a).toString().replace("-", "")
    val transactions = readStream(path, date)
    var result: List[scala.Array[String]] = Nil
    transactions.onComplete(x => {
      result = x.get.map(line => line.split('|')).toList
      acc = acc ++ result
    })
  }
  acc
}
General Solution
Given an Iterator of Path values, a Source of the file lines can be created by combining FileIO & flatMapConcat:
val lineSourceFromPaths: (() => Iterator[Path]) => Source[String, _] = pathsIterator =>
  Source
    .fromIterator(pathsIterator)
    .flatMapConcat { path =>
      FileIO
        .fromPath(path)
        .via(Framing.delimiter(ByteString("\n"), 256, true))
        .map(_.utf8String)
    }
Application to Question
The reason your List is empty is that the Future values have not completed, and therefore your mutable list is not updated before the function returns it.
Critique of Code in Question
The organization and style of the code within the question suggest several misunderstandings related to akka & Future. I think you are attempting a rather complex workflow without understanding the fundamentals of the tools you are trying to use.
1. You should not create an ActorSystem each time the function is called. There is usually one ActorSystem per application, and it is created only once:
implicit val system = ActorSystem("Sys")
val settings = ActorMaterializerSettings(system)
implicit val materializer = ActorMaterializer(settings)
def readStream(...
2. You should try to avoid mutable collections and instead use an Iterator with the corresponding functionality:
def concatFiles(path: String, date: String, numberOfDays: Int): Source[String, _] = {
  val formattedDate = LocalDate.parse(date, DateTimeFormatter.ofPattern("yyyyMMdd"))

  val pathsIterator: () => Iterator[Path] = () =>
    Iterator
      .range(0, numberOfDays + 1)
      .map(i => formattedDate.minusDays(i))
      .map(_.toString().replace("-", ""))
      .map(dateStr => Paths.get(path + "transactions_" + dateStr + ".data"))

  lineSourceFromPaths(pathsIterator)
}
3. Since you are dealing with Futures, you should not wait for them to complete; instead, change the return type of concatFiles to Future[List[Array[String]]], as sketched below.
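A hedged sketch of what that could look like, assuming an implicit Materializer and ExecutionContext are in scope and reusing lineSourceFromPaths from above (this is a sketch, not a definitive implementation):
def concatFiles(path: String, date: String, numberOfDays: Int)(
    implicit mat: Materializer, ec: ExecutionContext): Future[List[Array[String]]] = {
  val formattedDate = LocalDate.parse(date, DateTimeFormatter.ofPattern("yyyyMMdd"))
  val pathsIterator: () => Iterator[Path] = () =>
    Iterator
      .range(0, numberOfDays + 1)
      .map(i => formattedDate.minusDays(i))
      .map(_.toString().replace("-", ""))
      .map(dateStr => Paths.get(path + "transactions_" + dateStr + ".data"))

  lineSourceFromPaths(pathsIterator)
    .map(_.split('|'))   // split each line into its fields
    .runWith(Sink.seq)   // materialize the whole stream into a Seq
    .map(_.toList)       // adapt to the requested List type
}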

How can I construct a String with the contents of a given DataFrame in Scala

Suppose I have a dataframe. How can I retrieve the contents of that dataframe and represent it as a string?
Say I try to do that with the example code below.
val tvalues: Array[Double] = Array(1.866393526974307, 2.864048126935307, 4.032486069215076, 7.876169953355888, 4.875333799256043, 14.316322626848278)
val pvalues: Array[Double] = Array(0.064020056478447, 0.004808399479386827, 8.914865448939047E-5, 7.489564524121306E-13, 2.8363794106756046E-6, 0.0)

val conf = new SparkConf().setAppName("Simple Application").setMaster("local[2]")
val sc = new SparkContext(conf)
val df = sc.parallelize(tvalues zip pvalues)

val sb = StringBuilder.newBuilder
df.foreach(x => {
  println("x = ", x)
  sb.append(x)
})
println("sb = ", sb)
The output of the code shows the example dataframe has contents:
(x = ,(1.866393526974307,0.064020056478447))
(x = ,(7.876169953355888,7.489564524121306E-13))
(x = ,(2.864048126935307,0.004808399479386827))
(x = ,(4.032486069215076,8.914865448939047E-5))
(x = ,(4.875333799256043,2.8363794106756046E-6))
However, the final stringbuilder contains an empty string.
Any thoughts how to retrieve a String for a given dataframe in Scala?
Many thanks
UPD: as mentioned by @user8371915, the solution below will only work in a single JVM in development (local) mode. In fact, we can't modify broadcast variables as if they were globals. You can use accumulators, but that would be quite inefficient. You can also read an answer about read/write global vars here. Hope it helps.
I think you should read the topic about shared variables in Spark. Link here.
Normally, when a function passed to a Spark operation (such as map or reduce) is executed on a remote cluster node, it works on separate copies of all the variables used in the function. These variables are copied to each machine, and no updates to the variables on the remote machine are propagated back to the driver program. Supporting general, read-write shared variables across tasks would be inefficient. However, Spark does provide two limited types of shared variables for two common usage patterns: broadcast variables and accumulators.
Let's have a look at broadcast variables. I edited your code:
val tvalues: Array[Double] = Array(1.866393526974307, 2.864048126935307, 4.032486069215076, 7.876169953355888, 4.875333799256043, 14.316322626848278)
val pvalues: Array[Double] = Array(0.064020056478447, 0.004808399479386827, 8.914865448939047E-5, 7.489564524121306E-13, 2.8363794106756046E-6, 0.0)

val conf = new SparkConf().setAppName("Simple Application").setMaster("local[2]")
val sc = new SparkContext(conf)
val df = sc.parallelize(tvalues zip pvalues)

val sb = StringBuilder.newBuilder
val broadcastVar = sc.broadcast(sb)

df.foreach(x => {
  println("x = ", x)
  broadcastVar.value.append(x)
})
println("sb = ", broadcastVar.value)
Here I used broadcastVar as a container for a StringBuilder variable sb.
Here is output:
(x = ,(1.866393526974307,0.064020056478447))
(x = ,(2.864048126935307,0.004808399479386827))
(x = ,(4.032486069215076,8.914865448939047E-5))
(x = ,(7.876169953355888,7.489564524121306E-13))
(x = ,(4.875333799256043,2.8363794106756046E-6))
(x = ,(14.316322626848278,0.0))
(sb = ,(7.876169953355888,7.489564524121306E-13)(1.866393526974307,0.064020056478447)(4.875333799256043,2.8363794106756046E-6)(2.864048126935307,0.004808399479386827)(14.316322626848278,0.0)(4.032486069215076,8.914865448939047E-5))
Hope this helps.
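If all you ultimately need is a single String on the driver, a simpler, hedged alternative is to collect the (small) data set to the driver and build the string there, which avoids mutating shared state from executors entirely:
// Collect the RDD to the driver and concatenate; fine for small data sets.
val asString: String = df.collect().mkString(", ")
println(asString)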
Does the output of df.show(false) help? If yes, then this SO answer helps: Is there any way to get the output of Spark's Dataset.show() method as a string?
Thanks everybody for the feedback, which helped me understand this slightly better.
The combination of responses results in the code below. The requirements have changed slightly in that I now represent my df as a list of JSONs. The code below does this without using the broadcast.
class HandleDf(df: DataFrame, limit: Int) extends java.io.Serializable {
  val jsons = df.limit(limit).collect.map(rowToJson(_))

  def rowToJson(r: org.apache.spark.sql.Row): JSONObject = {
    try { JSONObject(r.getValuesMap(r.schema.fieldNames)) }
    catch { case t: Throwable =>
      JSONObject.apply(Map("Row with error" -> t.toString))
    }
  }
}
I use the class like this:
val jsons = new HandleDf(df, 100).jsons
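If a single String is still needed afterwards, a hedged follow-up could simply join the per-row JSON objects (this assumes the JSONObject type used above):
// Join the per-row JSON objects into one JSON-array-like string.
val asString: String = jsons.map(_.toString()).mkString("[", ",", "]")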

Materialize mapWithState stateSnapShots to database for later resume of spark streaming app

I have a Spark Scala streaming app that sessionizes user-generated events coming from Kafka, using mapWithState. I want to mature the setup by making it possible to pause and resume the app in the case of maintenance. I'm already writing Kafka offset information to a database, so when restarting the app I can pick up at the last offset processed. But I also want to keep the state information.
So my goals are to:
1. materialize session information after a key identifying the user times out;
2. materialize a .stateSnapshot() when I gracefully shut down the application, so I can use that data when restarting the app by feeding it as a parameter to StateSpec.
Goal 1 is working; with goal 2 I have issues.
For the sake of completeness, I also describe 1 because I'm always interested in a better solution for it:
1) materializing session info after key time out
Inside my update function for mapWithState, I have:
if (state.isTimingOut()) {
  // if key is timing out.
  val output = (key, stateFilterable(isTimingOut = true
    , start = state.get().start
    , end = state.get().end
    , duration = state.get().duration
  ))
That isTimingOut boolean I then use later on as follows:
streamParsed
  .filter(a => a._2.isTimingOut)
  .foreachRDD(rdd =>
    rdd
      .map(stuff => Model(key = stuff._1,
        start = stuff._2.start,
        duration = stuff._2.duration))
      .saveToCassandra(keyspaceName, tableName)
  )
2) materialize a .stateSnapshot() with graceful shutdown
Materializing the snapshot info doesn't work. Here is what I tried:
// define a class Listener
class Listener(ssc: StreamingContext, state: DStream[(String, stateFilterable)]) extends Runnable {
  def run {
    if (ssc == null)
      System.out.println("The spark context is null")
    else
      System.out.println("The spark context is fine!!!")

    var input = "continue"
    while (!input.equals("D")) {
      input = readLine("Press D to kill: ")
      System.out.println(input + " " + input.equals("D"))
    }

    System.out.println("Accessing snapshot and saving:")
    state.foreachRDD(rdd =>
      rdd
        .map(stuff => Model(key = stuff._1,
          start = stuff._2.start,
          duration = stuff._2.duration))
        .saveToCassandra("some_keyspace", "some_table")
    )

    System.out.println("Stopping context!")
    ssc.stop(true, true)
    System.out.println("We have stopped!")
  }
}
// Inside the app object:
val state = streamParsed.stateSnapshots()
var listener = new Thread(new Listener(ssc, state))
listener.start()
So the full code becomes:
package main.scala.cassandra_sessionizing
import java.text.SimpleDateFormat
import java.util.Calendar
import org.apache.spark.streaming.dstream.{DStream, MapWithStateDStream}
import scala.collection.immutable.Set
import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.streaming._
import org.apache.spark.streaming.Duration
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.types.{StructType, StructField, StringType, DoubleType, LongType, ArrayType, IntegerType}
import _root_.kafka.serializer.StringDecoder
import com.datastax.spark.connector._
import com.datastax.spark.connector.cql.CassandraConnector
case class userAction(datetimestamp: Double
, action_name: String
, user_key: String
, page_id: Integer
)
case class actionTuple(pages: scala.collection.mutable.Set[Int]
, start: Double
, end: Double)
case class stateFilterable(isTimingOut: Boolean
, start: Double
, end: Double
, duration: Int
, pages: Set[Int]
, events: Int
)
case class Model(user_key: String
, start: Double
, duration: Int
, pages: Set[Int]
, events: Int
)
class Listener(ssc: StreamingContext, state: DStream[(String, stateFilterable)]) extends Runnable {
def run {
var input = "continue"
while( !input.equals("D")) {
input = readLine("Press D to kill: ")
System.out.println(input + " " + input.equals("D"))
}
// Accessing snapshot and saving:
state.foreachRDD(rdd =>
rdd
.map(stuff => Model(user_key = stuff._1,
start = stuff._2.start,
duration = stuff._2.duration,
pages = stuff._2.pages,
events = stuff._2.events))
.saveToCassandra("keyspace1", "snapshotstuff")
)
// Stopping context
ssc.stop(true, true)
}
}
object cassandra_sessionizing {
// where we'll store the stuff in Cassandra
val tableName = "sessionized_stuff"
val keyspaceName = "keyspace1"
def main(args: Array[String]): Unit = {
val conf = new SparkConf().setAppName("cassandra-sessionizing")
.set("spark.cassandra.connection.host", "10.10.10.10")
.set("spark.cassandra.auth.username", "keyspace1")
.set("spark.cassandra.auth.password", "blabla")
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._
// setup the cassandra connector and recreate the table we'll use for storing the user session data.
val cc = CassandraConnector(conf)
cc.withSessionDo { session =>
session.execute(s"""DROP TABLE IF EXISTS $keyspaceName.$tableName;""")
session.execute(
s"""CREATE TABLE IF NOT EXISTS $keyspaceName.$tableName (
user_key TEXT
, start DOUBLE
, duration INT
, pages SET<INT>
, events INT
, PRIMARY KEY(user_key, start)) WITH CLUSTERING ORDER BY (start DESC)
;""")
}
// setup the streaming context and make sure we can checkpoint, given we're using mapWithState.
val ssc = new StreamingContext(sc, Seconds(60))
ssc.checkpoint("hdfs:///user/keyspace1/streaming_stuff/")
// Defining the stream connection to Kafka.
val kafkaStream = {
KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc,
Map("metadata.broker.list" -> "kafka1.prod.stuff.com:9092,kafka2.prod.stuff.com:9092"), Set("theTopic"))
}
// this schema definition is needed so the json string coming from Kafka can be parsed into a dataframe using spark read.json.
// if an event does not conform to this structure, it will result in all null values, which are filtered out later.
val struct = StructType(
  StructField("datetimestamp", DoubleType, nullable = true) ::
  StructField("sub_key", StructType(
    StructField("user_key", StringType, nullable = true) ::
    StructField("page_id", IntegerType, nullable = true) ::
    StructField("name", StringType, nullable = true) :: Nil), nullable = true) :: Nil
)
/*
this is the function needed to keep track of an user key's session.
3 options:
1) key already exists, and new values are coming in to be added to the state.
2) key is new, so initialize the state with the incoming value
3) key is timing out, so mark it with a boolean that can be used by filtering later on. Given the boolean, the data can be materialized to cassandra.
*/
def trackStateFunc(batchTime: Time
, key: String
, value: Option[actionTuple]
, state: State[stateFilterable])
: Option[(String, stateFilterable)] = {
// 1 : if key already exists and we have a new value for it
if (state.exists() && value.orNull != null) {
var current_set = state.getOption().get.pages
var current_start = state.getOption().get.start
var current_end = state.getOption().get.end
if (value.get.pages != null) {
current_set ++= value.get.pages
}
current_start = Array(current_start, value.get.start).min // the starting epoch is used to initialize the state, but maybe some earlier events are processed a bit later.
current_end = Array(current_end, value.get.end).max // always update the end time of the session with new events coming in.
val new_event_counter = state.getOption().get.events + 1
val new_output = stateFilterable(isTimingOut = false
, start = current_start
, end = current_end
, duration = (current_end - current_start).toInt
, pages = current_set
, events = new_event_counter)
val output = (key, new_output)
state.update(new_output)
return Some(output)
}
// 2: if key does not exist and we have a new value for it
else if (value.orNull != null) {
var new_set: Set[Int] = Set()
val current_value = value.get.pages
if (current_value != null) {
new_set ++= current_value
}
val event_counter = 1
val current_start = value.get.start
val current_end = value.get.end
val new_output = stateFilterable(isTimingOut = false
, start = current_start
, end = current_end
, duration = (current_end - current_start).toInt
, pages = new_set
, events = event_counter)
val output = (key, new_output)
state.update(new_output)
return Some(output)
}
// 3: if key is timing out
if (state.isTimingOut()) {
val output = (key, stateFilterable(isTimingOut = true
, start = state.get().start
, end = state.get().end
, duration = state.get().duration
, pages = state.get().pages
, events = state.get().events
))
return Some(output)
}
// this part of the function should never be reached.
throw new Error(s"Entered dead end with $key $value")
}
// defining the state specification used later on as a step in the stream pipeline.
val stateSpec = StateSpec.function(trackStateFunc _)
.numPartitions(16)
.timeout(Seconds(4000))
// RDD 1
val streamParsedRaw = kafkaStream
.map { case (k, v: String) => v } // key is empty, so get the value containing the json string.
.transform { rdd =>
val df = sqlContext.read.schema(struct).json(rdd) // apply schema defined above and parse the json into a dataframe,
.selectExpr("datetimestamp"
, "action.name AS action_name"
, "action.user_key"
, "action.page_id"
)
df.as[userAction].rdd // transform dataframe into spark Dataset so we easily cast to the case class userAction.
}
val initialCount = actionTuple(pages = collection.mutable.Set(), start = 0.0, end = 0.0)
val addToCounts = (left: actionTuple, ua: userAction) => {
val current_start = ua.datetimestamp
val current_end = ua.datetimestamp
if (ua.page_id != null) left.pages += ua.page_id
actionTuple(left.pages, current_start, current_end)
}
val sumPartitionCounts = (p1: actionTuple, p2: actionTuple) => {
val current_start = Array(p1.start, p2.start).min
val current_end = Array(p1.end, p2.end).max
actionTuple(p1.pages ++= p2.pages, current_start, current_end)
}
// RDD 2: add the mapWithState part.
val streamParsed = streamParsedRaw
.map(s => (s.user_key, s)) // create key value tuple so we can apply the mapWithState to the user_key.
.transform(rdd => rdd.aggregateByKey(initialCount)(addToCounts, sumPartitionCounts)) // reduce to one row per user key for each batch.
.mapWithState(stateSpec)
// RDD 3: if the app is shutdown, this rdd should be materialized.
val state = streamParsed.stateSnapshots()
state.print(2)
// RDD 4: Crucial: look up sessions timing out, extract the fields that we want to keep and materialize in Cassandra.
streamParsed
.filter(a => a._2.isTimingOut)
.foreachRDD(rdd =>
rdd
.map(stuff => Model(user_key = stuff._1,
start = stuff._2.start,
duration = stuff._2.duration,
pages = stuff._2.pages,
events = stuff._2.events))
.saveToCassandra(keyspaceName, tableName)
)
// add a listener hook that we can use to gracefully shutdown the app and materialize the RDD containing the state snapshots.
var listener = new Thread(new Listener(ssc, state))
listener.start()
ssc.start()
ssc.awaitTermination()
}
}
But when running this (launching the app, waiting several minutes for some state information to build up, and then entering the key 'D'), I get the error below. So I can't do anything 'new' with a DStream after starting the ssc. I hoped to move from a DStream RDD to a regular RDD, quit the streaming context, and wrap up by saving the normal RDD. But I don't know how. Hope someone can help!
Exception in thread "Thread-52" java.lang.IllegalStateException: Adding new inputs, transformations, and output operations after starting a context is not supported
at org.apache.spark.streaming.dstream.DStream.validateAtInit(DStream.scala:222)
at org.apache.spark.streaming.dstream.DStream.<init>(DStream.scala:64)
at org.apache.spark.streaming.dstream.ForEachDStream.<init>(ForEachDStream.scala:34)
at org.apache.spark.streaming.dstream.DStream.org$apache$spark$streaming$dstream$DStream$$foreachRDD(DStream.scala:687)
at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1.apply$mcV$sp(DStream.scala:661)
at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1.apply(DStream.scala:659)
at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1.apply(DStream.scala:659)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.SparkContext.withScope(SparkContext.scala:714)
at org.apache.spark.streaming.StreamingContext.withScope(StreamingContext.scala:260)
at org.apache.spark.streaming.dstream.DStream.foreachRDD(DStream.scala:659)
at main.scala.feaUS.Listener.run(feaUS.scala:119)
at java.lang.Thread.run(Thread.java:745)
There are 2 main changes to the code which should make it work.
1> Use the checkpoint directory to start the Spark streaming context:
val ssc = StreamingContext.getOrCreate(checkpointDirectory,
  () => createContext(checkpointDirectory))
where the createContext method has the logic to create and define the new streams and stores the checkpointed data in checkpointDirectory.
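A hedged sketch of what createContext might look like for this app; the body below is assumed from the question's original main method, not the answerer's actual code:
def createContext(checkpointDirectory: String): StreamingContext = {
  val conf = new SparkConf().setAppName("cassandra-sessionizing")
  val sc = new SparkContext(conf)
  val ssc = new StreamingContext(sc, Seconds(60))
  ssc.checkpoint(checkpointDirectory) // must match the directory passed to getOrCreate

  // ... define the Kafka input, the mapWithState pipeline and all output
  // operations here, exactly once, before returning the context ...

  ssc
}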
2> The sql context needs to be constructed in a slightly different way.
val streamParsedRaw = kafkaStream
  .map { case (k, v: String) => v } // key is empty, so get the value containing the json string.
  .map(s => s.replaceAll("""(\"hotel_id\")\:\"([0-9]+)\"""", "\"hotel_id\":$2")) // some events contain the hotel_id in quotes, making it a string. remove these quotes.
  .transform { rdd =>
    val sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
    import sqlContext.implicits._
    val df = sqlContext.read.schema(struct).json(rdd) // apply schema defined above and parse the json into a dataframe,
      .selectExpr("__created_epoch__ AS created_epoch" // the parsed json dataframe needs a bit of type cleaning and name changing
I feel your pain! While checkpointing is useful, it does not actually work if the code changes, and we change the code frequently!
What we are doing is to save the state, as JSON, every cycle, to HBase. So, if snapshotStream is your stream with the state info, we simply save it, as JSON, to HBase each window. While expensive, it is the only way we can guarantee the state is available upon restart even if the code changes.
Upon startup we load it, deserialize it, and pass it to the stateSpec as the initial RDD, roughly as sketched below.
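A hedged sketch of that save/restore idea applied to this app; saveJsonToHBase, loadJsonFromHBase, toJson and fromJson are hypothetical placeholders for whatever persistence and (de)serialization is actually used:
// Persist the state snapshot as JSON every batch (placeholder HBase helpers).
val snapshotStream = streamParsed.stateSnapshots()
snapshotStream.foreachRDD { rdd =>
  rdd.map { case (key, st) => (key, toJson(st)) } // serialize each state entry
     .foreachPartition(saveJsonToHBase _)         // hypothetical per-partition sink
}

// On startup, rebuild the state and feed it to StateSpec as the initial RDD.
val initialStateRDD = loadJsonFromHBase(sc) // hypothetical loader returning RDD[(String, String)]
  .map { case (key, json) => (key, fromJson[stateFilterable](json)) }

val stateSpec = StateSpec.function(trackStateFunc _)
  .initialState(initialStateRDD)
  .numPartitions(16)
  .timeout(Seconds(4000))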

Task not serializable Flink

I am trying to do the basic PageRank example in Flink with a little bit of modification (only in reading the input file; everything else is the same). I am getting the error "Task not serializable", and below is part of the error output:
at org.apache.flink.api.scala.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:179)
at org.apache.flink.api.scala.ClosureCleaner$.clean(ClosureCleaner.scala:171)
Below is my code
object hpdb {

  def main(args: Array[String]) {
    val env = ExecutionEnvironment.getExecutionEnvironment

    val maxIterations = 10000
    val DAMPENING_FACTOR: Double = 0.85
    val EPSILON: Double = 0.0001
    val outpath = "/home/vinoth/bigdata/assign10/pagerank.csv"

    val links = env.readCsvFile[Tuple2[Long, Long]]("/home/vinoth/bigdata/assign10/ppi.csv",
      fieldDelimiter = "\t", includedFields = Array(1, 4)).as('sourceId, 'targetId).toDataSet[Link] // source and target

    val pages = env.readCsvFile[Tuple1[Long]]("/home/vinoth/bigdata/assign10/ppi.csv",
      fieldDelimiter = "\t", includedFields = Array(1)).as('pageId).toDataSet[Id] // pageId

    val noOfPages = pages.count()

    val pagesWithRanks = pages.map(p => Page(p.pageId, 1.0 / noOfPages))

    val adjacencyLists = links
      // initialize lists: ._1 is the source id and ._2 is the target id
      .map(e => AdjacencyList(e.sourceId, Array(e.targetId)))
      // concatenate lists
      .groupBy("sourceId").reduce {
        (l1, l2) => AdjacencyList(l1.sourceId, l1.targetIds ++ l2.targetIds)
      }

    // start iteration
    val finalRanks = pagesWithRanks.iterateWithTermination(maxIterations) {
      // the output shows an error here
      currentRanks =>
        val newRanks = currentRanks
          // distribute ranks to target pages
          .join(adjacencyLists).where("pageId").equalTo("sourceId") {
            (page, adjacent, out: Collector[Page]) =>
              for (targetId <- adjacent.targetIds) {
                out.collect(Page(targetId, page.rank / adjacent.targetIds.length))
              }
          }
          // collect ranks and sum them up
          .groupBy("pageId").aggregate(SUM, "rank")
          // apply dampening factor
          // the output shows an error here
          .map { p =>
            Page(p.pageId, (p.rank * DAMPENING_FACTOR) + ((1 - DAMPENING_FACTOR) / pages.count()))
          }

        // terminate if no rank update was significant
        val termination = currentRanks.join(newRanks).where("pageId").equalTo("pageId") {
          (current, next, out: Collector[Int]) =>
            // check for significant update
            if (math.abs(current.rank - next.rank) > EPSILON) out.collect(1)
        }

        (newRanks, termination)
    }

    val result = finalRanks

    // emit result
    result.writeAsCsv(outpath, "\n", " ")

    env.execute()
  }
}
Any help in the right direction is highly appreciated. Thank you.
The problem is that you reference the DataSet pages from within a MapFunction. This is not possible, since a DataSet is only the logical representation of a data flow and cannot be accessed at runtime.
What you have to do to solve this problem is to assign the result of pages.count to a variable, val pagesCount = pages.count, and refer to this variable in your MapFunction.
What pages.count actually does is trigger the execution of the data flow graph so that the number of elements in pages can be counted. The result is then returned to your program.
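A hedged sketch of that fix applied to the code in the question: noOfPages already holds the count, so the dampening map inside the iteration just needs to use it instead of calling pages.count() again (summedRanks below is a hypothetical name for the intermediate DataSet):
// Count once on the driver; noOfPages is a plain value, safe to capture in closures.
val noOfPages = pages.count()

// Inside iterateWithTermination, apply the dampening factor using the captured
// value instead of referencing the DataSet pages a second time:
val dampened = summedRanks.map { p =>
  Page(p.pageId, (p.rank * DAMPENING_FACTOR) + ((1 - DAMPENING_FACTOR) / noOfPages))
}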

spark scala get uncommon map elements

I am trying to split my data set into train and test data sets. I first read the file into memory as shown here:
val ratings = sc.textFile(movieLensdataHome + "/ratings.csv").map { line =>
  val fields = line.split(",")
  Rating(fields(0).toInt, fields(1).toInt, fields(2).toDouble)
}
Then I select 80% of those for my training set:
val train = ratings.sample(false,.8,1)
Is there an easy way to get the test set in a distributed way?
I am trying this, but it fails:
val test = ratings.filter(!_.equals(train.map(_)))
val test = ratings.subtract(train)
Take a look here. http://markmail.org/message/qi6srcyka6lcxe7o
Here is the code
def split[T: ClassManifest](data: RDD[T], p: Double, seed: Long = System.currentTimeMillis): (RDD[T], RDD[T]) = {
  val rand = new java.util.Random(seed)
  val partitionSeeds = data.partitions.map(partition => rand.nextLong)
  val temp = data.mapPartitionsWithIndex((index, iter) => {
    val partitionRand = new java.util.Random(partitionSeeds(index))
    iter.map(x => (x, partitionRand.nextDouble))
  })

  (temp.filter(_._2 <= p).map(_._1), temp.filter(_._2 > p).map(_._1))
}
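A hedged usage sketch with the ratings RDD from the question (an 80/20 split):
// Returns two RDDs that together cover the original data set.
val (train, test) = split(ratings, 0.8)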
Instead of using an exclusion method (like filter or subtract), I'd partition the set "by hand" for a more efficient execution:
val probabilisticSegment: (RDD[(Double, Rating)], Double => Boolean) => RDD[Rating] =
  (rdd, prob) => rdd.filter { case (k, v) => prob(k) }.map { case (k, v) => v }

val ranRating = ratings.map(x => (Random.nextDouble(), x)).cache
val train = probabilisticSegment(ranRating, _ < 0.8)
val test = probabilisticSegment(ranRating, _ >= 0.8)
cache saves the intermediate RDD so that the next two operations can be performed from that point on without incurring the execution of the complete lineage.
(*) Note the use of val to define a function instead of def. vals are serializer-friendly