Transforming RDD[String] to RDD[myclass] - scala

I am trying to transform an RDD[String] into an RDD[Picture] but have not managed to do it. If I could convert the RDD to RDD[Picture], I would use def hasValidCountry to check whether the latitude and longitude values in the picture metadata are valid, and after that check whether the user tags are valid with def hasTags in the Picture class. The problems I encounter:
Implicit conversion found: row ⇒ augmentString(row): scala.collection.immutable.StringOps
type mismatch; found : String required: Array[String]
value InterestingPics is not a member of Array[Nothing] possible cause: maybe a semicolon is missing before `value InterestingPics'?
My intention is to keep only the lines that have a valid country and tags, and to transform those lines into a new RDD[Picture].
ScalaFile1 (I have updated the ScalaFile):
object Part2 {
  def main(args: Array[String]): Unit = {
    var spark: SparkSession = null
    try {
      spark = SparkSession.builder().appName("Flickr using dataframes").config("spark.master", "local[*]").getOrCreate()
      val originalFlickrMeta: RDD[String] = spark.sparkContext.textFile("flickrSample.txt")
      // note: the closing parenthesis of the outer .map( is missing on this line, which triggers the errors quoted above
      val InterestingPics = originalFlickrMeta.map(row => row.split('\t')).map(field => Picture(field(0).toString())
      InterestingPics.collect
      InterestingPics.take(5).foreach(println)

This works, as an example:
case class case_for_rdd(c1: Int, c2: String, c3: String)
val rdd_data = spark.sparkContext.textFile("/FileStore/tables/csv01-4.txt")
val rdd = rdd_data.map(row => row.split(',')).map(field => case_for_rdd(field(0).toInt, field(1), field(2)))
rdd.collect
A more complicated example: reading into an RDD from a file that contains an array field. The array needs its own delimiter.
1,10,100,aa|bb|cc
2,20,200,xxxxxx|yyyyyyyy|z|aaa
Some sample code; use a List for the array field, as otherwise you get to see array addresses when printing (that's what those strange strings are), courtesy of smarter people here:
case class case_for_rdd(c1: Int, c2: String, c3: String, a4: List[String])
val rdd_data = spark.sparkContext.textFile("/FileStore/tables/csv03.txt")
val myCaseRdd = rdd_data.map(row => row.split(',')).map(field => case_for_rdd(field(0).toInt, field(1), field(2), (field(3).split("\\|").toList)))
myCaseRdd.collect
My advice is to use a DataFrame; the splitting is then easier. Also, if you manipulate the RDD with further transformations, the case class typing is lost, whereas arrays with the DataFrame API have no such issue.
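For illustration, a minimal sketch of that DataFrame route on the same csv03.txt sample (the column names c1, c2, c3, a4 are my own placeholders, not from the original post):
import org.apache.spark.sql.functions.{col, split}
// read the CSV, name the columns, then split the fourth column on "|" into an array column
val df = spark.read.option("inferSchema", "true").csv("/FileStore/tables/csv03.txt")
  .toDF("c1", "c2", "c3", "a4_raw")
  .withColumn("a4", split(col("a4_raw"), "\\|"))
  .drop("a4_raw")
df.show(false)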

I have a solution to my question, with the help of #thebluephantom. Thank you very much.
val InterestingPics = originalFlickrMeta.map(line => (new Picture(line.split("\t")))).filter(f => f.c != null && f.userTags.length > 0)
InterestingPics.collect().foreach(println)
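For reference, a Picture class along the following lines would be compatible with that solution; the real class is not shown in the question, so the field layout below is a purely hypothetical sketch:
// hypothetical sketch only: the positions of country and tags in the tab-separated line are assumptions
class Picture(fields: Array[String]) extends Serializable {
  val id: String = fields(0)
  val c: String = if (fields.length > 1 && fields(1).nonEmpty) fields(1) else null // country, assumed
  val userTags: Array[String] = if (fields.length > 2 && fields(2).nonEmpty) fields(2).split(",") else Array.empty[String]
  override def toString: String = s"Picture($id, $c, ${userTags.mkString(", ")})"
}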

Related

how to convert RDD[(String, Any)] to Array(Row)?

I've got an unstructured RDD with keys and values. The values are of type Any and the keys are currently Strings, RDD[String], and mainly contain Maps. I would like to make them of type Row so I can eventually make a DataFrame. Here is my RDD:
removed
Most of the RDD follows a pattern except for the last 4 keys. How should this be dealt with? Perhaps by splitting them into their own RDD, especially for reverseDeltas?
Thanks
Edit
This is what I've tried so far based on the first answer below.
case class MyData(`type`: List[String], libVersion: Double, id: BigInt)
object MyDataBuilder {
  def apply(s: Any): MyData = {
    // read the input data and convert that to the case class
    s match {
      case Array(x: List[String], y: Double, z: BigInt) => MyData(x, y, z)
      case Array(a: BigInt, Array(x: List[String], y: Double, z: BigInt)) => MyData(x, y, z)
      case _ => null
    }
  }
}
val parsedRdd: RDD[MyData] = rdd.map(x => MyDataBuilder(x))
However, it doesn't seem to match any of those cases. How can I match on a Map in Scala? I keep getting nulls back when printing out parsedRdd.
To convert the RDD to a DataFrame you need to have a fixed schema. If you define the schema for the RDD, the rest is simple.
Something like:
val rdd2:RDD[Array[String]] = rdd.map( x => getParsedRow(x))
val rddFinal:RDD[Row] = rdd2.map(x => Row.fromSeq(x))
Alternatively:
case class MyData(....) // all the fields of the Schema I want

object MyDataBuilder {
  def apply(s: Any): MyData = {
    // read the input data and convert that to the case class
  }
}
val rddFinal:RDD[MyData] = rdd.map(x => MyDataBuilder(x))
import spark.implicits._
val myDF = rddFinal.toDF
There is a method for converting an RDD to a DataFrame; use it like below:
val rdd = sc.textFile("/pathtologfile/logfile.txt")
val df = rdd.toDF()
Now you have a DataFrame; do whatever you want on it using SQL-style queries, like below:
val textFile = sc.textFile("hdfs://...")
// Creates a DataFrame having a single column named "line"
val df = textFile.toDF("line")
val errors = df.filter(col("line").like("%ERROR%"))
// Counts all the errors
errors.count()
// Counts errors mentioning MySQL
errors.filter(col("line").like("%MySQL%")).count()
// Fetches the MySQL errors as an array of strings
errors.filter(col("line").like("%MySQL%")).collect()

SPARK SQL : How to convert List[List[Any]] to Data Frame

val list = List(List(1,"Ankita"),List(2,"Kunal"))
and now I want to convert it into the data frame -
val list = List(List(1,"Ankita"),List(2,"Kunal")).toDF("id","name")
but it throws an error -
java.lang.ClassNotFoundException: Scala.any
AFAIK, a List[List[Any]] cannot be converted to a DataFrame directly; it needs to be converted to some typed collection first (here I took Person as an example), i.e. a List[Person]:
case class Person(id: Int, name: String)
val list = List(List(1,"Ankita"),List(2,"Kunal"))
val listDf = list.map(x => Person(x(0).asInstanceOf[Int], x(1).toString)).toDF("id","name")
Another way, per the comment of user8371915, is to create a list of pairs and convert that to a DataFrame:
val listDf = list.map {
  case List(id: Int, name: String) => (id, name)
}.toDF("id", "name")
This is because the inner List can be of arbitrary size, so the implicit type conversion cannot be used. It can be converted directly if you change it to a List of tuples:
val list = List((1,"Ankita"),(2,"Kunal")).toDF("id","name")

Return a tuple of Map and String from UDF

I am trying to construct a temporary column from an expensive UDF that I need to run on each row of my Dataset[Row]. Currently it looks something like:
val myUDF = udf((values: Array[Byte], schema: String) => {
  val list = new MyDecoder(schema).decode(values)
  val myMap = list.map(
    (value: SomeStruct) => (value.field1, value.field2)
  ).toMap
  val field3 = list.head.field3
  return (myMap, field3)
})

val decoded = myDF.withColumn("decoded_tmp", myUDF(col("data"), lit(schema)))
  .withColumn("myMap", col("decoded_tmp._1"))
  .withColumn("field3", col("decoded_tmp._2"))
  .drop("decoded_tmp")
However, when I try to compile this, I get a type mismatch error:
type mismatch;
found : (scala.collection.immutable.Map[String,Double], String)
required: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row]
How can I get around this, or will I have to have 2 expensive UDF functions, one to produce myMap and the other to produce the field3 column?
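For what it's worth, the mismatch most likely comes from the return keyword: inside an anonymous function, return refers to the enclosing method, so the tuple gets checked against that method's Dataset[Row] return type. A minimal sketch of the same UDF with the tuple as the last expression instead (MyDecoder and SomeStruct are the question's own types):
import org.apache.spark.sql.functions.udf
// sketch, reusing the question's MyDecoder and SomeStruct: no `return`, the tuple is the lambda's value
val myUDF = udf((values: Array[Byte], schema: String) => {
  val list = new MyDecoder(schema).decode(values)
  val myMap = list.map((value: SomeStruct) => (value.field1, value.field2)).toMap
  (myMap, list.head.field3)
})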

Create a RDD : too many fields => use case class for RDD

I have a dataset of intrusion which is labeled that I want to use to test different supervised machine learning techniques.
So here is a part of my code :
object parser_dataset {
val conf = new SparkConf()
.setMaster("local[2]")
.setAppName("kdd")
.set("spark.executor.memory", "8g")
conf.registerKryoClasses(Array(
classOf[Array[Any]],
classOf[Array[scala.Tuple3[Int, Int, Int]]],
classOf[String],
classOf[Any]
))
val context = new SparkContext(conf)
def load(file: String): RDD[(Int, String, String,String,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Double,Double,Double,Double,Double,Double,Double, Int, Int,Double, Double, Double, Double, Double, Double, Double, Double, String)] = {
val data = context.textFile(file)
val res = data.map(x => {
val s = x.split(",")
(s(0).toInt, s(1), s(2), s(3), s(4).toInt, s(5).toInt, s(6).toInt, s(7).toInt, s(8).toInt, s(9).toInt, s(10).toInt, s(11).toInt, s(12).toInt, s(13).toInt, s(14).toInt, s(15).toInt, s(16).toInt, s(17).toInt, s(18).toInt, s(19).toInt, s(20).toInt, s(21).toInt, s(22).toInt, s(23).toInt, s(24).toDouble, s(25).toDouble, s(26).toDouble, s(27).toDouble, s(28).toDouble, s(29).toDouble, s(30).toDouble, s(31).toInt, s(32).toInt, s(33).toDouble, s(34).toDouble, s(35).toDouble, s(36).toDouble, s(37).toDouble, s(38).toDouble, s(39).toDouble, s(40).toDouble, s(41))
})
.persist(StorageLevel.MEMORY_AND_DISK)
return res
}
def main(args: Array[String]) {
val data = this.load("/home/hvfd8529/Datasets/KDDCup99/kddcup.data_10_percent_corrected")
data.collect.foreach(println)
data.distinct()
}
}
This is not my code; it was given to me and I just modified some parts (especially the RDD and splitting parts), and I'm a newbie at Scala and Spark :)
EDIT:
So I added case classes above my load function, like this:
case class BasicFeatures(duration:Int, protocol_type:String, service:String, flag:String, src_bytes:Int, dst_bytes:Int, land:Int, wrong_fragment:Int, urgent:Int)
case class ContentFeatures(hot:Int, num_failed_logins:Int, logged_in:Int, num_compromised:Int, root_shell:Int, su_attempted:Int, num_root:Int, num_file_creations:Int, num_shells:Int, num_access_files:Int, num_outbound_cmds:Int, is_host_login:Int, is_guest_login:Int)
case class TrafficFeatures(count:Int, srv_count:Int, serror_rate:Double, srv_error_rate:Double, rerror_rate:Double, srv_rerror_rate:Double, same_srv_rate:Double, diff_srv_rate:Double, srv_diff_host_rate:Double, dst_host_count:Int, dst_host_srv_count:Int, dst_host_same_srv_rate:Double, dst_host_diff_srv_rate:Double, dst_host_same_src_port_rate:Double, dst_host_srv_diff_host_rate:Double, dst_host_serror_rate:Double, dst_host_srv_serror_rate:Double, dst_host_rerror_rate:Double, dst_host_srv_rerror_rate:Double, attack_type:String )
But now I am confused: how can I use these to solve my problem? I still need an RDD in which one feature = one field.
Here is one line of the file I want to parse:
0,tcp,ftp_data,SF,491,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,150,25,0.17,0.03,0.17,0.00,0.00,0.00,0.05,0.00,normal,20
The maximum tuple size supported by Scala is 22, and Scala functions have a limit of 22 parameters. Hence you cannot create a tuple with more than 22 elements.
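Building on the case classes already added in the edit above, one possible workaround is to nest them inside a wrapper so that no single constructor or tuple exceeds 22 entries. A sketch (the Record name and parseLine helper are my own; the column positions follow the question's load function):
// sketch: wrap the three feature case classes so each parameter list stays under the 22 limit
case class Record(basic: BasicFeatures, content: ContentFeatures, traffic: TrafficFeatures)

def parseLine(line: String): Record = {
  val s = line.split(",")
  Record(
    BasicFeatures(s(0).toInt, s(1), s(2), s(3), s(4).toInt, s(5).toInt, s(6).toInt, s(7).toInt, s(8).toInt),
    ContentFeatures(s(9).toInt, s(10).toInt, s(11).toInt, s(12).toInt, s(13).toInt, s(14).toInt, s(15).toInt,
      s(16).toInt, s(17).toInt, s(18).toInt, s(19).toInt, s(20).toInt, s(21).toInt),
    TrafficFeatures(s(22).toInt, s(23).toInt, s(24).toDouble, s(25).toDouble, s(26).toDouble, s(27).toDouble,
      s(28).toDouble, s(29).toDouble, s(30).toDouble, s(31).toInt, s(32).toInt, s(33).toDouble, s(34).toDouble,
      s(35).toDouble, s(36).toDouble, s(37).toDouble, s(38).toDouble, s(39).toDouble, s(40).toDouble, s(41))
  )
}

// e.g. val parsed: RDD[Record] = context.textFile(file).map(parseLine)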

Spark: Using mapPartition with Scala

Let's say I have the following DataFrame:
var randomData = Seq(("a",8),("h",5),("f",3),("a",2),("b",8),("c",3))
val df = sc.parallelize(randomData,2).toDF()
and I have this function, which will be the input for mapPartitions:
def trialIterator(row:Iterator[(String,Int)]): Iterator[(String,Int)] =
row.toArray.tail.toIterator
And using mapPartitions:
df.mapPartitions(trialIterator)
I get the following error message:
Type mismatch, expected: (Iterator[Row]) => Iterator[NotInferedR], actual: (Iterator[(String, Int)]) => Iterator[(String, Int)]
I understand that this is happening due to the input and output types of my function, but how do I solve this?
If you want strongly typed input, don't use Dataset[Row] (DataFrame) but Dataset[T], where T in this particular scenario is (String, Int). Also don't convert to an Array and don't blindly call tail without knowing whether the partition is empty:
def trialIterator(iter: Iterator[(String, Int)]) = iter.drop(1)
randomData
.toDS // org.apache.spark.sql.Dataset[(String, Int)]
.mapPartitions(trialIterator _)
or
randomData.toDF // org.apache.spark.sql.Dataset[Row]
.as[(String, Int)] // org.apache.spark.sql.Dataset[(String, Int)]
.mapPartitions(trialIterator _)
You are expecting type Iterator[(String, Int)] while you should expect Iterator[Row]:
def trialIterator(row:Iterator[Row]): Iterator[(String,Int)] = {
row.next()
row //seems to do the same thing w/o all the conversions
}