spark cogroup/join KeyValueGroupedDataset with Dataset - scala

I have two datasets.
The first has rows with unique keys:
ds1
key val1 val2
1 a 1
2 a 2
3 b 3
4 c 3
In the second, the same key can occur many times:
ds2
key val1 val2
1 x x
1 x g
2 u h
5 i j
I need to join them, but the logic is too complicated for a simple join, so I decided to use cogroup and iterate over the data instead:
val ds1 = df1.as[ds1].groupByKey(_.key)
val ds2 = df2.as[ds2].groupByKey(_.key)

ds2.cogroup(ds1)(
  (k: String, ds2Rows: Iterator[ds2], ds1Rows: Iterator[ds1]) => {
    // some logic
  }
)
The problem is that I don't actually need to group ds1, because I know it holds unique keys, but cogroup won't accept the Dataset otherwise. I know there is fullOuterJoin in the RDD API, but as far as I know it has worse performance:
val rdd1 = df1.as[ds1].rdd.map(x => (x.key, x))
val rdd2 = df2.as[ds2].rdd.groupBy(_.key)
rdd2.fullOuterJoin(rdd1)
Would it actually hurt performance? If so, what alternatives are there?
I'm using Spark 2.2.

In Spark, performance mostly depends on how much data you are processing; Spark is a computational engine, so the better the data is prepared for the executors, the better the performance will be.
Join is meant for simple key-based queries, whereas cogroup groups the two datasets by key. There are different ways of improving performance, but in your case you can keep the two DataFrames separate and do a simple join (if your DataFrames are big enough). That said, cogroup performs the grouping within the same executor, which is why its performance tends to hold up well here.
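As a rough illustration of that plain-join alternative (a sketch only, assuming the per-key logic can be expressed on the joined columns; df1, df2 and the column names key/val1/val2 come from the question, and spark is the active SparkSession):
import spark.implicits._

// Left-join the many-rows dataset against the unique-key dataset. The select
// is purely illustrative, standing in for the question's "some logic".
val joined = df2.as("many")
  .join(df1.as("unique"), $"many.key" === $"unique.key", "left_outer")
  .select($"many.key".as("key"), $"many.val1".as("val1_many"), $"unique.val1".as("val1_unique"))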

Related

Spark, applying filters on DataFrame(or RDD) multiple times without redundant evaluations

I have a Spark DataFrame that needs heavy evaluations for the chaining of the parent RDDs.
val df: DataFrame[(String, Any)] = someMethodCalculatingDF()
val out1 = df.filter(_._1 == "Key1").map(_._2).collect()
val out2 = df.filter(_._1 == "Key2").map(_._2)
out1 is very small (one or two rows per partition) and is collected for further use.
out2 is a DataFrame that will be used to generate another RDD, which is materialized later.
So df will be evaluated twice, which is expensive.
Caching could be a solution, but in my application it won't work, because the data could be really, really big and would overflow memory.
Is there any genius :) who could suggest another way of bypassing the redundant evaluations?
It's actually a scenario that occurs in our cluster on a daily basis, and from our experience the following methodology works best for us.
When we need to use the same calculated DataFrame twice (on different branches), we do as follows:
The calculation phase is heavy and results in a rather small DataFrame -> cache it.
The calculation phase is light but results in a big DataFrame -> let it be calculated twice.
The calculation phase is heavy and results in a big DataFrame -> write it to disk (HDFS or S3) and split the job at that point into two separate batch jobs (see the sketch after this list). That way you don't repeat the heavy calculation and you don't shred your cache (which would probably spill to disk anyway).
The calculation phase is light and results in a small DataFrame -> your life is good and you can go home :).
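As referenced in the third option above, here is a minimal sketch of the write-to-disk split; the path, the Parquet format and the spark session name are placeholders, and someMethodCalculatingDF() stands for the expensive upstream computation from the question:
// Materialize the expensive DataFrame once, outside of Spark's memory cache.
val expensive = someMethodCalculatingDF()
expensive.write.mode("overwrite").parquet("hdfs:///tmp/expensive_df")

// Each downstream batch job then starts from the materialized copy instead of
// re-deriving the whole lineage.
val reloaded = spark.read.parquet("hdfs:///tmp/expensive_df")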
I'm not familiar with the Dataset API, so I will write a solution using the RDD API.
val rdd: RDD[(String, Int)] = ???
//First way
val both: Map[String, Iterable[Int]] = rdd
  .filter(e => e._1 == "Key1" || e._1 == "Key2")
  .groupByKey()
  .collectAsMap()
//Second way
val smallCached = rdd.filter(e => e._1 == "Key1" || e._1 == "Key2").cache()
val out1 = smallCached.filter(_._1 == "Key1").map(_._2).collect()
val out2 = smallCached.filter(_._1 == "Key2").map(_._2).collect()

Vector vs Vectors in spark [duplicate]

This question already has an answer here:
Difference between spark Vectors and scala immutable Vector?
I am a newbie in Spark.
I am trying to read a text file that has data like:
timestamp id counter value
00:01 1 c1 0.5
00:02 5 c3 0.3
00:03 1 c2 0.1
00:04 2 c2 0.13
and transform them to:
(id, array_of_counters):
(1, [ c1  c2  ]
    [ 0.5 0.1 ])
So, for every id, I create a 2D array that holds every counter and every value for that specific id in the text file.
I tried to do it with Vectors, but I think that what is stored in them must be Double, and that I cannot concatenate two vectors unless they are Breeze vectors.
Then I found out there is a data structure called just Vector, but I can't find any details about it.
So, my question is: what are the main differences between Vector and Vectors in MLlib?
Code:
val inputRdd = sc.textFile(inputFile).map(x => x.split(","))
val data = inputRdd.map(y => (y(1), Vector(y(2), y(3)))).reduceByKey(_++_)
I don't think a Vector is necessary or appropriate for what it appears you are trying to do here (I could be wrong; we need more specifics on what you want to accomplish). The only way it makes sense is if there is a fixed number of counters (c1, c2, etc.) for each id. If you simply want, for every id, its corresponding list of counters and values, try this (I'm assuming counters are unique to each id):
import org.apache.spark.sql.functions.{collect_list, map}
import spark.implicits._  // for toDF, $ and Dataset encoders

val data = inputRdd
  .map(y => (y(1).toLong, y(2), y(3).toDouble))
  .toDF("id", "counter", "value")
  .groupBy("id")
  .agg(collect_list(map($"counter", $"value")))
  .as[(Long, Seq[Map[String, Double]])]
  .map(r => (r._1, r._2.reduce(_ ++ _)))
// this results in a Dataset[(Long, Map[String, Double])]
A Spark ml.linalg.Vector is basically an Array[Double] and would require a fixed number of counters for every record. You could transform the data above into a vector by ordering each Map[String, Double] by its keys and creating a Vector from its values.
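If the set of counters really is fixed and identical across records, a minimal sketch of that conversion could look like this (the helper name is made up for illustration):
import org.apache.spark.ml.linalg.{Vector, Vectors}

// Turn one record's Map[String, Double] (counter -> value) into a fixed-length
// ml Vector by sorting on the counter name. Only meaningful if every record
// carries exactly the same set of counters.
def countersToVector(counters: Map[String, Double]): Vector = {
  val orderedValues = counters.toSeq.sortBy(_._1).map(_._2)
  Vectors.dense(orderedValues.toArray)
}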
ml.linalg.Vectors is just a helper object with functions for creating Vector objects.
Factory methods for org.apache.spark.ml.linalg.Vector. We don't use the name Vector because Scala imports scala.collection.immutable.Vector by default.
It's also worth noting that mllib is intended for the older RDD API while ml is intended for the newer Dataframe/Dataset API.
Edit: the same idea with the RDD API, producing an RDD[(Long, Seq[(String, Double)])]:
val data = inputRdd
.map(y => (y(1).toLong, Seq[(String, Double)]((y(2), y(3).toDouble))))
.reduceByKey(_++_)

Performance of BucketedRandomProjectionLSH (org.apache.spark.ml.feature.BucketedRandomProjectionLSH)

Hi, I am using BucketedRandomProjectionLSH (bucket length 2, 3 hash tables) to find similar people in a dataset of ~300,000 records. I am creating a sparse vector of bigrams for each record (1296 dimensions per vector) and doing an approximate-similarity self-join on the dataset, which as I mentioned is not too large.
On a 3-node Spark cluster (master: m3.xlarge, core: 2 × m4.4xlarge), it takes ~7 hours to complete.
The performance is too slow, and I am looking for benchmarks that someone may have created for this algorithm. Additionally, any guidance on how to tune it would be really helpful.
Here is the code snippet for your reference:
val rdd=sc.loadFromMongoDB(ReadConfig(Map("uri" -> "mongodb://localhost:27017/Single.master","readPreference.name" -> "secondaryPreferred")))
val aggregatedRdd = rdd.withPipeline(Seq(Document.parse("{$unwind:'$sources'}"),Document.parse("{$project:{_id:0,id:'$sources._id',val:{$toLower:{$concat:['$sources.first_name','$sources.middle_name','$sources.last_name',{$substr:['$sources.gender',0,1]},'$sources.dob','$sources.address.street','$sources.address.city','$sources.address.state','$sources.address.zip','$sources.phone','$sources.email']}}}}")))
val fDF=aggregatedRdd.map(line=>line.values()).map(ll=>bigramMap(ll.toArray)).toDF("id","idx","keys")
val columnNames = Seq("idx","keys")
val result = fDF.select(columnNames.head, columnNames.tail: _*)
val brp = new BucketedRandomProjectionLSH()
  .setBucketLength(2)
  .setNumHashTables(3)
  .setInputCol("keys")
  .setOutputCol("values")
val model = brp.fit(result)
val outDD = model.approxSimilarityJoin(result, result, 100)
  .filter("datasetA.idx < datasetB.idx")
  .select(col("datasetA.idx").alias("idA"), col("datasetB.idx").alias("idB"), col("distCol"))
I tried BucketedRandomProjectionLSH on 10,000,000 records.
It took about 3 hours.
The only thing I did beforehand was cache the DataFrame:
df.persist()
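For reference, a minimal sketch of how that caching would slot into the code from the question (everything else unchanged; whether it helps depends on how expensive the MongoDB read and feature extraction are):
// Persist the feature DataFrame so it is not recomputed for both sides of the
// approximate self-join.
val cachedResult = result.persist()

val brp = new BucketedRandomProjectionLSH()
  .setBucketLength(2)
  .setNumHashTables(3)
  .setInputCol("keys")
  .setOutputCol("values")

val model = brp.fit(cachedResult)
val outDD = model.approxSimilarityJoin(cachedResult, cachedResult, 100)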

Transforming Spark Dataframe Column

I am working with Spark DataFrames. I have a categorical variable in my DataFrame with many levels. I am attempting a simple transformation of this variable: keep only the top levels that have more than n observations (say, 1000) and club all other levels into an "Others" category.
I am fairly new to Spark, so I have been struggling to implement this. This is what I have been able to achieve so far:
// Extract all levels having > 1000 observations (df is the DataFrame name)
val levels_count = df.groupBy("Col_name").count.filter("count > 1000").sort(desc("count"))
// Extract the level names
val level_names = levels_count.select("Col_name").rdd.map(x => x(0)).collect
This gives me an Array which has the level names that I would like to retain. Next, I should define the transformation function which can be applied to the column. This is where I am getting stuck. I believe we need to create a User defined function. This is what I tried:
// Define the UDF
val var_transform = udf((x: String) => {
  if (level_names contains x) x
  else "others"
})
// Apply the UDF to the column
val df_new = df.withColumn("Var_new", var_transform($"Col_name"))
However, when I try df_new.show it throws a "Task not serializable" exception. What am I doing wrong? Also, is there a better way to do this?
Thanks!
Here is a solution that would, in my opinion, be better for such a simple transformation: stick to the DataFrame API and trust Catalyst and Tungsten to optimise it (e.g. by doing a broadcast join):
val levels_count = df
  .groupBy($"Col_name".as("new_col_name"))
  .count
  .filter("count > 10000")
val df_new = df
  .join(levels_count, $"Col_name" === $"new_col_name", joinType = "leftOuter")
  .drop("Col_name")
  .withColumn("new_col_name", coalesce($"new_col_name", lit("other")))

Can only zip RDDs with same number of elements in each partition despite repartition

I load a dataset
val data = sc.textFile("/home/kybe/Documents/datasets/img.csv",defp)
I want to put an index on this data, so:
val nb = data.count.toInt
val tozip = sc.parallelize(1 to nb).repartition(data.getNumPartitions)
val res = tozip.zip(data)
Unfortunately I get the following error:
Can only zip RDDs with same number of elements in each partition
How can I control the number of elements per partition, if that is possible?
Why doesn't it work?
The documentation for zip() states:
Zips this RDD with another one, returning key-value pairs with the first element in each RDD, second element in each RDD, etc. Assumes that the two RDDs have the same number of partitions and the same number of elements in each partition (e.g. one was made through a map on the other).
So we need to make sure we meet 2 conditions:
both RDDs have the same number of partitions
respective partitions in those RDDs have exactly the same size
You are making sure that you will have the same number of partitions with repartition() but Spark doesn't guarantee that you will have the same distribution in each partition for each RDD.
Why is that?
Because there are different types of RDDs and most of them have different partitioning strategies! For example:
ParallelCollectionRDD is created when you parallelise a collection with sc.parallelize(collection): it looks at how many partitions there should be, checks the size of the collection and calculates the slice size. E.g. if you have 15 elements in the list and want 4 partitions, the list is cut into consecutive slices of roughly equal size, so some partitions get 4 elements and one gets the remaining 3.
HadoopRDD: if I remember correctly, one partition per file block. Even though you are using a local file, Spark internally first creates this kind of RDD when you read it and then maps that RDD, since it is a pair RDD of <Long, Text> and you just want the String :-)
etc., etc.
In your example Spark internally creates different types of RDDs (CoalescedRDD and ShuffledRDD) while doing the repartitioning, but I think you get the general idea that different RDDs have different partitioning strategies :-)
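A small illustration of that point (not from the original question): glom() exposes per-partition contents, so you can see that two RDDs with the same number of partitions may still split their elements differently.
// Both RDDs have 4 partitions, but the per-partition counts can differ: the
// parallelized range is sliced by position, while repartition() shuffles.
val a = sc.parallelize(1 to 15, 4)
val b = sc.parallelize(Seq.fill(15)("x")).repartition(4)

println(a.glom().map(_.length).collect().toList) // slice-based sizes
println(b.glom().map(_.length).collect().toList) // shuffle-based sizes, may differ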
Notice that the last part of the zip() doc mentions the map() operation. map() does not repartition, as it is a narrow transformation, so it preserves both conditions.
Solution
In this simple example, as mentioned, you can simply use data.zipWithIndex; a short sketch follows. If you need something more complicated, then the new RDD for zip() should be created from the original with map(), as mentioned above.
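A minimal sketch of that suggestion applied to the question's RDD (defp is the partition count from the question; zipWithIndex gives 0-based indices, shifted here to match the 1-to-nb range):
// Index the data directly instead of zipping it with a separately built range.
val data = sc.textFile("/home/kybe/Documents/datasets/img.csv", defp)
val res  = data.zipWithIndex().map { case (line, idx) => (idx + 1, line) }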
I solved this by creating an implicit helper like so
import scala.reflect.ClassTag
import org.apache.spark.rdd.{PairRDDFunctions, RDD}

implicit class RichContext[T](rdd: RDD[T]) {
  def zipShuffle[A](other: RDD[A])(implicit kt: ClassTag[T], vt: ClassTag[A]): RDD[(T, A)] = {
    // key both RDDs by their positional index, join on it, then drop the keys
    val otherKeyed: RDD[(Long, A)] = other.zipWithIndex().map { case (n, i) => i -> n }
    val thisKeyed: RDD[(Long, T)] = rdd.zipWithIndex().map { case (n, i) => i -> n }
    new PairRDDFunctions(thisKeyed).join(otherKeyed).map(_._2)
  }
}
Which can then be used like
val rdd1 = sc.parallelize(Seq(1,2,3))
val rdd2 = sc.parallelize(Seq(2,4,6))
val zipped = rdd1.zipShuffle(rdd2) // Seq((1,2),(2,4),(3,6))
NB: Keep in mind that the join will cause a shuffle.
The following question provides a Python answer to this problem by defining a custom_zip method:
Can only zip with RDD which has the same number of partitions error