I have a mutable.MutableList[emp] with the following structure.
case class emp(name: String,id:String,sal: Long,dept: String)
I am generating records based on the above case class in the mutable.MutableList[emp] below.
val list1: mutable.MutableList[emp] = mutable.MutableList(emp("mike", "1", 123, "HR"), emp("mike", "2", 123, "sys"), emp("Lind", "1", 2323, "sys"))
If I have the same name with id 1 and id 2, I need to take only the id 2 record and drop the id 1 record. If id 2 is not present, I have to take id 1.
How do I achieve this? I tried the following approach but the results are not accurate:
0. converted the mutable.MutableList to a DataFrame
1. filtered records with id 1 (id1s_DF)
2. filtered records with id 2 (other_rec_DF)
3. joined the records on name and used leftsemi as the join type.
val join_info_DF = other_rec_DF.join(id1s_DF, id1s_DF("name") =!= other_rec_DF("name"),"leftsemi")
The above join gives all the names which are present in other_rec_DF and not present in id1s_DF.
It looks like I am doing something wrong with the join and not getting the expected results.
Could someone please help me achieve this, either with the MutableList or by converting it into a DataFrame?
Thanks,
Babu
If the size of your data is small enough, you don't need something like Apache Spark for this task.
In plain Scala, the code would look something like this:
import scala.collection.mutable

case class Emp(name: String, id: Int, sal: Long, dept: String)

val list1: mutable.MutableList[Emp] = mutable.MutableList(
  Emp("mike", 1, 123, "HR"),
  Emp("mike", 2, 123, "sys"),
  Emp("Lind", 1, 2323, "sys")
)

val result = list1
  .groupBy(_.name)
  .mapValues(_.sortBy(_.id)(Ordering[Int].reverse).head)
  .values
result.foreach(println)
The output of the above code would be
Emp(Lind,1,2323,sys)
Emp(mike,2,123,sys)
The idea is to group by the key on which you want to de-duplicate, sort each group, and pick the record with the highest id. We then drop the keys and keep only the values.
The above approach would work exactly the same way on Spark as well.
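For reference, a minimal sketch of the same de-duplication with Spark DataFrames, assuming a SparkSession named spark is in scope (a window function is one common way to do it, not the only one):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, row_number}

// Build a DataFrame from the list above, keep the highest id per name.
val df = spark.createDataFrame(list1)
val w = Window.partitionBy("name").orderBy(col("id").desc)
val deduped = df
  .withColumn("rn", row_number().over(w))  // rank records within each name
  .filter(col("rn") === 1)                 // keep only the top-ranked record
  .drop("rn")
deduped.show()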
Related
I am trying to perform a partition + broadcast join in Spark Scala. I have a dictionary that I am broadcasting to all the nodes. The structure of the dictionary is as follows:
{ key: Option[List[String]] } // I created this dictionary using a groupByKey first and then called collectAsMap before broadcasting.
The above dictionary was created using the table whose structure is similar to the table mentioned below.
I have a table that is a pair RDD whose structure is as follows:
Col A | Col B
I am trying to perform a join as follows:
val join_output = table.flatMap {
  case (key, value) => custom_dictionary.value.get(key).map(
    otherValue => otherValue.foreach((value, _))
  )
}
My goal is to get a pair-RDD as output whose contents are (value from the table, value from the list stored in the dictionary).
The code compiles and runs successfully, but when I check the output, I only see "()" being saved. Where am I going wrong?
I did look at some other posts that touch on this matter to some extent, but none of the options worked. I would appreciate some guidance on this issue. Also, if there is a post that addresses exactly this, please let me know.
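For what it's worth, one likely cause is that foreach returns Unit, so every element produced by the map becomes (). A minimal sketch of the kind of change that usually fixes this, reusing table and custom_dictionary from the snippet above (everything else is an assumption about the surrounding code):

// Sketch only: emit one (table value, list element) pair per entry in the broadcast list.
val join_output = table.flatMap {
  case (key, value) =>
    custom_dictionary.value        // Map[K, Option[List[String]]]
      .get(key)                    // Option[Option[List[String]]]
      .flatten                     // Option[List[String]]
      .getOrElse(Nil)              // List[String], empty if the key is absent
      .map(otherValue => (value, otherValue))
}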
I'm new to Scala and Spark, and I have a problem while trying to learn from some toy DataFrames.
I have a dataframe having the following two columns:
Name_Description Grade
Name_Description is an array, and Grade is just a letter. It's Name_Description that I'm having a problem with. I'm trying to change this column when using scala on Spark.
Name_Description is not an array of fixed size. It could be something like
['asdf_ Brandon', 'Ca%abc%rd']
['fthhhhChris', 'Rock', 'is the %abc%man']
The only problems are the following:
1. the first element of the array ALWAYS has 6 garbage characters, so the real content starts at the 7th character.
2. %abc% randomly pops up in elements, so I want to erase it.
Is there any way to achieve those two things in Scala? For instance, I just want
['asdf_ Brandon', 'Ca%abc%rd'], ['fthhhhChris', 'Rock', 'is the %abc%man']
to change to
['Brandon', 'Card'], ['Chris', 'Rock', 'is the man']
What you're trying to do might be hard to achieve using standard Spark functions, but you could define a UDF for that:
import scala.collection.mutable.WrappedArray
import org.apache.spark.sql.functions.udf

val removeGarbage = udf { arr: WrappedArray[String] =>
  // in case the array is empty we need to map over the Option
  arr.headOption
    // drop the first 6 characters from the first element, then remove %abc% from the rest
    .map(head => head.drop(6) +: arr.tail.map(_.replace("%abc%", "")))
    .getOrElse(arr)
}
Then you just need to use this UDF on your Name_Description column:
import spark.implicits._ // for toDF and the $ column syntax

val df = List(
  (1, Array("asdf_ Brandon", "Ca%abc%rd")),
  (2, Array("fthhhhChris", "Rock", "is the %abc%man"))
).toDF("Grade", "Name_Description")

df.withColumn("Name_Description", removeGarbage($"Name_Description")).show(false)
Show prints:
+-----+-------------------------+
|Grade|Name_Description |
+-----+-------------------------+
|1 |[Brandon, Card] |
|2 |[Chris, Rock, is the man]|
+-----+-------------------------+
We are generally encouraged to use Spark SQL functions and to avoid UDFs where we can. Here is a simplified solution that makes use of Spark SQL functions only.
Please find my approach below.
val d = Array((1,Array("asdf_ Brandon","Ca%abc%rd")),(2,Array("fthhhhChris", "Rock", "is the %abc%man")))
val df = spark.sparkContext.parallelize(d).toDF("Grade","Name_Description")
This is how I created the input dataframe.
df.select('Grade,posexplode('Name_Description)).registerTempTable("data")
We explode the array along with the position of each element. I register the DataFrame as a temporary table so that a SQL query can generate the required output.
spark.sql("""select Grade, collect_list(Names) from (select Grade,case when pos=0 then substring(col,7) else replace(col,"%abc%","") end as Names from data) a group by Grade""").show
This query gives the required output. Hope this helps.
I have a List, hdtList, which contains the columns of a Hive table:
forecast_id bigint,period_year bigint,period_num bigint,period_name string,drm_org string,ledger_id bigint,currency_code string,source_system_name string,source_record_type string,gl_source_name string,gl_source_system_name string,year string
I have a List: partition_columns which contains two elements: source_system_name, period_year
Using the List partition_columns, I am trying to match its elements and move the corresponding columns in hdtList to the end of it, as below:
val (pc, notPc) = hdtList.partition(c => partition_columns.contains(c.takeWhile(x => x != ' ')))
But when I print them as: println(notPc.mkString(",") + "," + pc.mkString(","))
I see the output unordered as below:
forecast_id bigint,period_num bigint,period_name string,drm_org string,ledger_id bigint,currency_code string,source_record_type string,gl_source_name string,gl_source_system_name string,year string,period string,period_year bigint,source_system_name string
The column period_year comes first and source_system_name comes last. Is there any way I can produce the data as below, so that the order of the columns in partition_columns is maintained?
forecast_id bigint,period_num bigint,period_name string,drm_org string,ledger_id bigint,currency_code string,source_record_type string,gl_source_name string,gl_source_system_name string,year string,period string,source_system_name string,period_year bigint
I know there is an option to reverse a List, but I'd like to learn whether I can use a collection that maintains insertion order.
It doesn't matter which collection you use; you only use partition_columns to call contains, which doesn't depend on its order, so how could that order be maintained?
But your code does maintain order: it's just hdtList's.
Something like
// get is ugly, but safe here
val pc1 = partition_columns.map(x => pc.find(y => y.startsWith(x)).get)
after your code will give you the desired order, though there's probably a more efficient way to do it.
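For illustration, reusing pc, notPc, and pc1 from above (and mirroring the println in the question), the output could then be assembled as:

// non-partition columns first, then the partition columns
// in the order given by partition_columns
println(notPc.mkString(",") + "," + pc1.mkString(","))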
I am new to Spark. I have two tables in HDFS. One table (table 1) is a tag table, composed of some text, which could be some words or a sentence. Another table (table 2) has a text column. Every row could contain more than one keyword from table 1. My task is to find all the matched keywords from table 1 for the text column in table 2, and to output the keyword list for every row in table 2.
The problem is that I have to iterate over every row in table 2 and over table 1. If I produce a big list for table 1 and use a map function over table 2, I still have to loop over the list inside the map function. And the driver shows a JVM memory limit error, even though the loop is not large (about 10,000 iterations).
myTag is the tag list of table 1.
def ourMap(line: String, myTag: List[String]): String = {
  var ret = line
  val length = myTag.length
  for (i <- 0 to length - 1) {
    if (line.contains(myTag(i)))
      ret = ret.replaceAll(myTag(i), "_")
  }
  ret
}

val matched = result.map(b => ourMap(b, tagList))
Any suggestions for finishing this task, with or without Spark?
Many thanks!
An example is as follows:
table1
row1|Spark
row2|RDD
table2
row1| Spark is a fast and general engine. RDD supports two types of operations.
row2| All transformations in Spark are lazy.
row3| It is for test. I am a sentence.
Expected result :
row1| Spark,RDD
row2| Spark
MAJOR EDIT:
The first table actually may contain sentences and not just simple keywords:
row1| Spark
row2| RDD
row3| two words
row4| I am a sentence
Here you go, considering the data sample that you have provided:
import org.apache.spark.rdd.RDD

val table1: Seq[(String, String)] = Seq(("row1", "Spark"), ("row2", "RDD"), ("row3", "Hashmap"))
val table2: Seq[String] = Seq("row1##Spark is a fast and general engine. RDD supports two types of operations.", "row2##All transformations in Spark are lazy.")
val rdd1: RDD[(String, String)] = sc.parallelize(table1)
val rdd2: RDD[(String, String)] = sc.parallelize(table2).map(_.split("##").toList).map(l => (l.head, l.tail(0))).cache
We'll build an inverted index of the second data table which we will join to the first table:
import spark.implicits._ // for toDF and the $ column syntax
import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.sql.functions.explode

val df1: DataFrame = rdd1.toDF("key", "value")
val df2: DataFrame = rdd2.toDF("key", "text")

val df3: DataFrame = rdd2.flatMap { case (row, text) =>
  text.trim.split("""[^\p{IsAlphabetic}]+""").map(word => (word, row))
}.groupByKey.mapValues(_.toSet.toSeq).toDF("word", "index")

val results: RDD[(String, String)] = df3.join(df1, df1("value") === df3("word"))
  .drop("key").drop("value")
  .withColumn("index", explode($"index"))
  .rdd.map { case r: Row => (r.getAs[String]("index"), r.getAs[String]("word")) }
  .groupByKey.mapValues(i => i.toList.mkString(","))
results.take(2).foreach(println)
// (row1,Spark,RDD)
// (row2,Spark)
MAJOR EDIT:
As mentioned in the comments: the specification of the issue changed. Keywords are no longer simple keywords; they might be sentences. In that case, this approach won't work, since it's a different kind of problem. One way to handle it is to use a Locality-Sensitive Hashing (LSH) algorithm for nearest-neighbour search.
An implementation of this algorithm is available here.
The algorithm and its implementation are unfortunately too long to discuss on SO.
From what I gather from your problem statement, you are trying to tag the data in Table 2 with the keywords that are present in Table 1. For this, instead of loading Table 1 as a list and then pattern-matching every keyword against every row of Table 2, do this:
Load Table 1 as a HashSet.
Traverse Table 2 and, for each word in the text, look it up in the HashSet built above. I assume there are fewer words to look up this way than there are keyword patterns to match per row. Remember, the lookup is now an O(1) operation, whereas pattern matching is not.
Also, in this process, you can filter out words like "is", "are", "when", "if", etc., as they will never be used for tagging. That reduces the number of words you need to look up in the HashSet.
The HashSet can be loaded into memory (10K keywords should not take more than a few MB), and it can be shared across executors through a broadcast variable, as in the sketch below.
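A minimal sketch of this approach, assuming an RDD of keywords for Table 1 and an RDD of (rowId, text) pairs for Table 2 (all of the names below are illustrative, not from the original post):

// Illustrative names: sc is the SparkContext, table1Rdd holds the keywords,
// table2Rdd holds (rowId, text) pairs.
val stopWords = Set("is", "are", "when", "if")
val keywords: Set[String] = table1Rdd.collect().toSet // ~10K keywords fit in driver memory
val keywordsBc = sc.broadcast(keywords)               // shared with every executor

val tagged = table2Rdd.mapValues { text =>
  text.split("""[^\p{IsAlphabetic}]+""")
    .filterNot(w => stopWords.contains(w.toLowerCase)) // drop filler words
    .filter(keywordsBc.value.contains)                 // O(1) lookup per word
    .distinct
    .mkString(",")
}.filter { case (_, tags) => tags.nonEmpty }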
I'm new to Spark Streaming. There's a project using Spark Streaming; the input is a key-value pair string like "productid,price".
The requirement is to process each line as a separate transaction, and to produce an RDD every 1 second.
In each interval I have to calculate the total price for each individual product, like
select productid, sum(price) from T group by productid
My current thinking is that I have to do the following steps:
1) split the whole input on \n: val lineMap = lines.map{x => x.split("\n")}
2) split each line on ",": val recordMap = lineMap.map{x => x.map{y => y.split(",")}}
Now I'm confused about how to make the first column the key and the second column the value, and then use the reduceByKey function to get the total sum.
Please advise.
Thanks
Once you have split each row, you can do something like this:
rowItems.map { case Array(product, price) => product -> price }
This way you obtain a DStream[(String, String)] on which you can apply pair transformations like reduceByKey (don't forget to import the required implicits).
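For completeness, a minimal end-to-end sketch under the assumptions of the question, where lines is the DStream[String] of "productid,price" records (the other names are illustrative):

// Split each record, key by product, convert the price to a number,
// and sum per 1-second batch.
val totals = lines
  .map(_.split(","))                                               // Array(productid, price)
  .map { case Array(product, price) => (product, price.toDouble) }
  .reduceByKey(_ + _)                                              // sum(price) grouped by productid

totals.print()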