get multiple columns within a map: rdd - scala

I have a DataFrame that I'm explicitly converting into an RDD and trying to fetch each column's value, but I'm not able to fetch all of them within a map. Below is what I've tried:
val df = sql("Select col1, col2, col3, col4, col5 from tableName").rdd
The resulting df is then of type org.apache.spark.rdd.RDD[org.apache.spark.sql.Row].
Now I'm trying to access each element of this RDD via:
val dfrdd = df.map{x => x.get(0); x.getAs[String](1); x.get(3)}
The issue is that the above statement returns only the data from the last expression in the map, i.e. the data from x.get(3). Can someone let me know what I'm doing wrong?

The last expression in the map body is what gets returned; in your case that is x.get(3).
To return multiple values, return a tuple instead:
val dfrdd = df.map{x => (x.get(0), x.getAs[String](1), x.get(3))}
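If you also want concrete types for the other positions, here is a small sketch (assuming, purely for illustration, that col1 and col4 are also strings; adjust getAs[T] to the real column types):
import org.apache.spark.rdd.RDD
// df is the RDD[Row] from the question
val typed: RDD[(String, String, String)] =
  df.map(x => (x.getAs[String](0), x.getAs[String](1), x.getAs[String](3)))
typed.take(5).foreach(println)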
Hope this helped!

Related

Storing Spark DataFrame Value In scala Variable

I need to check for a duplicate filename in my table, and if the file count is 0 I need to load the file into my table using Spark SQL. I wrote the code below.
val s1=spark.sql("select count(filename) from mytable where filename='myfile.csv'") //giving '2'
s1: org.apache.spark.sql.DataFrame = [count(filename): bigint]
s1.show //giving 2 as output
//s1 gives me the file count from my table; I then need to compare this count value in an if statement.
I'm using below code.
val s2=s1.count //not working always giving 1
val s2=s1.head.count() // error: value count is not a member of org.apache.spark.sql.Row
val s2=s1.size //value size is not a member of Unit
if(s1>0){ //code } //value > is not a member of org.apache.spark.sql.DataFrame
Can someone please give me a hint on how to do this? How can I get the value out of the DataFrame and use it as a variable to check the condition?
i.e.
if(value of s1(i.e.2)>0){
//my code
}
You need to extract the value itself; count returns the number of rows in the DataFrame, which here is just one row.
So you can keep your original query and extract the value afterwards with the first and getLong methods (the count column is a bigint):
val s1 = spark.sql("select count(filename) from mytable where filename='myfile.csv'")
val valueToCompare = s1.first().getLong(0)
And then:
if(valueToCompare>0){
//my code
}
Another option is performing the count outside the query; then count will give you the desired value:
val s1 = spark.sql("select filename from mytable where filename='myfile.csv'")
if(s1.count>0){
//my code
}
I like the second option best, but for no reason other than that I think it is clearer.
spark.sql("select count(filename) from mytable where filename='myfile.csv'") returns a dataframe and you need to extract both the first row and the first column of that row. It is much simpler to directly filter the dataset and count the number of rows in Scala:
val s1 = df.filter($"filename" === "myfile.csv").count
if (s1 > 0) {
...
}
where df is the dataset that corresponds to the mytable table.
If you got the table from some other source and not by registering a view, use SparkSession.table() to get a dataframe using the instance of SparkSession that you already have. For example, in Spark shell the pre-set variable spark holds the session and you'll do:
val df = spark.table("mytable")
val s1 = df.filter($"filename" === "myfile.csv").count

Transform rows to multiple rows in Spark Scala

I have a problem where I need to transform one row to multiple rows. This is based on a different mapping that I have. I have tried to provide an example below.
Suppose I have a parquet file with the below schema
ColA, ColB, ColC, Size, User
I need to aggregate the above data into multiple rows based on a lookup map. Suppose I have a static map
ColA, ColB, Sum(Size)
ColB, ColC, Distinct (User)
ColA, ColC, Sum(Size)
This means that one row in the input RDD needs to be transformed into 3 aggregates. I believe an RDD with flatMapToPair is the way to go, but I am not sure how to go about this.
I am also OK with concatenating the columns into one key, something like ColA_ColB, etc.
For creating multiple aggregates from the same data, I have started with something like this
val keyData: PairFunction[Row, String, Long] = new PairFunction[Row, String, Long]() {
override def call(x: Row) = {
(x.getString(1),x.getLong(5))
}
}
val ip15M = spark.read.parquet("a.parquet").toJavaRDD
val pairs = ip15M.mapToPair(keyData)
java.util.List[(String, Long)] = [(ios,22), (ios,23), (ios,10), (ios,37), (ios,26), (web,52), (web,1)]
I believe I need to use flatMapToPair instead of mapToPair. Along similar lines, I tried:
val FlatMapData: PairFlatMapFunction[Row, String, Long] = new PairFlatMapFunction[Row, String, Long]() {
override def call(x: Row) = {
(x.getString(1),x.getLong(5))
}
}
but it gives this error:
Expression of type (String, Long) doesn't conform to expected type util.Iterator[(String, Long)]
Any help is appreciated. Please let me know if I need to add any more details.
Should the outcome have only 3 columns, i.e. col1, col2, col3 (the aggregation outcome)? And is the second aggregate a distinct count of users? (I assume yes.)
If so, you can basically create 3 data frames and then union them.
Something in the way of the following (assuming the parquet data has been registered as a temp view, here called myTable):
val df1 = spark.sql("select colA as col1, colB as col2, sum(Size) as colAgg from myTable group by colA, colB")
val df2 = spark.sql("select colB as col1, colC as col2, count(distinct User) as colAgg from myTable group by colB, colC")
val df3 = spark.sql("select colA as col1, colC as col2, sum(Size) as colAgg from myTable group by colA, colC")
df1.union(df2).union(df3)
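The same idea can be written with the DataFrame API instead of SQL, which avoids registering a view. A rough sketch, assuming df is the DataFrame read from the parquet file and following the question's mapping (sum of Size twice, distinct count of User once):
import org.apache.spark.sql.functions.{sum, countDistinct}
val df = spark.read.parquet("a.parquet")
val agg1 = df.groupBy("ColA", "ColB").agg(sum("Size").as("colAgg"))
  .withColumnRenamed("ColA", "col1").withColumnRenamed("ColB", "col2")
val agg2 = df.groupBy("ColB", "ColC").agg(countDistinct("User").as("colAgg"))
  .withColumnRenamed("ColB", "col1").withColumnRenamed("ColC", "col2")
val agg3 = df.groupBy("ColA", "ColC").agg(sum("Size").as("colAgg"))
  .withColumnRenamed("ColA", "col1").withColumnRenamed("ColC", "col2")
val result = agg1.union(agg2).union(agg3)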

Dataframe : GroupBy by list of column names [duplicate]

This question already has an answer here:
Scala-Spark Dynamically call groupby and agg with parameter values
I have a Dataframe with multiple columns and a List of column names.
I want to process my Dataframe by grouping it according to my list.
Here is an example of what I am trying to do:
val tagList = List("col1","col3","col5")
var tagsForGroupBy = tagList(0)
if(tagList.length>1){
for(i <- 1 to tagList.length-1){
tagsForGroupBy = tagsForGroupBy+","+tags(i)
}
}
// df is a Dataframe with schema (col0, col1, col2, col3, col4, col5)
df.groupBy("col0",tagsForGroupBy)
I understand why it does not work, but I don't know how to make it work.
What is the best solution to do that ?
EDIT:
Here is a more complete example of what I am doing (including SCouto's solution):
I have a tagList that contains some column names ("col3", "col5"). I also want to include "col0" and "col1" in my groupBy, independently of my list.
After my groupBy and my aggregations, I want to select all columns used for group By and the new columns from aggregation.
val tagList = List("col3","col5")
val tmpListForGroup = new ListBuffer[String]()
val tmpListForSelect = new ListBuffer[String]()
tmpListForGroup +=tagList (0)
tmpListForSelect +=tagList (0)
for(i <- 1 to tagList .length-1){
tmpListForGroup +=(tagList (i))
tmpListForSelect +=(tagList (i))
}
tmpListForGroup +="col0"
tmpListForGroup +="col1"
tmpListForSelect +="aggValue1"
tmpListForSelect +="aggValue2"
// df is a Dataframe with schema (col0, col1, col2, col3, col4, col5)
df.groupBy(tmpListForGroup.head,tmpListForGroup.tail:_*)
.agg(
[aggFunction].as("aggValue1"),
[aggFunction].as("aggValue1"))
)
.select(tmpListForSelect .head,tmpListForSelect .tail:_*)
This code does exactly what I want, but it looks very ugly and complicated for something that (I think) should be simple.
Is there another solution for that ?
When you pass column names as Strings, groupBy takes one column name as its first parameter and a sequence of further names as its second:
def groupBy(col1: String, cols: String*)
So you need to send two arguments and convert the second one to a sequence. Keep your column names as a list (the original tagList) instead of joining them into a comma-separated string, and this will work fine for you:
df.groupBy(tagList.head, tagList.tail: _*)
Or, if you want to keep col0 separate from the list, as in your example:
df.groupBy("col0", tagList: _*)

value head is not a member of org.apache.spark.sql.Row

I am executing the Twitter sample code and I am getting the error "value head is not a member of org.apache.spark.sql.Row". Can someone please explain this error in a little more detail?
val tweets = sc.textFile(tweetInput)
println("------------Sample JSON Tweets-------")
for (tweet <- tweets.take(5)) {
println(gson.toJson(jsonParser.parse(tweet)))
}
val tweetTable = sqlContext.jsonFile(tweetInput).cache()
tweetTable.registerTempTable("tweetTable")
println("------Tweet table Schema---")
tweetTable.printSchema()
println("----Sample Tweet Text-----")
sqlContext.sql("SELECT text FROM tweetTable LIMIT 10").collect().foreach(println)
println("------Sample Lang, Name, text---")
sqlContext.sql("SELECT user.lang, user.name, text FROM tweetTable LIMIT 1000").collect().foreach(println)
println("------Total count by languages Lang, count(*)---")
sqlContext.sql("SELECT user.lang, COUNT(*) as cnt FROM tweetTable GROUP BY user.lang ORDER BY cnt DESC LIMIT 25").collect.foreach(println)
println("--- Training the model and persist it")
val texts = sqlContext.sql("SELECT text from tweetTable").map(_.head.toString)
// Cache the vectors RDD since it will be used for all the KMeans iterations.
val vectors = texts.map(Utils.featurize).cache()
I think your problem is that the sql method returns a Dataset of Rows. Therefore the _ represents a Row, and Row doesn't have a head method (which explains the error message).
To access items in a Row you can do one of the following:
// get the first element in the Row
val texts = sqlContext.sql("...").map(_.get(0))
// get the first element as an Int
val texts = sqlContext.sql("...").map(_.getInt(0))
See here for more info: https://spark.apache.org/docs/2.1.0/api/java/org/apache/spark/sql/Row.html
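For the text column in the code above, getString is probably the closest replacement for _.head.toString. A sketch (going through .rdd so the result is a plain RDD[String] that can feed the featurize step):
val texts = sqlContext.sql("SELECT text FROM tweetTable").rdd.map(_.getString(0))
val vectors = texts.map(Utils.featurize).cache()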

How to insert record into a dataframe in spark

I have a dataframe (df1) which has 50 columns; the first one is cust_id and the rest are features. I also have another dataframe (df2) which contains only cust_id. I'd like to add one record per customer in df2 to df1 with all the features set to 0. But as the two dataframes have different schemas, I cannot do a union. What is the best way to do that?
I used a full outer join, but it generates two cust_id columns and I need one. I should somehow merge these two cust_id columns, but I don't know how.
You can try to achieve something like that by doing a full outer join like the following:
val result = df1.join(df2, Seq("cust_id"), "full_outer")
However, the features are going to be null instead of 0. If you really need them to be zero, one way to do it would be:
import org.apache.spark.sql.functions.lit

val features = df1.columns.filterNot(_ == "cust_id") // all feature columns, in df1's order
val newDF = features.foldLeft(df2)(
  (df, colName) => df.withColumn(colName, lit(0))
)
df1.union(newDF) // union matches columns by position, so keeping df1's order matters
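A self-contained sketch of that approach with made-up column names and toy values, just to show the mechanics:
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.lit

val spark = SparkSession.builder().master("local[*]").appName("pad-features").getOrCreate()
import spark.implicits._

// Hypothetical stand-ins for df1 (cust_id plus features) and df2 (new cust_ids only)
val df1 = Seq((1, 10, 20), (2, 30, 40)).toDF("cust_id", "feat1", "feat2")
val df2 = Seq(3, 4).toDF("cust_id")

// Add every feature column of df1 to df2 as a literal 0, keeping df1's column order
val features = df1.columns.filterNot(_ == "cust_id")
val padded = features.foldLeft(df2)((df, c) => df.withColumn(c, lit(0)))

df1.union(padded).show() // 4 rows: the two originals plus (3, 0, 0) and (4, 0, 0)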