I have a column in a DataFrame called PANCARD.
The PANCARD values are:
DEAS1067S | BEAT1123Z
I want the characters within each PANCARD value to be sorted, like below:
0167ADESS
1123ABETZ
Please help!
I am doing DF.sort($"PANCARD"), but this is not working (it sorts the rows, not the characters within each value).
Actual Values:
DEAS1067S | BEAT1123Z
Expected Values:
0167ADESS | 1123ABETZ
Finally, I wrote a UDF:
val sortedValue: String => String = _.map(_.toInt).sorted.map(_.toChar).mkString("")

import org.apache.spark.sql.functions.{col, udf}

val sortedUdf = udf(sortedValue)
val sortedDf = df.withColumn("PANCARD", sortedUdf(col("PANCARD")))
sortedDf.show()
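Note that for a plain String, .sorted already orders the characters by their character code, so the same transformation can be written more compactly (an equivalent sketch):
val sortedValue: String => String = _.sorted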
You can also try reading the column as a String inside a map over the rows and re-creating a DataFrame:
import org.apache.spark.sql.Row

val result = DF.rdd.map(r => {
  val x = r.getAs[String]("PANCARD")
  val y = x.split(" ").map(_.sorted).mkString(" ")
  Row(r(0), y)
})
val newDF = sqlContext.createDataFrame(result, DF.schema)
I'm basically trying to do something like this, but Spark doesn't recognize it.
val colsToLower: Array[String] = Array("col0", "col1", "col2")
val selectQry: String = colsToLower.map((x: String) => s"""lower(col(\"${x}\")).as(\"${x}\"), """).mkString.dropRight(2)
df
.select(selectQry)
.show(5)
Is there a way to do something like this in spark/scala?
If you need to lowercase the names of your columns, there is a simple way of doing it. Here is one example:
// df must be declared as a var for this reassignment to compile
df.columns.foreach { c =>
  val newColumnName = c.toLowerCase
  df = df.withColumnRenamed(c, newColumnName)
}
This will lowercase the column names and update them in the Spark DataFrame.
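If you prefer to avoid reassigning a var, an equivalent sketch folds over the columns instead:
val renamed = df.columns.foldLeft(df)((acc, c) => acc.withColumnRenamed(c, c.toLowerCase))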
I believe I found a way to build it:
import org.apache.spark.sql.DataFrame

def lowerTextColumns(cols: Array[String])(df: DataFrame): DataFrame = {
  val remainingCols: String = (df.columns diff cols).mkString(", ")
  val lowerCols: String = cols.map((x: String) => s"""lower(${x}) as ${x}, """).mkString.dropRight(2)
  val selectQry: String =
    if (remainingCols.nonEmpty) lowerCols + ", " + remainingCols
    else lowerCols

  df.selectExpr(selectQry.split(","): _*)
}
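A hypothetical usage sketch, assuming a DataFrame df that contains the columns col0, col1 and col2 among others:
val lowered = df.transform(lowerTextColumns(Array("col0", "col1", "col2")))
lowered.show(5)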
I have key-value pairs in a Map[Int, String]. I need to save these values into a Hive table using a Spark DataFrame, but I am getting an error: expected Column, actual Map[Int, String].
Code:
val dbValuePairs = Array(2019,10)
val dbkey = dbValuePairs.map(x => x).zipWithIndex.map(t => (t._2, t._1)).toMap
val dqMetrics = spark.sql("select * from dqMetricsStagingTbl")
.withColumn("Dataset_Name", lit(Dataset_Name))
.withColumn("Key", dbkey)
dqMetrics.createOrReplaceTempView("tempTable")
spark.sql("create table if not exists hivetable AS select * from tempTable")
dqMetrics.write.mode("append").insertInto(hivetable)
Please help! The error is on the withColumn("Key", dbkey) line.
Look at the signature of Spark's withColumn function:
def withColumn(colName: String, col: Column): DataFrame
It takes two arguments: colName as a String and col as a Column.
Your dbkey is of type Map[Int, Int], which is not a Column:
val dbkey: Map[Int, Int] = dbValuePairs.map(x => x).zipWithIndex.map(t => (t._2, t._1)).toMap
If you want to store a Map in your table column, you can use the map function, which takes a sequence of Columns:
// from object org.apache.spark.sql.functions
def map(cols: Column*): Column
So you can convert your dbkey to a Seq[Column] and pass it to the withColumn function:
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.{lit, map}

val dbValuePairs = Array(2019, 10)
val dbkey: Map[Int, Int] = dbValuePairs.zipWithIndex.map(t => (t._2, t._1)).toMap
val dbkeyColumnSeq: Seq[Column] = dbkey.flatMap(t => Seq(lit(t._2), lit(t._1))).toSeq

val dqMetrics = spark.sql("select * from dqMetricsStagingTbl")
  .withColumn("Dataset_Name", lit(""))
  .withColumn("Key", map(dbkeyColumnSeq: _*))
My current DataFrame looks like this:
{"id":"1","inputs":{"values":{"0.2":[1,1],"0.4":[1,1],"0.6":[1,1]}},"id1":[1,2]}
I want to transform it into the DataFrame below:
{"id":"1", "v20":[1,1],"v40":[1,1],"v60":[1,1],"id1":[1,2]}
This means that each key of the 'values' struct (0.2, 0.4 and 0.6) is multiplied by 100, prefixed with the letter 'v', and its array is extracted into a separate column.
What would the code look like to achieve this? I have tried withColumn but couldn't get it to work.
Try the code below; the inline comments explain each step.
import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.StructType

object DynamicCol {

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    val df = spark.read.json("src/main/resources/dyamicCol.json") // Load the JSON file

    val dfTemp = df.select(col("inputs.values").as("values")) // Temp DataFrame for fetching the nested values
    val index = dfTemp.schema.fieldIndex("values")
    val propSchema = dfTemp.schema(index).dataType.asInstanceOf[StructType]

    val dfFinal = propSchema.fields.foldLeft(df)((df, field) => { // Add one column per nested field
      val colNameInt = (field.name.toDouble * 100).toInt
      val colName = s"v$colNameInt"
      df.withColumn(colName, col("inputs.values.`" + field.name + "`")) // Map the nested field to the new column name
    }).drop("inputs") // Drop the extra column

    dfFinal.write.mode(SaveMode.Overwrite).json("src/main/resources/dyamicColOut.json") // Output the JSON file
  }
}
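For the sample record in the question, dfFinal should then contain the id and id1 columns plus v20, v40 and v60, matching the expected output shown above.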
I would split the column-name transformation logic into two parts: one for names that are numeric values, and one for names that stay the same.
def stringDecimalToVNumber(colName: String): String =
  "v" + (colName.toFloat * 100).toInt.toString

and then form a single function that transforms a name according to its case:

val floatRegex = """(\d+\.?\d*)""".r

def transformColumnName(colName: String): String = colName match {
  case floatRegex(v) => stringDecimalToVNumber(v) // it's a float, transform it
  case x => x // keep it as-is
}
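A quick sanity check of the helper, using field names from the question's schema (expected results shown in the comments):
transformColumnName("0.2") // "v20"
transformColumnName("0.6") // "v60"
transformColumnName("id1") // "id1" (unchanged)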
Now that we have the function to transform the column names, let's pick up the schema dynamically:
val flattenDF = df.select("id", "inputs.values.*")

val finalDF = flattenDF
  .schema.names
  .foldLeft(flattenDF)((dfacum, x) => {
    val newName = transformColumnName(x)
    if (newName == x)
      dfacum // the name didn't need to be changed
    else
      dfacum.withColumnRenamed(x, newName)
  })
This will dynamically rename all the columns coming from inputs.values and put them next to id.
How can I convert one input variable into two Lists?
Below is my input variable:
val input="[level:1,var1:name,var2:id][level:1,var1:name1,var2:id1][level:2,var1:add1,var2:city]"
I want my result should be:
val first= List(List("name","name1"),List("add1"))
val second= List(List("id","id1"),List("city"))
First of all, the input is not valid JSON:
val input="[level:1,var1:name,var2:id][level:1,var1:name1,var2:id1][level:2,var1:add1,var2:city]"
You have to turn it into an RDD of valid JSON strings (since you are going to use Apache Spark):
val validJsonRdd = sc.parallelize(Seq(input))
  .flatMap(x => x
    .replace(",", "\",\"")
    .replace(":", "\":\"")
    .replace("[", "{\"")
    .replace("]", "\"}")
    .replace("}{", "}&{")
    .split("&"))
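After those replacements, each element of validJsonRdd is a standalone JSON object; for the first segment of the sample input it looks like this:
{"level":"1","var1":"name","var2":"id"}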
Once you have a valid JSON RDD, you can easily convert it to a DataFrame and then apply your logic:
import org.apache.spark.sql.functions._
val df = spark.read.json(validJsonRdd)
.groupBy("level")
.agg(collect_list("var1").as("var1"), collect_list("var2").as("var2"))
.select(collect_list("var1").as("var1"), collect_list("var2").as("var2"))
You should get the desired output in the DataFrame:
+------------------------------------------------+--------------------------------------------+
|var1 |var2 |
+------------------------------------------------+--------------------------------------------+
|[WrappedArray(name1, name2), WrappedArray(add1)]|[WrappedArray(id1, id2), WrappedArray(city)]|
+------------------------------------------------+--------------------------------------------+
And you can convert the arrays to lists if required.
To get the values exactly as in the question, you can do the following:
val collected = df.collect().map(row => (row(0).asInstanceOf[Seq[Seq[String]]], row(1).asInstanceOf[Seq[Seq[String]]]))
val first = collected(0)._1.map(x => x.toList).toList
//first: List[List[String]] = List(List(name1, name2), List(add1))
val second = collected(0)._2.map(x => x.toList).toList
//second: List[List[String]] = List(List(id1, id2), List(city))
I hope the answer is helpful
reduceByKey is the important function for achieving your required output: it merges the two value lists for each level key.
You can do the following
val input="[level:1,var1:name1,var2:id1][level:1,var1:name2,var2:id2][level:2,var1:add1,var2:city]"
val groupedrdd = sc.parallelize(Seq(input)).flatMap(_.split("]\\[").map(x => {
val values = x.replace("[", "").replace("]", "").split(",").map(y => y.split(":")(1))
(values(0), (List(values(1)), List(values(2))))
})).reduceByKey((x, y) => (x._1 ::: y._1, x._2 ::: y._2))
val first = groupedrdd.map(x => x._2._1).collect().toList
//first: List[List[String]] = List(List(add1), List(name1, name2))
val second = groupedrdd.map(x => x._2._2).collect().toList
//second: List[List[String]] = List(List(city), List(id1, id2))
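Note that reduceByKey does not guarantee the order of the keys, which is why level 2 appears before level 1 in the output above. If you need a deterministic order, you could sort by key before collecting; a minimal sketch:
val ordered = groupedrdd.sortByKey() // optional: enforce key order before collect()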
I am new to Spark and Scala, and I'm stuck on a problem: how do I handle each field of a row differently, based on its field name, and then turn the result into a new RDD?
This is my pseudo code:
val newRdd = df.rdd.map(x=>{
def Random1 => random(1,10000) //pseudo
def Random2 => random(10000,20000) //pseduo
x.schema.map(y=> {
if (y.name == "XXX1")
x.getAs[y.dataType](y.name)) = Random1
else if (y.name == "XXX2")
x.getAs[y.dataType](y.name)) = Random2
else
x.getAs[y.dataType](y.name)) //pseduo,keeper the same
})
})
There are at least two problems above:
in the second map, the "x.getAs" assignment is a syntax error
I don't know how to turn the result into a new RDD
I have been searching the net for a long time, but with no luck. Please help or give some ideas on how to achieve this.
Thanks Ramesh Maharjan, it works now.
import org.apache.spark.sql.functions.udf

def randomString(len: Int): String = {
  val rand = new scala.util.Random(System.nanoTime)
  val sb = new StringBuilder(len)
  val ab = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
  for (i <- 0 until len) {
    sb.append(ab(rand.nextInt(ab.length)))
  }
  sb.toString
}

def testUdf = udf((value: String) => randomString(2))

val df = sqlContext.createDataFrame(Seq((1, "Android"), (2, "iPhone")))
df.withColumn("_2", testUdf(df("_2"))).show()
+---+---+
| _1| _2|
+---+---+
| 1| F3|
| 2| Ag|
+---+---+
If you are intending just to pick out certain fields, "XXX1" and "XXX2", then the simple select function should do the trick:
df.select("XXX1", "XXX2")
and convert that to an RDD.
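A minimal sketch of that, assuming the column names above:
val selectedRdd = df.select("XXX1", "XXX2").rdd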
If you are intending something else, then your x.getAs should look like below:
val random1 = x.getAs[String](y.name) // getAs is a read, not an assignment; give it the expected type
It seems that you are trying to change the values in the columns "XXX1" and "XXX2".
For that, a simple udf function with withColumn should do the trick.
A simple udf function looks like this:
def testUdf = udf((value: String) => {
  // do your logic here; whatever you return becomes the new value for the column you pass in
  value // placeholder: return the transformed value
})
And you can call the udf function as
df.withColumn("XXX1", testUdf(df("XXX1")))
Similarly, you can do the same for "XXX2".
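Putting it together, a sketch that applies a separate hypothetical UDF to each column (testUdf1 and testUdf2 are placeholders for whatever logic each column needs, e.g. the random generators from the question):
val newDf = df
  .withColumn("XXX1", testUdf1(df("XXX1")))
  .withColumn("XXX2", testUdf2(df("XXX2")))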