Create another DataFrame from an existing DataFrame with alias values in Spark SQL - Scala

I am using Spark 1.6 with Scala.
I have created a DataFrame which looks like the one below.
DATA
SKU, MAKE, MODEL, GROUP, SUBCLS, IDENT
IM, AN4032X, ADH3M032, RM, 1011, 0
IM, A3M4936, MP3M4936, RM, 1011, 0
IM, AK116BC, 3M4936P, 05, ABC, 0
IM, A-116-B, 16ECAPS, RM, 1011, 0
I am doing data validation and want to capture, in a new DataFrame, any record that violates the rules.
Rule:
Column “GROUP” must be character
Column “SUBCLS” must be NUMERIC
Column “IDENT” must be 0
The new DataFrame will look like:
AUDIT TABLE
SKU MAKE AUDIT_SKU AUDIT_MAKE AUDIT_MODEL AUDIT_GRP AUDIT_SUBCLS Audit_IDENT
IM, A-K12216BC, N, N, N, Y, Y, N
Y represents a rule violation and N represents a rule pass.
I have validated the rules using isNull or a regex, e.g. checking column GROUP using
regex: df.where($"GROUP".rlike("^[A-Za-z]+$")).show
Could someone please help me do this in Spark SQL? Is it possible to create a DataFrame for the above scenario?
Thanks
Thanks

You can use rlike with |:
df.withColumn("Group1", when($"GROUP".rlike("^[\\d+]|[A-Za-z]\\d+"), "Y").otherwise("N"))
  .withColumn("SUBCLS1", when($"SUBCLS".rlike("^[0-9]"), "N").otherwise("Y"))
  .withColumn("IDENT1", when($"IDENT" === "0", "N").otherwise("Y"))
  .show()
+---+-------+--------+-----+------+-----+------+-------+------+
|SKU|   MAKE|   MODEL|GROUP|SUBCLS|IDENT|Group1|SUBCLS1|IDENT1|
+---+-------+--------+-----+------+-----+------+-------+------+
| IM|AN4032X|ADH3M032|   RM|  1011|    0|     N|      N|     N|
| IM|A3M4936|MP3M4936|  1RM|  1011|    0|     Y|      N|     N|
| IM|AK116BC| 3M4936P|   05|   ABC|    0|     Y|      Y|     N|
| IM|A-116-B| 16ECAPS|  RM1|  1011|    0|     Y|      N|     N|
+---+-------+--------+-----+------+-----+------+-------+------+
I have added a version-1 copy of each column (Group1, SUBCLS1, IDENT1) for understanding purposes only; you can overwrite the original columns instead.
Let me know if you need any more help on this.
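If you want the output shaped like the AUDIT TABLE in the question (with AUDIT_GRP, AUDIT_SUBCLS and AUDIT_IDENT flags), a minimal sketch could look like the following. It assumes the rules from the question (GROUP alphabetic, SUBCLS numeric, IDENT equal to 0) and that "Y" marks a violation; the remaining AUDIT_* columns would follow the same pattern.
import org.apache.spark.sql.functions.when

// Sketch only: "Y" = rule violated, "N" = rule passed
val audit = df.select(
  $"SKU", $"MAKE",
  when($"GROUP".rlike("^[A-Za-z]+$"), "N").otherwise("Y").as("AUDIT_GRP"),    // must be character
  when($"SUBCLS".rlike("^[0-9]+$"), "N").otherwise("Y").as("AUDIT_SUBCLS"),   // must be numeric
  when($"IDENT" === "0", "N").otherwise("Y").as("AUDIT_IDENT")                // must be 0
)

// keep only the records that violate at least one rule
audit.where($"AUDIT_GRP" === "Y" || $"AUDIT_SUBCLS" === "Y" || $"AUDIT_IDENT" === "Y").show()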

Related

Spark Scala - Bitwise operation in filter

I have a source dataset aggregated with columns col1 and col2. The col2 values are aggregated by a bitwise OR operation. I need to apply a filter on the col2 values to select records whose bits are on for 8, 4 or 2.
Initial source raw data:
val SourceRawData = Seq(
  ("Flag1", 2), ("Flag1", 4), ("Flag1", 8),
  ("Flag2", 8), ("Flag2", 16), ("Flag2", 32),
  ("Flag3", 2), ("Flag4", 32),
  ("Flag5", 2), ("Flag5", 8)
).toDF("col1", "col2")
SourceRawData.show()
+-----+----+
| col1|col2|
+-----+----+
|Flag1| 2|
|Flag1| 4|
|Flag1| 8|
|Flag2| 8|
|Flag2| 16|
|Flag2| 32|
|Flag3| 2|
|Flag4| 32|
|Flag5| 2|
|Flag5| 8|
+-----+----+
Aggregated source data based on 'SourceRawData' above, after collapsing the col1 values to a single row per col1 value (this is provided by another team); the col2 values are aggregated by a bitwise OR operation. Note that here I am providing the output, not the actual aggregation logic.
val AggregatedSourceData = Seq(("Flag1", 14L),
("Flag2", 56L),("Flag3", 2L), ("Flag4", 32L), ("Flag5", 10L)).toDF("col1", "col2")
AggregatedSourceData.show()
+-----+----+
| col1|col2|
+-----+----+
|Flag1| 14|
|Flag2| 56|
|Flag3| 2|
|Flag4| 32|
|Flag5| 10|
+-----+----+
Now I need to apply a filter on the aggregated dataset above to get the rows whose col2 values have any of the 8, 4 or 2 bits on, as per the original source raw data values.
Expected output:
+-----+----+
|Col1 |Col2|
+-----+----+
|Flag1|14 |
|Flag2|56 |
|Flag3|2 |
|Flag5|10 |
+-----+----+
I tried something like the code below and it seems to give the expected output, but I am unable to understand how it works. Is this the correct approach? If so, how does it work? (I am not that knowledgeable in bitwise operations, so I am looking for expert help to understand it, please.)
var myfilter: Long = 2 | 4 | 8
AggregatedSourceData.filter($"col2".bitwiseAND(myfilter) =!= 0).show()
+-----+----+
| col1|col2|
+-----+----+
|Flag1| 14|
|Flag2| 56|
|Flag3| 2|
|Flag5| 10|
+-----+----+
I think you do not need to use bitwiseAND to filter; instead, just use contains/in with "a set of the decimal representations of the bit numbers you want", or == to "the decimal representation of a bit number you want".
Also, if you try your existing calculations without Scala or Spark, you will see where you understood things wrong, e.g. use:
https://www.rapidtables.com/calc/math/binary-calculator.html
You will find you defined your filter "wrong":
18 & 18 is 18
18 | 2 is 18
In your dataset the flag column will only have one value per row, so just filter the flag column for values that are in the set of numbers you want, for example:
$"flag" === 18 or
$"flag".isin(18, 2, 20)
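For reference, here is a minimal plain-Scala sketch of what the bitwiseAND filter in the question evaluates, using a few values from the aggregated dataset (the names are only illustrative):
// myfilter = 2 | 4 | 8 = 14, i.e. a mask with only the 2, 4 and 8 bits set
val myfilter: Long = 2 | 4 | 8

// col2 & myfilter keeps only the bits set in BOTH numbers;
// a non-zero result means at least one of the 2/4/8 bits is on in col2
Seq(("Flag1", 14L), ("Flag2", 56L), ("Flag4", 32L)).foreach { case (name, col2) =>
  val masked = col2 & myfilter
  println(s"$name: $col2 & $myfilter = $masked -> keep = ${masked != 0}")
}
// Flag1: 14 & 14 = 14 -> keep = true   (14 = 8|4|2)
// Flag2: 56 & 14 = 8  -> keep = true   (56 = 32|16|8, the 8 bit matches)
// Flag4: 32 & 14 = 0  -> keep = false  (32 has none of the 8/4/2 bits)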

Set column value depending on previous ones with Spark without repeating grouping attribute

Given the DataFrame:
+------------+---------+
|variableName|dataValue|
+------------+---------+
| IDKey| I1|
| b| y|
| a| x|
| IDKey| I2|
| a| z|
| b| w|
| c| q|
+------------+---------+
I want to create a new column with the corresponding IDKey values, where each value changes whenever the dataValue for IDKey changes. Here's the expected output:
+------------+---------+----------+
|variableName|dataValue|idkeyValue|
+------------+---------+----------+
| IDKey| I1| I1|
| b| y| I1|
| a| x| I1|
| IDKey| I2| I2|
| a| z| I2|
| b| w| I2|
| c| q| I2|
+------------+---------+----------+
I tried the following code, which uses mapPartitions() and a global variable:
var currentVarValue = ""
frame
  .mapPartitions { partition =>
    partition.map { row =>
      val (varName, dataValue) = (row.getString(0), row.getString(1))
      val idKeyValue = if (currentVarValue != dataValue && varName == "IDKey") {
        currentVarValue = dataValue
        dataValue
      } else {
        currentVarValue
      }
      ExtendedData(varName, dataValue, currentVarValue)
    }
  }
But this won't work for two fundamental reasons: Spark doesn't handle global variables, and this doesn't comply with a functional programming style.
I will gladly appreciate any help on this. Thanks!
You cannot solve this elegantly and with good performance in a Spark way, as there is not enough initial information provided for Spark to guarantee that all the data to be processed is in the same partition. And if we do all the processing in a single partition, then that is not the true intent of Spark.
In fact, a sensible partitionBy cannot be issued (for a Window function). The issue here is that the data is one long sequential list, so computing a row's value may require looking across partitions to check whether data in the previous partition relates to the current partition. That could be done, but it's quite a job. zero323 has an answer somewhere here that tries to solve this, but if I remember correctly it is cumbersome.
The logic to do it is easy enough, but using Spark for this is problematic.
Without a partitionBy, all the data gets shuffled to a single partition, which can result in OOM and space problems.
Sorry.
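That said, if the single-partition caveat above is acceptable for your data size, a minimal sketch of a window-based approach could look like this (frame, variableName and dataValue come from the question; the explicit ordering column is an assumption, since a window needs a defined order):
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, last, monotonically_increasing_id, when}

// Sketch only: with no partition key this window runs on a single partition,
// which is exactly the limitation described above.
val ordered = frame.withColumn("ord", monotonically_increasing_id())
val w = Window.orderBy("ord").rowsBetween(Window.unboundedPreceding, Window.currentRow)

val result = ordered
  .withColumn("idkeyValue",
    // carry forward the most recent dataValue seen on an "IDKey" row
    last(when(col("variableName") === "IDKey", col("dataValue")), ignoreNulls = true).over(w))
  .drop("ord")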

Spark - group and aggregate only several smallest items

In short
I have a Cartesian product (cross join) of two dataframes and a function which gives a score for each element of this product. I now want to get a few "best matched" elements of the second DF for every member of the first DF.
In detail
What follows is a simplified example as my real code is somewhat bloated with additional fields and filters.
Given two sets of data, each having some id and value:
// simple rdds of tuples
val rdd1 = sc.parallelize(Seq(("a", 31),("b", 41),("c", 59),("d", 26),("e",53),("f",58)))
val rdd2 = sc.parallelize(Seq(("z", 16),("y", 18),("x",3),("w",39),("v",98), ("u", 88)))
// convert them to dataframes:
val df1 = spark.createDataFrame(rdd1).toDF("id1", "val1")
val df2 = spark.createDataFrame(rdd2).toDF("id2", "val2")
and some function which, for a pair of elements from the first and second datasets, gives their "matching score":
def f(a:Int, b:Int):Int = (a * a + b * b * b) % 17
// convert it to udf
val fu = udf((a:Int, b:Int) => f(a, b))
we can create the product of two sets and calculate score for every pair:
val dfc = df1.crossJoin(df2)
val r = dfc.withColumn("rez", fu(col("val1"), col("val2")))
r.show
+---+----+---+----+---+
|id1|val1|id2|val2|rez|
+---+----+---+----+---+
| a| 31| z| 16| 8|
| a| 31| y| 18| 10|
| a| 31| x| 3| 2|
| a| 31| w| 39| 15|
| a| 31| v| 98| 13|
| a| 31| u| 88| 2|
| b| 41| z| 16| 14|
| c| 59| z| 16| 12|
...
And now we want to have this result grouped by id1:
r.groupBy("id1").agg(collect_set(struct("id2", "rez")).as("matches")).show
+---+--------------------+
|id1| matches|
+---+--------------------+
| f|[[v,2], [u,8], [y...|
| e|[[y,5], [z,3], [x...|
| d|[[w,2], [x,6], [v...|
| c|[[w,2], [x,6], [v...|
| b|[[v,2], [u,8], [y...|
| a|[[x,2], [y,10], [...|
+---+--------------------+
But really we want to retain only a few (say 3) of the "matches", those having the best score (say, the least score).
The question is
How do I get the "matches" sorted and reduced to the top-N elements? Probably it is something about collect_list and sort_array, though I don't know how to sort by an inner field.
Is there a way to ensure optimization in the case of large input DFs, e.g. choosing the minimums directly while aggregating? I know it could be done easily if I wrote the code without Spark: keeping a small array or priority queue for every id1 and adding each element where it should go, possibly dropping some previously added ones.
E.g. it's OK that the cross join is a costly operation, but I want to avoid wasting memory on results most of which I'm going to drop in the next step. My real use case deals with DFs of less than 1 million entries, so the cross join is still viable, but as we want to select only the 10-20 top matches for each id1, it seems quite desirable not to keep unnecessary data between steps.
To start, we need to take only the first n rows per group. To do this we partition the DF by 'id1' and sort each group by 'rez'. We use this window to add a row-number column to the DF, so that we can use the where function to take the first n rows. Then you can continue with the same code you wrote, grouping by 'id1' and collecting the list. Only now you already have just the best-scoring (lowest 'rez') rows.
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val n = 3
val w = Window.partitionBy($"id1").orderBy($"rez")

val res = r
  .withColumn("rn", row_number.over(w))
  .where($"rn" <= n)
  .groupBy("id1")
  .agg(collect_set(struct("id2", "rez")).as("matches"))
A second option that might be better because you won't need to group the DF twice:
import org.apache.spark.sql.Row

val sortTakeUDF = udf { (xs: Seq[Row], n: Int) =>
  xs.sortBy(_.getAs[Int]("rez")).take(n).map { case Row(id: String, rez: Int) => (id, rez) }
}

r.groupBy("id1").agg(sortTakeUDF(collect_set(struct("id2", "rez")), lit(n)).as("matches"))
Here we create a UDF that takes the array column and an integer value n. The UDF sorts the array by your 'rez' and returns only the first n (lowest-scoring) elements.
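Since the question mentions collect_list and sort_array: on Spark 2.4+ you could also sort and truncate the collected array with built-in functions instead of a UDF. A sketch, assuming it is acceptable to put 'rez' first in the struct (sort_array compares structs field by field, so the score must come first):
import org.apache.spark.sql.functions.{collect_list, expr, struct}

val topN = r
  .groupBy("id1")
  .agg(collect_list(struct($"rez", $"id2")).as("matches"))
  // sort ascending by 'rez' (the first struct field) and keep the n best matches
  .withColumn("matches", expr(s"slice(sort_array(matches), 1, $n)"))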

How to convert numerical values to a categorical variable using pyspark

I have a PySpark dataframe with a range of numerical variables.
For example, my dataframe has a column with values from 1 to 100.
1-10 - group1 <== column values from 1 to 10 should contain group1 as the value
11-20 - group2
.
.
.
91-100 - group10
How can I achieve this using a PySpark dataframe?
# Creating an arbitrary DataFrame
df = spark.createDataFrame([(1,54),(2,7),(3,72),(4,99)], ['ID','Var'])
df.show()
+---+---+
| ID|Var|
+---+---+
| 1| 54|
| 2| 7|
| 3| 72|
| 4| 99|
+---+---+
Once the DataFrame has been created, we use the floor() function to find the integral part of a number. For example, floor(15.5) will be 15. We need to find the integral part of (Var - 1)/10 and add 1 to it, because the indexing starts from 1, as opposed to 0 (subtracting 1 first keeps boundary values such as 10 or 100 in the right group). Finally, we need to prepend group to the value. Concatenation can be achieved with the concat() function, but keep in mind that the prepended word group is not a column, so we need to put it inside lit(), which creates a column of a literal value.
# Requisite packages needed
from pyspark.sql.functions import col, floor, lit, concat
df = df.withColumn('Var', concat(lit('group'), (1 + floor((col('Var') - 1) / 10))))
df.show()
+---+-------+
| ID| Var|
+---+-------+
| 1| group6|
| 2| group1|
| 3| group8|
| 4|group10|
+---+-------+

How to group by a column on a dataframe and apply a single value to a column for all grouped rows?

I have a dataframe (Scala) and I want to do something like the following on it:
I want to group by column 'a', select any one of the values of column 'b' within each group, and apply it to all rows of that group. I.e. for a=1, b should be either x, y or h on all 3 rows, and the rest of the columns should be unaffected.
Any help on this?
You can try this, i.e., create another data frame that contains the a and b columns, where b has one value per a, and then join it back with the original data frame:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

// create the window object so that we can create a column that gives a unique row number
// for each unique a
val w = Window.partitionBy($"a").orderBy($"b")

// create the row number column for each unique a and choose the first row of each group,
// which gives a reduced data frame with one row per group ...
(df.withColumn("rn", row_number.over(w)).where($"rn" === 1).select("a", "b")
  // ... then join the reduced data frame back with the original data frame's (a, c) columns,
  // so the b column has just one value per group
  .join(df.select("a", "c"), Seq("a"), "inner").show)
+---+---+---+
|  a|  b|  c|
+---+---+---+
|  1|  h|  g|
|  1|  h|  y|
|  1|  h|  x|
|  2|  c|  d|
|  2|  c|  x|
+---+---+---+
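As a hedged alternative sketch that avoids the join, assuming any single value per group is acceptable (here the smallest b is chosen), you could use a window aggregate instead:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.min

// broadcast one value of b (the minimum per group) to every row of its group; c is untouched
val w2 = Window.partitionBy($"a")
df.withColumn("b", min($"b").over(w2)).show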