Spark merge rows based on some condition and retain the values - scala

I have a dataframe like the one below:
+-------+----------+------+------+------+
| cond  | val      | val1 | val2 | val3 |
+-------+----------+------+------+------+
| cond1 | 1        | null | null | null |
| cond1 | null     | 2    | null | null |
| cond1 | null     | null | 3    | null |
| cond1 | null     | null | null | 4    |
| cond2 | null     | null | null | 44   |
| cond2 | null     | 22   | null | null |
| cond2 | null     | null | 33   | null |
| cond2 | 11       | null | null | null |
| cond3 | null     | null | null | 444  |
| cond3 | 111      | 222  | null | null |
| cond3 | 1111     | null | null | null |
| cond3 | null     | null | 333  | null |
+-------+----------+------+------+------+
I want to merge the rows within each value of the cond column, keeping all the non-null values, so that the result looks like below:
+-------+----------+------+------+------+
| cond  | val      | val1 | val2 | val3 |
+-------+----------+------+------+------+
| cond1 | 1        | 2    | 3    | 4    |
| cond2 | 11       | 22   | 33   | 44   |
| cond3 | 111,1111 | 222  | 333  | 444  |
+-------+----------+------+------+------+

Try using .groupBy() and .agg(), with all of the aggregations inside a single .agg() call (chaining .agg() repeatedly re-aggregates the whole frame rather than the groups), e.g.

import org.apache.spark.sql.functions._

val output = input.groupBy("cond")
  .agg(
    concat_ws(",", collect_list(col("val").cast("string"))).name("val"),
    concat_ws(",", collect_list(col("val1").cast("string"))).name("val1"),
    concat_ws(",", collect_list(col("val2").cast("string"))).name("val2"),
    concat_ws(",", collect_list(col("val3").cast("string"))).name("val3")
  )

Related

How to add a validation on delta table column dynamically?

I'm working on a transformation and am stuck on a common problem. Any help is much appreciated.
Scenario:
Step-1: Reading from a delta table.
+--------+-----------------+
| emp_id | str             |
+--------+-----------------+
| 1      | name=qwerty     |
| 2      | age=22          |
| 3      | job=googling    |
| 4      | dob=12-Jan-2001 |
| 5      | weight=62.7     |
+--------+-----------------+
Step-2: I'm refining the data and outputting it into another delta table dynamically (No predefined schema). Let's say I'm adding null if the column name is not found.
+--------+--------+------+----------+-------------+--------+
| emp_id | name | age | job | dob | weight |
+--------+--------+------+----------+-------------+--------+
| 1 | qwerty | null | null | null | null |
| 2 | null | 22 | null | null | null |
| 3 | null | null | googling | null | null |
| 4 | null | null | null | 12-Jan-2001 | null |
| 5 | null | null | null | null | 62.7 |
+--------+--------+------+----------+-------------+--------+
Is there a way to apply validation in step-2 based on the column name? I'm splitting it by = while deriving the above table. Or do I have to do validation in step-3 while working on the new df?
Second question: Is there a way to achieve the following table?
+--------+--------+------+----------+-------------+--------+---------------------+
| emp_id | name | age | job | dob | weight | missing_attributes |
+--------+--------+------+----------+-------------+--------+---------------------+
| 1 | qwerty | null | null | null | null | age,job,dob,weight |
| 2 | null | 22 | null | null | null | name,job,dob,weight |
| 3 | null | null | googling | null | null | name,age,dob,weight |
| 4 | null | null | null | 12-Jan-2001 | null | name,age,job,weight |
| 5 | null | null | null | null | 62.7 | name,age,job,dob |
+--------+--------+------+----------+-------------+--------+---------------------+
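For the second question, here is a minimal sketch in Spark Scala, assuming the refined DataFrame from step-2 is called refined (a hypothetical name): build one when(...) expression per attribute column that yields the column's name when its value is null, and let concat_ws assemble the list, since concat_ws skips nulls.

import org.apache.spark.sql.functions._

// Attribute columns derived in step-2; adjust to whatever is actually present.
val attrCols = Seq("name", "age", "job", "dob", "weight")

// For each column: its name if the value is null, otherwise null.
// concat_ws silently skips nulls, leaving only the missing attribute names.
val withMissing = refined.withColumn(
  "missing_attributes",
  concat_ws(",", attrCols.map(c => when(col(c).isNull, lit(c))): _*)
)

For the validation part, a similar pattern can work in step-2 itself: keep a Map from column name to a validating expression (a cast, a regexp_extract, and so on) and look it up while deriving each column from the = split, so the check happens in the same pass.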

How to replace null values in a dataframe based on values in other dataframe?

Here's a dataframe, df1, that I have
+---------+-------+---------+
| C1 | C2 | C3 |
+---------+-------+---------+
| xr | 1 | ixfg |
| we | 5 | jsfd |
| r5 | 7 | hfga |
| by | 8 | srjs |
| v4 | 4 | qwks |
| c0 | 0 | khfd |
| ba | 2 | gdbu |
| mi | 1 | pdlo |
| lp | 7 | ztpq |
+---------+-------+---------+
Here's another, df2, that I have
+----------+-------+---------+
| V1 | V2 | V3 |
+----------+-------+---------+
| Null | 6 | ixfg |
| Null | 2 | jsfd |
| Null | 2 | hfga |
| Null | 7 | qwks |
| Null | 1 | khfd |
| Null | 9 | gdbu |
+----------+-------+---------+
What I would like to have is another dataframe that:
1. ignores values in V2 and takes values in C2 wherever V3 and C3 match, and
2. replaces V1 with values in C1 wherever V3 and C3 match.
The result should look like the following:
+----------+-------+---------+
| M1 | M2 | M3 |
+----------+-------+---------+
| xr | 1 | ixfg |
| we | 5 | jsfd |
| r5 | 7 | hfga |
| v4 | 4 | qwks |
| c0 | 0 | khfd |
| ba | 2 | gdbu |
+----------+-------+---------+
You can join and use coalesce to take the value with the higher priority.
Note: coalesce takes any number of columns (from highest priority to lowest in argument order) and returns the first non-null value, so if you actually want to end up with null whenever the lower-priority column is null, you cannot use this function.
from pyspark.sql import functions as F

df = (df1.join(df2, on=(df1.C3 == df2.V3))
      .select(F.coalesce(df1.C1, df2.V1).alias('M1'),
              F.coalesce(df1.C2, df2.V2).alias('M2'),
              F.col('C3').alias('M3')))
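With the default inner join, only the six rows where C3 and V3 match survive, which is exactly the expected result. Coalescing C2 ahead of V2 effectively ignores V2 here, since C2 contains no nulls.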

Flagging records after meeting a condition using Spark Scala

I need some expert opinion on the below scenario:
I have following dataframe df1:
+------------+------------+-------+-------+
| Date1 | OrderDate | Value | group |
+------------+------------+-------+-------+
| 10/10/2020 | 10/01/2020 | hostA | grp1 |
| 10/01/2020 | 09/30/2020 | hostB | grp1 |
| Null | 09/15/2020 | hostC | grp1 |
| 08/01/2020 | 08/30/2020 | hostD | grp1 |
| Null | 10/01/2020 | hostP | grp2 |
| Null | 09/28/2020 | hostQ | grp2 |
| 07/11/2020 | 08/08/2020 | hostR | grp2 |
| 07/01/2020 | 08/01/2020 | hostS | grp2 |
| NULL | 07/01/2020 | hostL | grp2 |
| NULL | 08/08/2020 | hostM | grp3 |
| NULL | 08/01/2020 | hostN | grp3 |
| NULL | 07/01/2020 | hostO | grp3 |
+------------+------------+-------+-------+
Each group is ordered by OrderDate in descending order. Walking down each group in that order, a row is flagged Valid as long as Current_date < (Date1 + 31 days) or Date1 is NULL; from the first row where Current_date > (Date1 + 31 days) onward, every row is marked Invalid irrespective of its Date1 value.
If all the records in a group have NULL Date1, every Value in that group should be tagged Valid.
My output df should look like below:
+------------+------------+-------+-------+---------+
| Date1 | OrderDate | Value | group | Flag |
+------------+------------+-------+-------+---------+
| 10/10/2020 | 10/01/2020 | hostA | grp1 | Valid |
| 10/01/2020 | 09/30/2020 | hostB | grp1 | Valid |
| Null | 09/15/2020 | hostC | grp1 | Valid |
| 08/01/2020 | 08/30/2020 | hostD | grp1 | Invalid |
| Null | 10/01/2020 | hostP | grp2 | Valid |
| Null | 09/28/2020 | hostQ | grp2 | Valid |
| 07/11/2020 | 08/08/2020 | hostR | grp2 | Invalid |
| 07/01/2020 | 08/01/2020 | hostS | grp2 | Invalid |
| NULL | 07/01/2020 | hostL | grp2 | Invalid |
| NULL | 08/08/2020 | hostM | grp3 | Valid |
| NULL | 08/01/2020 | hostN | grp3 | Valid |
| NULL | 07/01/2020 | hostO | grp3 | Valid |
+------------+------------+-------+-------+---------+
My approach:
I created a row_number for each group after ordering by OrderDate.
After that, I get the min(row_number) having Current_date > (Date1 + 31 days) for each group and save it as a new dataframe, dfMin.
I then join df1 with dfMin on group and filter on row_number (row_number < min(row_number)).
This approach works for most cases, but it fails when all values of Date1 for a group are NULL.
Is there a better approach that covers this scenario as well?
Note: I am using a pretty old version of Spark, Spark 1.5, and window functions won't work in my environment (it's a custom framework and there are many restrictions in place). For row_number, I used the zipWithIndex method.
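Given the Spark 1.5 and no-window-function constraints, one option is to do the per-group ordering and flagging on a plain RDD. The following is a sketch under assumptions: rows are already parsed into a case class with Date1 as Option[java.sql.Date], and the input RDD is called rdd (a hypothetical name).

case class Rec(date1: Option[java.sql.Date], orderDate: java.sql.Date,
               value: String, group: String)

val today = new java.sql.Date(System.currentTimeMillis())

// A row is "expired" when Current_date > Date1 + 31 days; a NULL Date1 never expires.
def expired(d: Option[java.sql.Date]): Boolean =
  d.exists(dt => today.getTime > dt.getTime + 31L * 24 * 60 * 60 * 1000)

val flagged = rdd // rdd: RDD[Rec], assumed to exist
  .groupBy(_.group)
  .flatMap { case (_, recs) =>
    // Order the group by OrderDate descending.
    val sorted = recs.toSeq.sortBy(-_.orderDate.getTime)
    // Everything from the first expired row onward is Invalid; if nothing
    // expires (e.g. all Date1 are NULL), the whole group stays Valid.
    val firstBad = sorted.indexWhere(r => expired(r.date1))
    sorted.zipWithIndex.map { case (r, i) =>
      val flag = if (firstBad >= 0 && i >= firstBad) "Invalid" else "Valid"
      (r.date1, r.orderDate, r.value, r.group, flag)
    }
  }

This keeps each group in memory on one executor during the flatMap, which is fine for modestly sized groups, and it reproduces the expected output above, including grp3 where every Date1 is NULL.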

Replace null by negative id number in non-consecutive rows in Hive

I have this table in my database:
| id   | desc |
|------|------|
| 1    | A    |
| 2    | B    |
| NULL | C    |
| 3    | D    |
| NULL | D    |
| NULL | E    |
| 4    | F    |
And I want to transform it into a table that replaces the nulls with consecutive negative ids:
| id   | desc |
|------|------|
| 1    | A    |
| 2    | B    |
| -1   | C    |
| 3    | D    |
| -2   | D    |
| -3   | E    |
| 4    | F    |
Does anyone know how I can do this in Hive?
The approach below works. Three fixes relative to the naive attempt: desc is a reserved word in Hive, so it has to be backquoted; ROW_NUMBER needs an ORDER BY to number rows deterministically; and negating row_number keeps the id numeric instead of producing the string '-1'. PARTITION BY id puts all the NULL ids into a single partition, so ROW_NUMBER counts exactly those rows:

select coalesce(id, -row_number() over (partition by id order by `desc`)) as id, `desc`
from database_name.table_name;
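Hive tables have no inherent row order, so whether the NULL rows come out as C = -1, D = -2, E = -3 depends entirely on the ORDER BY; if the table has a column that preserves the original insert order, order by that instead of `desc`.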

PostgreSQL: self join with array

My question is about forming a PostgreSQL query for the use case below.
Approach#1
I have a table like the one below, where the same uuid is generated across different types (a, b, c, d) to map those types to each other.
+----+------+-------------+
| id | type | master_guid |
+----+------+-------------+
| 1 | a | uuid-1 |
| 2 | a | uuid-2 |
| 3 | a | uuid-3 |
| 4 | a | uuid-4 |
| 5 | a | uuid-5 |
| 6 | b | uuid-1 |
| 7 | b | uuid-2 |
| 8 | b | uuid-3 |
| 9 | b | uuid-6 |
| 10 | c | uuid-1 |
| 11 | c | uuid-2 |
| 12 | c | uuid-3 |
| 13 | c | uuid-6 |
| 14 | c | uuid-7 |
| 15 | d | uuid-6 |
| 16 | d | uuid-2 |
+----+------+-------------+
Approach#2
I have created two tables, one for id to type and another for id to master_guid, like below
table1:
+----+------+
| id | type |
+----+------+
| 1 | a |
| 2 | a |
| 3 | a |
| 4 | a |
| 5 | a |
| 6 | b |
| 7 | b |
| 8 | b |
| 9 | b |
| 10 | c |
| 11 | c |
| 12 | c |
| 13 | c |
| 14 | c |
| 15 | d |
| 16 | d |
+----+------+
table2
+----+-------------+
| id | master_guid |
+----+-------------+
| 1 | uuid-1 |
| 2 | uuid-2 |
| 3 | uuid-3 |
| 4 | uuid-4 |
| 5 | uuid-5 |
| 6 | uuid-1 |
| 7 | uuid-2 |
| 8 | uuid-3 |
| 9 | uuid-6 |
| 10 | uuid-1 |
| 11 | uuid-2 |
| 12 | uuid-3 |
| 13 | uuid-6 |
| 14 | uuid-7 |
| 15 | uuid-6 |
| 16 | uuid-2 |
+----+-------------+
I want to get output like below with both approaches:
+----+------+--------+------------+
| id | type | uuid | mapped_ids |
+----+------+--------+------------+
| 1 | a | uuid-1 | [6,10] |
| 2 | a | uuid-2 | [7,11] |
| 3 | a | uuid-3 | [8,12] |
| 4 | a | uuid-4 | null |
| 5 | a | uuid-5 | null |
+----+------+--------+------------+
I have tried self-joins with array_agg on the ids, grouping by uuid, but I am not able to get the desired output.
Use the queries below to populate the data:
Approach#1
insert into table1 values
(1,'a','uuid-1'),
(2,'a','uuid-2'),
(3,'a','uuid-3'),
(4,'a','uuid-4'),
(5,'a','uuid-5'),
(6,'b','uuid-1'),
(7,'b','uuid-2'),
(8,'b','uuid-3'),
(9,'b','uuid-6'),
(10,'c','uuid-1'),
(11,'c','uuid-2'),
(12,'c','uuid-3'),
(13,'c','uuid-6'),
(14,'c','uuid-7'),
(15,'d','uuid-6'),
(16,'d','uuid-2')
Approach#2
insert into table1 values
(1,'a'),
(2,'a'),
(3,'a'),
(4,'a'),
(5,'a'),
(6,'b'),
(7,'b'),
(8,'b'),
(9,'b'),
(10,'c'),
(11,'c'),
(12,'c'),
(13,'c'),
(14,'c'),
(15,'d'),
(16,'d')
insert into table2 values
(1,'uuid-1'),
(2,'uuid-2'),
(3,'uuid-3'),
(4,'uuid-4'),
(5,'uuid-5'),
(6,'uuid-1'),
(7,'uuid-2'),
(8,'uuid-3'),
(9,'uuid-6'),
(10,'uuid-1'),
(11,'uuid-2'),
(12,'uuid-3'),
(13,'uuid-6'),
(14,'uuid-7'),
(15,'uuid-6'),
(16,'uuid-2')
Using ARRAY_AGG as a window function allows you to aggregate the ids per group (in your case the groups are the uuids):
SELECT
    id, type, master_guid AS uuid,
    array_agg(id) OVER (PARTITION BY master_guid) AS mapped_ids
FROM table1
ORDER BY id
Result:
| id | type | uuid | mapped_ids |
|----|------|--------|------------|
| 1 | a | uuid-1 | 10,6,1 |
| 2 | a | uuid-2 | 16,2,7,11 |
| 3 | a | uuid-3 | 8,3,12 |
| 4 | a | uuid-4 | 4 |
| 5 | a | uuid-5 | 5 |
| 6 | b | uuid-1 | 10,6,1 |
| 7 | b | uuid-2 | 16,2,7,11 |
| 8 | b | uuid-3 | 8,3,12 |
| 9 | b | uuid-6 | 15,13,9 |
| 10 | c | uuid-1 | 10,6,1 |
| 11 | c | uuid-2 | 16,2,7,11 |
| 12 | c | uuid-3 | 8,3,12 |
| 13 | c | uuid-6 | 15,13,9 |
| 14 | c | uuid-7 | 14 |
| 15 | d | uuid-6 | 15,13,9 |
| 16 | d | uuid-2 | 16,2,7,11 |
These arrays currently also contain the id of the current row (mapped_ids of id = 1 contains the 1). This can be corrected by removing that element with array_remove:
SELECT
    id, type, master_guid AS uuid,
    array_remove(array_agg(id) OVER (PARTITION BY master_guid), id) AS mapped_ids
FROM table1
ORDER BY id
Result:
| id | type | uuid | mapped_ids |
|----|------|--------|------------|
| 1 | a | uuid-1 | 10,6 |
| 2 | a | uuid-2 | 16,7,11 |
| 3 | a | uuid-3 | 8,12 |
| 4 | a | uuid-4 | |
| 5 | a | uuid-5 | |
| 6 | b | uuid-1 | 10,1 |
| 7 | b | uuid-2 | 16,2,11 |
| 8 | b | uuid-3 | 3,12 |
| 9 | b | uuid-6 | 15,13 |
| 10 | c | uuid-1 | 6,1 |
| 11 | c | uuid-2 | 16,2,7 |
| 12 | c | uuid-3 | 8,3 |
| 13 | c | uuid-6 | 15,9 |
| 14 | c | uuid-7 | |
| 15 | d | uuid-6 | 13,9 |
| 16 | d | uuid-2 | 2,7,11 |
Now, for example, id=4 contains an empty array instead of a NULL value. That can be fixed with the NULLIF function, which returns NULL if both parameters are equal and the first parameter otherwise:
SELECT
    id, type, master_guid AS uuid,
    NULLIF(
        array_remove(array_agg(id) OVER (PARTITION BY master_guid), id),
        '{}'::int[]
    ) AS mapped_ids
FROM table1
ORDER BY id
Result:
| id | type | uuid | mapped_ids |
|----|------|--------|------------|
| 1 | a | uuid-1 | 10,6 |
| 2 | a | uuid-2 | 16,7,11 |
| 3 | a | uuid-3 | 8,12 |
| 4 | a | uuid-4 | (null) |
| 5 | a | uuid-5 | (null) |
| 6 | b | uuid-1 | 10,1 |
| 7 | b | uuid-2 | 16,2,11 |
| 8 | b | uuid-3 | 3,12 |
| 9 | b | uuid-6 | 15,13 |
| 10 | c | uuid-1 | 6,1 |
| 11 | c | uuid-2 | 16,2,7 |
| 12 | c | uuid-3 | 8,3 |
| 13 | c | uuid-6 | 15,9 |
| 14 | c | uuid-7 | (null) |
| 15 | d | uuid-6 | 13,9 |
| 16 | d | uuid-2 | 2,7,11 |
Try this:
select
    t1.id, t1.type, t1.master_guid, array_agg(distinct t2.id)
from
    table1 t1
    left join table1 t2 on
        t1.master_guid = t2.master_guid and
        t1.id != t2.id
group by
    t1.id, t1.type, t1.master_guid
I don't come up with exactly the same results you listed, but I thought it was close enough that maybe there was a mistaken expectation on your side or only a small error on mine... either way, a potential starting point.
-- EDIT --
For approach #2, I think you just need to add an inner join to Table2 to get the GUID:
select
    t1.id, t1.type, t2.master_guid,
    array_agg(t2a.id)
from
    table1 t1
    join table2 t2 on t1.id = t2.id
    left join table2 t2a on
        t2.master_guid = t2a.master_guid and
        t2a.id != t1.id
where
    t1.type = 'a'
group by
    t1.id, t1.type, t2.master_guid
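The where t1.type = 'a' filter restricts the output to the five type-a rows shown in the desired result; drop it to get the mapping for every type.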