Remove spaces from all columns using Spark Scala

I have a DataFrame with some columns:
+------+-------------+------+---------------+--------------+
|CustId|         Name|Salary|          State|       Country|
+------+-------------+------+---------------+--------------+
|     1|   Brad Eason|   100|New South Wales|     Australia|
|     2|Tracy Hopkins|   200|        England|United Kingdom|
|     3|   Todd Boyes|   300|        England|United Kingdom|
|     4|     Roy Phan|   400|      Minnesota| United States|
|     5|  Harold Ryan|   500|     Washington| United States|
+------+-------------+------+---------------+--------------+
To replace all the spaces in the string columns with _, I have made the following changes:
1. Get all the string-type columns, to avoid an exception when performing a string operation on other types.
2. For each string-type column, replace the spaces with _.
import org.apache.spark.sql.functions.{col, regexp_replace}
import org.apache.spark.sql.types.StringType

// keep only the string-typed columns, then clean one column per resulting DataFrame
val trimColumns = customers.schema.fields.filter(_.dataType.isInstanceOf[StringType])
val arrayOfDf = trimColumns.map { f =>
  customers.withColumn(f.name, regexp_replace(col(f.name), " ", "_"))
}
The above code results in an array of DataFrames, where each element has the spaces replaced in just one of its string columns.
scala> arrayOfDf(1).select("Name").show(4)
+-------------+
|         Name|
+-------------+
|   Brad_Eason|
|Tracy_Hopkins|
|   Todd_Boyes|
|     Roy_Phan|
+-------------+
Now I would need to pick the first cleaned column from the first element, the second cleaned column from the second element of the array, and so on...
Is there a better way than this approach?

Instead of the arrayOfDf logic, use foldLeft as below.
// start from the original DataFrame and apply the replacement column by column
val outputDf = trimColumns.foldLeft(customers) { (agg, tf) =>
  agg.withColumn(tf.name, regexp_replace(col(tf.name), " ", "_"))
}
Output will be:
+------+-------------+------+---------------+--------------+
|CustId|         Name|Salary|          State|       Country|
+------+-------------+------+---------------+--------------+
|     1|   Brad_Eason|   100|New_South_Wales|     Australia|
|     2|Tracy_Hopkins|   200|        England|United_Kingdom|
|     3|   Todd_Boyes|   300|        England|United_Kingdom|
|     4|     Roy_Phan|   400|      Minnesota| United_States|
|     5|  Harold_Ryan|   500|     Washington| United_States|
+------+-------------+------+---------------+--------------+
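If you want to avoid the fold as well, the same result can be expressed as a single select that maps every column exactly once. This is only a sketch along the same lines (the names cleanedCols and outputDf2 are just illustrative):
import org.apache.spark.sql.functions.{col, regexp_replace}
import org.apache.spark.sql.types.StringType

// replace spaces only in the string columns, pass all other columns through unchanged
val cleanedCols = customers.schema.fields.map { f =>
  if (f.dataType.isInstanceOf[StringType]) regexp_replace(col(f.name), " ", "_").as(f.name)
  else col(f.name)
}
val outputDf2 = customers.select(cleanedCols: _*)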

Related

Spark: adding indexes to a dataframe and appending another dataset that doesn't have an index

I have a dataset that has userid and index columns.
+---------+--------+
|   userid|   index|
+---------+--------+
|    user1|       1|
|    user2|       2|
|    user3|       3|
|    user4|       4|
|    user5|       5|
|    user6|       6|
|    user7|       7|
|    user8|       8|
|    user9|       9|
|   user10|      10|
+---------+--------+
I want to append a new dataframe to it and add indexes to the newly added rows.
The userid values are unique, and the existing dataframe will not contain the userids of dataframe 2.
+----------+
|    userid|
+----------+
|    user11|
|    user21|
|    user41|
|    user51|
|    user64|
+----------+
The expected output, with the newly added userids and indexes:
+---------+--------+
|   userid|   index|
+---------+--------+
|    user1|       1|
|    user2|       2|
|    user3|       3|
|    user4|       4|
|    user5|       5|
|    user6|       6|
|    user7|       7|
|    user8|       8|
|    user9|       9|
|   user10|      10|
|   user11|      11|
|   user21|      12|
|   user41|      13|
|   user51|      14|
|   user64|      15|
+---------+--------+
Is it possible to achieve this by passing the max index value, so that the index for the second dataframe starts from that value?
If the userid has some ordering, then you can use the row_number function. Even if it does not, you can add an id using monotonically_increasing_id(). For now I assume that userid can be ordered. Then you can do this:
from pyspark.sql import functions as F
from pyspark.sql.window import Window

# union the userids from both dataframes, then number them in userid order
df_merge = df1.select('userid').union(df2.select('userid'))
w = Window.orderBy('userid')
df_result = df_merge.withColumn('indexid', F.row_number().over(w))
EDIT: after discussion in the comments.
#%% Test data and imports
import pyspark.sql.functions as F
from pyspark.sql import Window

df = sqlContext.createDataFrame([('a',100),('ab',50),('ba',300),('ced',60),('d',500)], schema=['userid','index'])
df1 = sqlContext.createDataFrame([('fgh',100),('ff',50),('fe',300),('er',60),('fi',500)], schema=['userid','dummy'])

#%% Merge the two dataframes, with a null column as the index for the new rows
df1 = df1.withColumn('index', F.lit(None))
df_merge = df.select(df.columns).union(df1.select(df.columns))

#%% Define a window that keeps the newly added rows last and orders them by userid.
#   The userids, even though random strings, can be ordered.
#   If possible add a partition column here; otherwise all the data ends up in one partition, so consider salting.
w = Window.orderBy(F.col('index').asc_nulls_last(), F.col('userid'))

#%% For the newly added rows, define the index as the last non-null index plus a running count of rows.
#   If the number of rows in the main dataframe is huge, add an offset in this expression.
df_final = df_merge.withColumn(
    "index_new",
    F.when(~F.col('index').isNull(), F.col('index'))
     .otherwise(F.last(F.col('index'), ignorenulls=True).over(w) + F.sum(F.lit(1)).over(w))
)
df_final.show()
+------+-----+---------+
|userid|index|index_new|
+------+-----+---------+
|    ab|   50|       50|
|   ced|   60|       60|
|     a|  100|      100|
|    ba|  300|      300|
|     d|  500|      500|
|    er| null|      506|
|    fe| null|      507|
|    ff| null|      508|
|   fgh| null|      509|
|    fi| null|      510|
+------+-----+---------+
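For the Scala setting used elsewhere on this page, the simple row_number variant of this answer would look roughly like the sketch below. It is only a sketch, assuming the combined userid column can be ordered, and the unpartitioned window carries the same single-partition caveat as above:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

// union the userids from both dataframes, then number them in userid order
val w = Window.orderBy("userid")
val dfResult = df1.select("userid")
  .union(df2.select("userid"))
  .withColumn("index", row_number().over(w))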

Compare two dataframes and update the values

I have two dataframes like the following.
val file1 = spark.read.format("csv").option("sep", ",").option("inferSchema", "true").option("header", "true").load("file1.csv")
file1.show()
+---+-------+-----+-----+-------+
| id|   name|mark1|mark2|version|
+---+-------+-----+-----+-------+
|  1| Priya |   80|   99|      0|
|  2|  Teju |   10|    5|      0|
+---+-------+-----+-----+-------+
val file2 = spark.read.format("csv").option("sep", ",").option("inferSchema", "true").option("header", "true").load("file2.csv")
file2.show()
+---+-------+-----+-----+-------+
| id|   name|mark1|mark2|version|
+---+-------+-----+-----+-------+
|  1| Priya |   80|   99|      0|
|  2|  Teju |   70|    5|      0|
+---+-------+-----+-----+-------+
Now I am comparing the two dataframes and filtering out the mismatched values like this.
val columns = file1.schema.fields.map(_.name)
val selectiveDifferences = columns.map(col => file1.select(col).except(file2.select(col)))
selectiveDifferences.map(diff => {if(diff.count > 0) diff.show})
+-----+
|mark1|
+-----+
|   10|
+-----+
I need to add an extra row to the dataframe for the mismatched value from dataframe 2 and update the version number, like this.
file1.show()
+---+-------+-----+-----+-------+
| id|   name|mark1|mark2|version|
+---+-------+-----+-----+-------+
|  1| Priya |   80|   99|      0|
|  2|  Teju |   10|    5|      0|
|  3|  Teju |   70|    5|      1|
+---+-------+-----+-----+-------+
I am struggling to achieve the above step, which is my expected output. Any help would be appreciated.
You can get your final dataframe by using except and union as follows.
import org.apache.spark.sql.expressions._
import org.apache.spark.sql.functions._

val count = file1.count()

file1.union(
  file2.except(file1)
    .withColumn("version", lit(1))                                        // changing the version
    .withColumn("id", row_number.over(Window.orderBy("id")) + lit(count)) // changing the id number
)
The lit, row_number and Window functions are used to generate the new ids and versions.
Note: using a window function without a partition to generate the new id makes the process inefficient, as all the data is collected into one executor to generate the new ids.
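If strictly consecutive ids are not required, one way to avoid that single-partition window is monotonically_increasing_id. The following is only a sketch of that alternative (the names maxId, changedRows and result are illustrative); the generated ids are unique but not necessarily consecutive:
import org.apache.spark.sql.functions.{lit, max, monotonically_increasing_id}

// offset the generated ids past the largest existing id
val maxId = file1.agg(max("id")).first().getInt(0)

val changedRows = file2.except(file1)
  .withColumn("version", lit(1))
  .withColumn("id", monotonically_increasing_id() + lit(maxId + 1))

val result = file1.union(changedRows)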

Group rows that match a substring in a column using Scala

I have the following df, fol:
Zip | Name | id |
abc | xyz | 1 |
def | wxz | 2 |
abc | wex | 3 |
bcl | rea | 4 |
abc | txc | 5 |
def | rfx | 6 |
abc | abc | 7 |
I need to group all the names that contain 'x' by Zip and count them, using Scala.
Desired Output:
Zip | Count |
abc | 3 |
def | 2 |
Any help is highly appreciated
As @Shaido mentioned in the comment above, all you need is filter, groupBy and an aggregation, as
import org.apache.spark.sql.functions._

fol.filter(col("Name").contains("x"))  // filtering the rows that have an x in the Name column
  .groupBy("Zip")                      // grouping by the Zip column
  .agg(count("Zip").as("Count"))       // counting the rows in each group
  .show(false)
and you should have the desired output
+---+-----+
|Zip|Count|
+---+-----+
|abc|3    |
|def|2    |
+---+-----+
You want to group the below data frame.
+---+----+---+
|zip|name| id|
+---+----+---+
|abc| xyz|  1|
|def| wxz|  2|
|abc| wex|  3|
|bcl| rea|  4|
|abc| txc|  5|
|def| rfx|  6|
|abc| abc|  7|
+---+----+---+
Then you can simply use the groupBy function, passing the column name, followed by count, which will give you the result.
val groupedDf: DataFrame = df.groupBy("zip").count()
groupedDf.show()
// +---+-----+
// |zip|count|
// +---+-----+
// |bcl|    1|
// |abc|    4|
// |def|    2|
// +---+-----+
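Note that this second snippet counts every row per zip (which is why bcl shows up with a count of 1); to match the output asked for in the question, the contains filter from the first answer would be applied before grouping, roughly like this sketch (groupedFilteredDf is an illustrative name):
import org.apache.spark.sql.functions.col

// filter to names containing "x" first, then count per zip
val groupedFilteredDf = df.filter(col("name").contains("x"))
  .groupBy("zip")
  .count()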

Merging and aggregating dataframes using Spark Scala

I have a dataset; after a transformation using Spark Scala (1.6.2), I got the following two dataframes.
DF1:
|date | country | count|
| 1872| Scotland|     1|
| 1873| England |     1|
| 1873| Scotland|     1|
| 1875| England |     1|
| 1875| Scotland|     2|
DF2:
| date| country | count|
| 1872| England |     1|
| 1873| Scotland|     1|
| 1874| England |     1|
| 1875| Scotland|     1|
| 1875| Wales   |     1|
Now, from the above two dataframes, I want to aggregate the counts by date per country, like the following output. I tried using union and joining, but was not able to get the desired results.
Expected output from the two dataframes above:
| date| country | count|
| 1872| England |     1|
| 1872| Scotland|     1|
| 1873| Scotland|     2|
| 1873| England |     1|
| 1874| England |     1|
| 1875| Scotland|     3|
| 1875| Wales   |     1|
| 1875| England |     1|
Kindly help me find a solution.
The best way is to perform a union and then a groupBy on the two columns; with sum you can specify which column to add up:
df1.unionAll(df2)
  .groupBy("date", "country")
  .sum("count")
Output:
+----+--------+----------+
|date| country|sum(count)|
+----+--------+----------+
|1872|Scotland|         1|
|1875| England|         1|
|1873| England|         1|
|1875|   Wales|         1|
|1872| England|         1|
|1874| England|         1|
|1873|Scotland|         2|
|1875|Scotland|         3|
+----+--------+----------+
Using the DataFrame API, you can use a unionAll followed by a groupBy to achieve this.
DF1.unionAll(DF2)
  .groupBy("date", "country")
  .agg(sum($"count").as("count"))
This will first put all rows from the two dataframes into a single dataframe. Then, by grouping on the date and country columns, it's possible to get the aggregate sum of the count column by date per country, as asked. The as("count") part renames the aggregated column to count.
Note: In newer Spark versions (read version 2.0+), unionAll is deprecated and is replaced by union.
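For reference, here is a minimal self-contained sketch that rebuilds the two dataframes from the question and runs the union-and-aggregate step; it assumes a Spark 2.x session (spark, with spark.implicits._ in scope), so union is used instead of the deprecated unionAll:
import org.apache.spark.sql.functions.sum
import spark.implicits._

// recreate the two small dataframes shown in the question
val df1 = Seq((1872, "Scotland", 1), (1873, "England", 1), (1873, "Scotland", 1),
  (1875, "England", 1), (1875, "Scotland", 2)).toDF("date", "country", "count")
val df2 = Seq((1872, "England", 1), (1873, "Scotland", 1), (1874, "England", 1),
  (1875, "Scotland", 1), (1875, "Wales", 1)).toDF("date", "country", "count")

// union the rows, then sum the counts per (date, country)
val aggregated = df1.union(df2)
  .groupBy("date", "country")
  .agg(sum($"count").as("count"))

aggregated.show()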

How to append column values in Spark SQL?

I have the below table:
+-------+---------+---------+
|movieId|movieName|    genre|
+-------+---------+---------+
|      1| example1|   action|
|      1| example1| thriller|
|      1| example1|  romance|
|      2| example2|fantastic|
|      2| example2|   action|
+-------+---------+---------+
What I am trying to achieve is to append the genre values together where the id and name are the same, like this:
+-------+---------+---------------------------+
|movieId|movieName| genre                     |
+-------+---------+---------------------------+
|      1| example1| action|thriller|romance   |
|      2| example2| action|fantastic          |
+-------+---------+---------------------------+
Use groupBy and collect_list to get a list of all items with the same movie name, then combine these into a single string using concat_ws (if the order is important, first use sort_array). A small example with the given sample dataframe:
import org.apache.spark.sql.functions.{collect_list, concat_ws, sort_array}

val df2 = df.groupBy("movieId", "movieName")
  .agg(collect_list($"genre").as("genre"))
  .withColumn("genre", concat_ws("|", sort_array($"genre")))
Gives the result:
+-------+---------+-----------------------+
|movieId|movieName|genre                  |
+-------+---------+-----------------------+
|1      |example1 |action|thriller|romance|
|2      |example2 |action|fantastic       |
+-------+---------+-----------------------+
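Since the title mentions Spark SQL, the same aggregation can also be written as a SQL query over a temporary view. This is only a sketch, assuming a Spark 2.x session named spark and registering the input dataframe under the illustrative view name movies:
// register the input dataframe as a temporary view (the view name "movies" is assumed here)
df.createOrReplaceTempView("movies")

val df3 = spark.sql("""
  SELECT movieId,
         movieName,
         concat_ws('|', sort_array(collect_list(genre))) AS genre
  FROM movies
  GROUP BY movieId, movieName
""")
df3.show(false)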