How to use Scala format and substitution interpolation together? - scala

I am new to Scala and Spark and have a requirement where I want to use both formatting and substitution in a single println statement.
Here is the code:
val results = minTempRdd.collect()
for (result <- results.sorted) {
  val station = result._1
  val temp = result._2
  println(f" StId $station Temp $temp%.2f F")
}
where minTempRdd is an RDD of (stationId, temperature) pairs.
Now I want to convert this code into a one-liner. I tried the following code:
val results = minTempRdd.collect()
results.foreach(x => println(" stId "+x._1+" temp = "+x._2))
It works fine, but I am not able to format the second value of the tuple here.
Any suggestions on how to achieve this?

The first way is to use curly braces inside the interpolation, which allow passing arbitrary expressions instead of just variables:
println(f" StId ${result._1} Temp ${result._2}%.2fF")
The second way is to unpack the tuple:
for ((station, temp) <- results.sorted)
println(f" StId $station Temp $temp%.2fF")
Or:
results.sorted.foreach { case (station, temp) =>
  println(f" StId $station Temp $temp%.2fF")
}
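Putting the pieces together, here is a self-contained version of the one-liner (the station IDs and temperatures are made-up sample data standing in for minTempRdd.collect()):

```scala
// Made-up sample data standing in for minTempRdd.collect()
val results = Array(("EZE00100082", 7.7f), ("ITE00100554", 5.36f))

// Destructure each tuple with a pattern match and format with the f-interpolator
results.sorted.foreach { case (station, temp) =>
  println(f" StId $station Temp $temp%.2f F")
}
```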


Scala: Convert a string to string array with and without split given that all special characters except "(" and ")" are allowed

I have a string
val a = "((x1,x2),(y1,y2),(z1,z2))"
I want to parse this into a Scala array
val arr = Array(("x1","x2"),("y1","y2"),("z1","z2"))
Is there a way of directly doing this with an expr() equivalent ?
If not how would one do this using split
Note: x1, x2, x3, etc. are strings and can contain special characters, so the key would be to use the "(" and ")" delimiters to parse the data.
Code I munged from Dici and Bogdan Vakulenko
val x2 = a.trim.split("[\\(\\)]").grouped(2).map(x => x(0).trim).toArray
val x3 = x2.drop(1) // first grouping is always empty, don't know why
var jmap = new java.util.HashMap[String, String]()
for (i <- x3) {
  val index = i.lastIndexOf(",")
  val fv = i.slice(0, index)
  val lv = i.substring(index + 1).trim
  jmap.put(fv, lv)
}
This is still susceptible to "," in the second string.
Actually, I think regexes are the most convenient way to solve this.
val a = "((x1,x2),(y1,y2),(z1,z2))"
val regex = "(\\((\\w+),(\\w+)\\))".r
println(
  regex.findAllMatchIn(a)
    .map(matcher => (matcher.group(2), matcher.group(3)))
    .toList
)
Note that I made some assumptions about the format:
no whitespaces in the string (the regex could easily be updated to fix this if needed)
always tuples of two elements, never more
empty string not valid as a tuple element
only alphanumeric characters allowed (this also would be easy to fix)
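Under those assumptions, the regex approach can be sanity-checked end to end:

```scala
val a = "((x1,x2),(y1,y2),(z1,z2))"
val regex = "(\\((\\w+),(\\w+)\\))".r
// Each match captures one "(a,b)" group; capture groups 2 and 3 are the two elements
val pairs = regex.findAllMatchIn(a).map(m => (m.group(2), m.group(3))).toList
assert(pairs == List(("x1", "x2"), ("y1", "y2"), ("z1", "z2")))
```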
val a = "((x1,x2),(y1,y2),(z1,z2))"
a.replaceAll("[\\(\\) ]", "")
  .split(",")
  .grouped(2)
  .map(x => (x(0), x(1)))
  .toArray
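For reference, here is the difference between grouped and sliding on a flat array: grouped partitions into disjoint chunks, while sliding yields every overlapping window, which would wrongly pair the second element of one tuple with the first element of the next:

```scala
val xs = Array("x1", "x2", "y1", "y2")
// grouped(2) partitions into disjoint pairs
assert(xs.grouped(2).map(_.toList).toList == List(List("x1", "x2"), List("y1", "y2")))
// sliding(2) yields every consecutive window of size 2
assert(xs.sliding(2).map(_.toList).toList ==
  List(List("x1", "x2"), List("x2", "y1"), List("y1", "y2")))
```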

replace multiple occurrence of duplicate string in Scala with empty

I have a string as
something,'' something,nothing_something,op nothing_something,'' cat,cat
I want to achieve my output as
'' something,op nothing_something,cat
Is there any way to achieve it?
If I understand your requirement correctly, here's one approach with the following steps:
Split the input string by "," and create a list of indexed-CSVs and convert it to a Map
Generate 2-combinations of the indexed-CSVs
Check each of the indexed-CSV pairs and capture the index of any CSV which is contained within the other CSV
Since the CSVs corresponding to the captured indexes are contained within some other CSV, removing these indexes will result in remaining indexes we would like to keep
Use the remaining indexes to look up CSVs from the CSV Map and concatenate them back to a string
Here is sample code applied to a string with slightly more general comma-separated values:
val str = "cats,a cat,cat,there is a cat,my cat,cats,cat"
val csvIdxList = (Stream from 1).zip(str.split(",")).toList
val csvMap = csvIdxList.toMap
val csvPairs = csvIdxList.combinations(2).toList
val csvContainedIdx = csvPairs.collect {
  case List(x, y) if x._2.contains(y._2) => y._1
  case List(x, y) if y._2.contains(x._2) => x._1
}.distinct
// csvContainedIdx: List[Int] = List(3, 6, 7, 2)
val csvToKeepIdx = (1 to csvIdxList.size) diff csvContainedIdx
// csvToKeepIdx: scala.collection.immutable.IndexedSeq[Int] = Vector(1, 4, 5)
val strDeduped = csvToKeepIdx.map( csvMap.getOrElse(_, "") ).mkString(",")
// strDeduped: String = cats,there is a cat,my cat
Applying the above to your sample string something,'' something,nothing_something,op nothing_something would yield the expected result:
strDeduped: String = '' something,op nothing_something
First create an Array of words separated by commas using the split command on the given String, then do the remaining operations using filter and mkString as below:
s.split(",").filter(_.contains(' ')).mkString(",")
In Scala REPL:
scala> val s = "something,'' something,nothing_something,op nothing_something"
s: String = something,'' something,nothing_something,op nothing_something
scala> s.split(",").filter(_.contains(' ')).mkString(",")
res27: String = '' something,op nothing_something
As per Leo C's comment, I tested it as below with another String:
scala> val s = "something,'' something anything anything anything anything,nothing_something,op op op nothing_something"
s: String = something,'' something anything anything anything anything,nothing_something,op op op nothing_something
scala> s.split(",").filter(_.contains(' ')).mkString(",")
res43: String = '' something anything anything anything anything,op op op nothing_something

How to concat multiple columns in a data frame using Scala

I am trying to concat multiple columns in a data frame. My column list is present in a variable. I am trying to pass that variable into the concat function but am not able to do so.
Ex: base_tbl_columns contains the list of columns, and I am using the code below to select all the columns mentioned in the variable.
scala> val base_tbl_columns = scd_table_keys_df.first().getString(5).split(",")
base_tbl_columns: Array[String] = Array(acct_nbr, account_sk_id, zip_code, primary_state, eff_start_date, eff_end_date, load_tm, hash_key, eff_flag)
val hist_sk_df_ld = hist_sk_df.select(base_tbl_columns.head,base_tbl_columns.tail: _*)
Similarly, I have one more list which I want to use for concatenation. But there the concat function does not accept the .head and .tail arguments.
scala> val hash_key_cols = scd_table_keys_df.first().getString(4)
hash_key_cols: String = primary_state,zip_code
Here I am hard coding the value primary_state and zip_code.
.withColumn("hash_key_col",concat($"primary_state",$"zip_code"))
Here I am passing the variable hash_key_cols .
.withColumn("hash_key_col",concat(hash_key_cols ))
I was able to do this in Python by using the code below.
hist_sk_df = hist_tbl_df.join(broadcast(hist_tbl_lkp_df), primary_key_col, 'inner').withColumn("eff_start_date", lit(load_dt)).withColumn('hash_key_col', F.concat(*hash_key_cols)).withColumn("hash_key", hash_udf('hash_key_col')).withColumn("eff_end_date", lit(eff_close_dt)).withColumn("load_tm", lit(load_tm)).withColumn("eff_flag", lit(eff_flag_curr))
Either:
val base_tbl_columns: Array[String] = ???
df.select(concat(base_tbl_columns.map(c => col(c)): _*))
or:
df.select(expr(s"""concat(${base_tbl_columns.mkString(",")})"""))
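The key in the first variant is expanding a collection into a varargs parameter with `: _*`. A Spark-free sketch of that mechanism (concatAll is a hypothetical stand-in for Spark's concat, not a real API):

```scala
// Hypothetical varargs function standing in for Spark's concat
def concatAll(parts: String*): String = parts.mkString

val cols = Array("primary_state", "zip_code")
// `: _*` expands the Array into individual arguments
assert(concatAll(cols: _*) == "primary_statezip_code")
```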

How do I remove empty dataframes from a sequence of dataframes in Scala

How do I remove empty data frames from a sequence of data frames? In the code snippet below, there are many empty data frames in twoColDF. Another question about the for loop below: is there a way to make this efficient? I tried rewriting it as the line below, but it didn't work.
//finalDF2 = (1 until colCount).flatMap(j => groupCount(j).map( y=> finalDF.map(a=>a.filter(df(cols(j)) === y)))).toSeq.flatten
var twoColDF: Seq[Seq[DataFrame]] = null
if (colCount == 2) {
  val i = 0
  for (j <- i + 1 until colCount) {
    twoColDF = groupCount(j).map(y => {
      finalDF.map(x => x.filter(df(cols(j)) === y))
    })
  }
}
finalDF = twoColDF.flatten
Given a set of DataFrames, you can access each DataFrame's underlying RDD and use isEmpty to filter out the empty ones:
val input: Seq[DataFrame] = ???
val result = input.filter(!_.rdd.isEmpty())
As for your other question - I can't understand what your code tries to do, but I'd first try to convert it into something more functional (remove use of vars and imperative conditionals). If I'm guessing the meaning of your inputs, here's something that might be equivalent to what you're trying to do:
val input: Seq[DataFrame] = ???
// map of column index to column values -
// for each combination we'd want a new DF where that column has that value
// I'm assuming values are Strings, can be anything else
val groupCount: Map[Int, Seq[String]] = ???
// for each combination of DF + column + value - produce the filtered DF where this column has this value
val perValue: Seq[DataFrame] = for {
  df    <- input
  index <- groupCount.keySet
  value <- groupCount(index)
} yield df.filter(col(df.columns(index)) === value)
// remove empty results:
val result: Seq[DataFrame] = perValue.filter(!_.rdd.isEmpty())
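The same filter-out-the-empties pattern, sketched with plain collections standing in for DataFrames so it can run without Spark:

```scala
// Lists stand in for DataFrames; nonEmpty plays the role of !rdd.isEmpty()
val input: Seq[List[Int]] = Seq(List(1, 2), Nil, List(3), Nil)
val result: Seq[List[Int]] = input.filter(_.nonEmpty)
assert(result == Seq(List(1, 2), List(3)))
```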

How to convert values from a file to a Map in spark-scala?

I have my values in a file as comma separated. Now I want this data to be converted into key-value pairs (a Map). I know that we can split the values and store them in an Array like below.
val prop_file = sc.textFile("/prop_file.txt")
prop_file.map(_.split(",").map(s => Array(s)))
Is there any way to store the data as Map in spark-scala ?
Assuming that each line of your file contains two key/value pairs as comma-separated values, with keys and values alternating:
A,1,B,2
C,3,D,4
Something like this can be done:
val file = sc.textFile("/prop_file.txt")
val words = file.flatMap(x => createDataMap(x))
And here is the function createDataMap:
def createDataMap(data: String): Map[String, String] = {
  val array = data.split(",")
  // Build a Map from the four fields: (key1 -> value1, key2 -> value2)
  Map[String, String](
    array(0) -> array(1),
    array(2) -> array(3)
  )
}
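Restating the helper outside Spark, it can be checked directly on a made-up sample line:

```scala
def createDataMap(data: String): Map[String, String] = {
  val array = data.split(",")
  Map(array(0) -> array(1), array(2) -> array(3))
}

// A sample line carrying two key/value pairs
assert(createDataMap("A,1,B,2") == Map("A" -> "1", "B" -> "2"))
```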
Next, for retrieving the keys/values from the RDD you can leverage the following operations:
//This will print all elements of RDD
words.foreach(f=>println(f))
//Or You can filter the elements too.
words.filter(f=>f._1.equals("A"))
Sumit, I have used the code below to retrieve the value for a particular key, and it worked.
val words = file.flatMap(x => createDataMap(x)).collectAsMap
val valueofA = words("A")
print(valueofA)
This gives me 1 as a result.