Apply a function to a list in Scala

I'm trying to learn Scala and would like help understanding the foreach loop.
I have a function that reads only the latest CSV from a path, but it works only when I point it at a single directory:
val path = "src/main/resources/historical_novel/"

def getLastFile(path: String, spark: SparkSession): String = {
  val hdfs = ...
}
But how can I apply this function to a list of paths such as
val paths: List[String] = List(
  "src/main/resources/historical_novel/",
  "src/main/resources/detective/",
  "src/main/resources/adventure/",
  "src/main/resources/horror/")
I want to get a result like this:
src/main/resources/historical_novel/20221027.csv
src/main/resources/detective/20221026.csv
src/main/resources/adventure/20221026.csv
src/main/resources/horror/20221027.csv
I created a df with a path column, then applied the function through withColumn, and that works,
but I want to do it with foreach, to understand it.

Let's say your function looks like this:
def f(s: String): Unit = {}
Then you can simply do this:
paths.foreach(p => f(p))
After your edit, I think you may want to use map, a function that transforms a collection into another collection, like this:
val result = paths.map(p => getLastFile(p, yourSparkSession))
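Putting the two together, here is a minimal sketch (assuming your getLastFile and a SparkSession named spark are in scope, as in your question) that collects the latest file for each directory and prints each resolved path:

// sketch only: getLastFile and spark are assumed to exist as in the question
val latestFiles: List[String] = paths.map(p => getLastFile(p, spark))

// foreach is the right tool for a side effect such as printing
latestFiles.foreach(println)
// src/main/resources/historical_novel/20221027.csv
// src/main/resources/detective/20221026.csv
// ...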

foreach applies a function you define or provide to each element of a collection.
The simplest example is to print each path to the console:
paths.foreach(path => println(path))
To apply a series of operations as you describe, you can use {} in the foreach body and call multiple functions:
paths.foreach(path => {
  val file = loadFile(path)
  writeToDataBase(file)
})

Related

How to create a map for (key - image name // value - image-file) in Scala

def getListOfImageNames(dir: String): List[String] = {
  val names = new File(dir)
  names.listFiles.filter(_.isFile)
    .map(_.getName).toList
}

def getListOfImages(dir: String): List[String] = {
  val files = new File(dir)
  files.listFiles.filter(_.isFile)
    // note: chaining two endsWith filters keeps only files ending in both ".png" and ".jpg",
    // which never happens; combining them with || keeps either extension
    .filter(f => f.getName.endsWith(".png") || f.getName.endsWith(".jpg"))
    .map(_.getPath).toList
}
I have a directory with different photos, small and large, and I have already managed to write two methods: one pulls out only the names of the photos, and the other the photo files themselves. How can I now combine them into a map, then calculate their resolution using a method, and, if a photo is larger than, for example, 500x500, add a prefix to its name and save it in the X folder? Do you have any ideas? I'm not experienced in Scala, but I like the language very much.
As I understand it, you need a map of image name to image path. You can achieve it like so:
def getImagesMap(dirPath: String): Map[String, String] = {
  val directory = new File(dirPath)
  directory.listFiles.collect {
    case file if file.isFile &&
      (file.getName.endsWith(".png") ||
       file.getName.endsWith(".jpg")) =>
      file.getName -> file.getPath
  }.toMap
}
Here I use the collect function, which is like a combination of map and filter. Inside collect is a pattern-matching expression: if a file matches the pattern, it evaluates the pair creation (file name to file path); otherwise the file is filtered out. Afterwards I use toMap to convert the Array[(String, String)] to a Map[String, String]. You can read more about collect here.
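For the second part of the question (checking the resolution and saving large images with a prefix), here is a minimal sketch; the 500x500 threshold, the prefix, and the target directory are illustrative assumptions, and ImageIO.read is used only to obtain the image dimensions:

import java.io.File
import java.nio.file.{Files, Paths, StandardCopyOption}
import javax.imageio.ImageIO

// sketch only: copy images larger than 500x500 into targetDir with a prefix added to the name
def copyLargeImages(dirPath: String, targetDir: String, prefix: String = "big_"): Unit =
  getImagesMap(dirPath).foreach { case (name, path) =>
    val image = ImageIO.read(new File(path)) // returns null if the file is not a readable image
    if (image != null && image.getWidth > 500 && image.getHeight > 500)
      Files.copy(
        Paths.get(path),
        Paths.get(targetDir, prefix + name),
        StandardCopyOption.REPLACE_EXISTING
      )
  }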

Scio Apache Beam - How to properly separate a pipeline code?

I have a pipeline with a set of PTransforms and my method is getting very long.
I'd like to write my DoFns and my composite transforms in a separate package and use them in my main method. With Python it's pretty straightforward; how can I achieve that with Scio? I don't see any example of doing that. :(
withFixedWindows(
  FIXED_WINDOW_DURATION,
  options = WindowOptions(
    trigger = groupedWithinTrigger,
    timestampCombiner = TimestampCombiner.END_OF_WINDOW,
    accumulationMode = AccumulationMode.ACCUMULATING_FIRED_PANES,
    allowedLateness = Duration.ZERO
  )
)
  .sumByKey
  // How to write this in another file and use it here?
  .transform("Format Output") {
    _
      .withWindow[IntervalWindow]
      .withTimestamp
  }
If I understand your question correctly, you want to bundle your map, groupBy, ... transformations in a separate package, and use them in your main pipeline.
One way would be to use applyTransform, but then you would end up using PTransforms, which are not scala-friendly.
You can simply write a function that receives an SCollection and returns the transformed one, like:
def myTransform(input: SCollection[InputType]): SCollection[OutputType] = ???
But if you intend to write your own Source/Sink, take a look at the ScioIO class
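For example, a minimal sketch of that function-based approach (the object name and element types here are purely illustrative) might look like:

import com.spotify.scio.values.SCollection

object Formatting {
  // a reusable transform kept in its own file/package
  def formatOutput(input: SCollection[(String, Double)]): SCollection[String] =
    input.transform("Format Output") {
      _.map { case (key, total) => s"$key,$total" }
    }
}

// in the main pipeline:
// Formatting.formatOutput(summed)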
You can use the map function to map your elements.
Instead of passing a lambda, you can pass a method reference from another class,
for example .map(MyClass.MyFunction)
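A tiny illustrative sketch of that idea (the object and method names here are just assumptions):

object Cleaning {
  def normalize(line: String): String = line.trim.toLowerCase
}

// in the pipeline, instead of an inline lambda:
// inputStream.map(Cleaning.normalize)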
I think one way to solve this could be to define an object in another package and then create a method in that object that would have the logic required for your transformation. For example:
def main(cmdlineArgs: Array[String]): Unit = {
  val (sc, args) = ContextAndArgs(cmdlineArgs)
  val defaultTopic = "tweets"
  val input = args.getOrElse("inputTopic", defaultTopic)
  val output = args("outputTopic")
  val inputStream: SCollection[Tweet] = sc.withName("read from pub sub").pubsubTopic(input)
    .withName("map to tweet class").map(x => { parse(x).extract[Tweet] })
  inputStream
    .flatMap(sentiment.predict) // object sentiment with method predict
}

object sentiment {
  def predict(tweet: Tweet): Option[List[TweetSentiment]] = {
    val data = tweet.text
    val emptyCase = Some("")
    Some(data) match {
      case `emptyCase` => None
      case Some(v) => Some(entitySentimentFile(data)) // entitySentimentFile: another method, not defined here
    }
  }
}
Please also see this link for an example given in the Scio examples.

How to define a function in scala for flatMap

I'm new to Scala and want to rewrite some code so the flatMap calls a function instead of having the whole process written inline inside the parentheses.
The original code is:
val longForm = summary.flatMap(row => {
  /* This is the code I want to replace with a function */
  val metric = row.getString(0)
  (1 until row.size).map { i =>
    (metric, schema(i).name, row.getString(i).toDouble)
  }
} /* End of function */)
The function I wrote is:
def tfunc(line: Row): List[Any] = {
  val metric = line.getString(0)
  var res = List[Any]
  for (i <- 1 to line.size) {
    /* Save each iteration result as a List[tuple], then append to the res List. */
    val tup = (metric, schema(i).name, line.getString(i).toDouble)
    val tempList = List(tup)
    res = res :: tempList
  }
  res
}
The function did not pass compilation, with the following error:
error: missing argument list for method apply in object List
Unapplied methods are only converted to functions when a function type is expected.
You can make this conversion explicit by writing apply _ or apply(_) instead of apply.
var res = List[Any]
What is wrong with this function?
And for flatMap, is returning the result as a List the right way?
You haven't explained why you want to replace that code block. Is there a particular goal you're after? There are many, many different ways that block could be rewritten. How can we know which would be better at meeting your requirements?
Here's one approach.
def tfunc(line: Row): List[(String, String, Double)] = {
  val metric = line.getString(0)
  // Row has no tail method, so use size - 1 for the number of remaining fields
  List.tabulate(line.size - 1) { idx =>
    (metric, schema(idx + 1).name, line.getString(idx + 1).toDouble)
  }
}
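It can then be plugged straight into the original flatMap call, for example:

// same shape as the original inline block, assuming summary and schema are in scope
val longForm = summary.flatMap(row => tfunc(row))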

Scala iterator on pattern match

I need help iterating this piece of code written in Spark/Scala with DataFrames. I'm new to Scala, so I apologize if my question seems trivial.
The function is very simple: given a dataframe, it casts a column if its name matches a pattern, otherwise it selects the field unchanged.
/* Load sources */
val df = sqlContext.sql("select id_vehicle, id_size, id_country, id_time from " + working_database + carPark);
val df2 = df.select(
  df.columns.map {
    case id_vehicle @ "id_vehicle" => df(id_vehicle).cast("Int").as(id_vehicle)
    case other => df(other)
  }: _*
)
This function, with pattern matching, works perfectly!
Now I have a question: is there any way to "iterate" this? In practice I need a function that, given a dataframe, an Array[String] of columns (column_1, column_2, ...) and another Array[String] of types (int, double, float, ...), returns the same dataframe with the right cast at the right position.
I need help :)
// Your supplied code fits nicely into this function
def castOnce(df: DataFrame, colName: String, typeName: String): DataFrame = {
  val colsCasted = df.columns.map {
    case `colName` => df(colName).cast(typeName).as(colName) // backticks match the value of colName, not a new variable
    case other => df(other)
  }
  df.select(colsCasted: _*)
}

def castMany(df: DataFrame, colNames: Array[String], typeNames: Array[String]): DataFrame = {
  assert(colNames.length == typeNames.length, "The lengths are different")
  val colsWithTypes: Array[(String, String)] = colNames.zip(typeNames)
  // foldLeft's first parameter is the accumulated DataFrame, the second is the (column, type) pair
  colsWithTypes.foldLeft(df)((curDf, colAndType) => castOnce(curDf, colAndType._1, colAndType._2))
}
When you have a function that you just need to apply many times to the same thing, a fold is often what you want.
The above code zips the two arrays together to combine them into one.
It then iterates through this list, applying your function to the dataframe each time and passing the resulting dataframe on to the next pair, and so on.
Based on your edit I filled in the function above. I don't have a compiler handy, so I'm not 100% sure it's correct. Having written it out, I am also left questioning my original approach. Below is a better way, I believe, but I am leaving the previous one for reference.
def castMany(df: DataFrame, colNames: Array[String], typeNames: Array[String]): DataFrame = {
  assert(colNames.length == typeNames.length, "The lengths are different")
  val nameToType: Map[String, String] = colNames.zip(typeNames).toMap
  val newCols = df.columns.map { dfCol =>
    nameToType.get(dfCol).map { newType =>
      df(dfCol).cast(newType).as(dfCol)
    }.getOrElse(df(dfCol))
  }
  df.select(newCols: _*)
}
The above code creates a map from column name to the desired type.
Then, for each column in the dataframe, it looks the type up in the Map.
If a type exists, we cast the column to that new type. If the column does not exist in the Map, we default to the column from the DataFrame directly.
We then select these columns from the DataFrame.
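For example, a hypothetical call with the columns from the question (the type names are just an assumption):

// sketch only: cast id_vehicle to Int and id_size to Double, leaving other columns unchanged
val df3 = castMany(df, Array("id_vehicle", "id_size"), Array("Int", "Double"))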

Formatting the join rdd - Apache Spark

I have two key-value pair RDDs. I join the two RDDs and save the result with saveAsTextFile; here is the code:
val enKeyValuePair1 = rows_filter6.map(line => (line(8) -> (line(0),line(4),line(10),line(5),line(6),line(14),line(1),line(9),line(12),line(13),line(3),line(15),line(7),line(16),line(2),line(14))))
val enKeyValuePair = DATA.map(line => (line(0) -> (line(2),line(3))))
val final_res = enKeyValuePair1.leftOuterJoin(enKeyValuePair)
val output = final_res.saveAsTextFile("C:/out")
my output is as follows:
(534309,((17999,5161,45005,00000,XYZ,,29.95,0.00),None))
How can I get rid of all the parentheses?
I want my output as follows:
534309,17999,5161,45005,00000,XYZ,,29.95,0.00,None
When outputting to a text file, Spark will just use the toString representation of the elements in the RDD. If you want control over the format, you can do one last transform of the data to a String before the call to saveAsTextFile.
Luckily the tuples that arise from using the Spark API can be pulled apart using destructuring. In your example I'd do:
val final_res = enKeyValuePair1.leftOuterJoin(enKeyValuePair)
val formatted = final_res.map { tuple =>
  val (f1, ((f2, f3, f4, f5, f6, f7, f8, f9), f10)) = tuple
  Seq(f1, f2, f3, f4, f5, f6, f7, f8, f9, f10).mkString(",")
}
formatted.saveAsTextFile("C:/out")
The first val line will take the tuple that is passed into the map function and assign the components to the values on the left. The second line creates a temporary Seq with the fields in the order you want displayed and then invokes mkString(",") to join the fields using a comma.
In cases with fewer fields, or when you're just hacking away at a problem in the REPL, a slight alternative to the above is to use pattern matching in the partial function passed to map.
simpleJoinedRdd.map { case (key, (left, right)) => s"$key,$left,$right" }
While that does allow you to make it a single-line expression, it can throw exceptions if the data in the RDD doesn't match the pattern provided, as opposed to the earlier example where the compiler will complain if the tuple parameter cannot be destructured into the expected form.
You can do something like this:
import scala.collection.JavaConversions._
val output = sc.parallelize(List((534309,((17999,5161,45005,1,"XYZ","",29.95,0.00),None))))
val result = output.map(p => p._1 +=: p._2._1.productIterator.toBuffer += p._2._2)
  .map(p => com.google.common.base.Joiner.on(", ").join(p.iterator))
I used Guava to format the string, but there is probably a Scala way of doing this.
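For instance, a plain-Scala sketch of the same idea, without Guava, could be:

// sketch only: flatten the nested tuple with productIterator and join with mkString
val result = output.map { case (key, (left, right)) =>
  (Seq(key) ++ left.productIterator ++ Seq(right)).mkString(",")
}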
Do a flatMap before saving, or write a simple format function and use it in map.
Adding a bit of code, just to show how it can be done; the function formatOnDemand can be anything.
test = sc.parallelize([(534309, ((17999, 5161, 45005, 00000, "XYZ", "", 29.95, 0.00), None))])
print test.collect()
print test.map(formatOnDemand).collect()

def formatOnDemand(t):
    out = []
    out.append(t[0])
    for tok in t[1][0]:
        out.append(tok)
    out.append(t[1][1])
    return out
>>>
[(534309, ((17999, 5161, 45005, 0, 'XYZ', '', 29.95, 0.0), None))]
[[534309, 17999, 5161, 45005, 0, 'XYZ', '', 29.95, 0.0, None]]