How to emulate the array_join() method in spark 2.2 - scala

For example, if I have a dataframe like this:
|sex| state_name| salary| www|
|---|------------------|-------|----|
| M| Ohio,California| 400|3000|
| M| Oakland| 70| 300|
| M|DF,Tbilisi,Calgary| 200|3500|
| M| Belice| 200|3000|
| m| Sofia,Helsinki| 800|7000|
I need to concatenate as a String the comma-separated values in the "state_name" column with a delimiter specified by me. I also need to put a string at the beginning and the end of the generated string (the opposite of what a strip() method or function does).
For example, if I want an output like this:
|cool_city |
|--------------------------------|
|[***Ohio<-->California***] |
|[***Oakland***] |
|[***DF<-->Tbilisi<-->Calgary***]|
|[***Belice***] |
|[***Sofia<-->Helsinki***] |
The solution that I've already coded with Spark 3.1.1 is this:
df.select(concat(lit("[***"),
array_join(split(col("state_name"),","),"<-->"),lit("***]")).as("cool_city")).show()
The problem is that the computer where this will be running uses Spark 2.1.1, and the array_join() method isn't supported in that version (it's a pretty big project and upgrading the Spark version isn't on the table). I'm pretty new to Scala/Spark and I don't know if there's another function that could help me emulate array_join(), or if someone knows how to write a UDF with the same functionality.
I would greatly appreciate your help!

I don't know Scala, but try this:
df.select(concat(lit("[***"),
concat_ws("<-->", split(col("state_name"), ",")),
lit("***]")).as("cool_city")).show()
UPDATE
Avoiding column split:
df.select(concat(lit("[***"),
regexp_replace(col("state_name"), ",", "<-->"),
lit("***]")).as("cool_city")).show()


What is the canonical way to create objects from rows of a Spark dataframe?

I am using Apache Zeppelin (0.9.0) and Scala (2.11.12). I want to pull some data out of a dataframe and store it to InfluxDB, later to be visualized in Grafana, and cannot figure it out. I'm trying a naive approach with a foreach loop. The idea is to iterate through all rows, extract the columns I need, create a Point object (from this InfluxDB client library), and either send it to InfluxDB or add it to a list and then send all the points in bulk, after the loop.
The dataframe looks like this:
+---------+---------+-------------+-----+
|timestamp|sessionId|eventDuration|speed|
+---------+---------+-------------+-----+
| 1| ses1| 0.0| 50|
| 2| ses1| 1.0| 50|
| 3| ses1| 2.0| 50|
I've tried to do what is described above:
import scala.collection.mutable.ListBuffer
import spark.implicits._
import org.apache.spark.sql._
import com.paulgoldbaum.influxdbclient._
import scala.concurrent.ExecutionContext.Implicits.global
val influxdb = InfluxDB.connect("172.17.0.4", 8086)
val database = influxdb.selectDatabase("test")
var influxData = new ListBuffer[Point]()
dfAnalyseReport.foreach(row => {
  val point = Point("acceleration")
    .addTag("speedBin", row.getLong(3).toString)
    .addField("eventDuration", row.getDouble(2))
  influxData += point
})
val influxDataList = influxData.toList
database.bulkWrite(influxDataList)
The only thing I am getting here is a cryptic java.lang.ClassCastException with no additional info, neither in the notebook output nor in the logs of the Zeppelin Docker container. The error seems to be somewhere in the foreach, as it appears even when I comment out the last two lines.
I also tried adapting approach 1 from this answer, using a case class for columns, but to no avail. I got it to run without an error, but the resulting list was empty. Unfortunately I deleted that attempt. I could reconstruct it if necessary, but I've spent so much time on this I'm fairly certain I have some fundamental misunderstanding on how this should be done.
One further question: I also tried writing each Point to the DB as it was constructed (instead of in bulk). The only difference is that instead of appending to the ListBuffer I did a database.write(point) operation. When done outside of the loop with a dummy point, it goes through without a problem - the data ends up in InfluxDB - but inside the loop it results in org.apache.spark.SparkException: Task not serializable.
Could someone point me in the right way? How should I tackle this?
I'd do it with the RDD map method and collect the results to a list:
val influxDataList = dfAnalyseReport.rdd.map(row =>
  Point("acceleration")
    .addTag("speedBin", row.getInt(3).toString)
    .addField("eventDuration", row.getDouble(2))
).collect.toList
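Regarding the follow-up question about writing each point inside the loop: the Task not serializable error happens because the database connection created on the driver gets captured by the foreach closure and shipped to the executors. A hedged sketch of the usual workaround is to open the connection inside foreachPartition so everything lives on the executor (connection details taken from the question; whether this fits your InfluxDB client exactly is an assumption):
dfAnalyseReport.rdd.foreachPartition { rows =>
  // the connection is created on the executor, so nothing non-serializable
  // has to be sent from the driver
  val influxdb = InfluxDB.connect("172.17.0.4", 8086)
  val database = influxdb.selectDatabase("test")
  val points = rows.map { row =>
    Point("acceleration")
      .addTag("speedBin", row.getLong(3).toString)
      .addField("eventDuration", row.getDouble(2))
  }.toList
  database.bulkWrite(points)
}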

Scala/Spark - Find total number of value in row based on a key

I have a large text file which contains the page views of some Wikimedia projects. (You can find it here if you're really interested) Each line, delimited by a space, contains the statistics for one Wikimedia page. The schema looks as follows:
<project code> <page title> <num hits> <page size>
In Scala, using Spark RDDs or Dataframes, I wish to compute the total number of hits for each project, based on the project code.
So for example for projects with the code "zw", I would like to find all the rows that begin with project code "zw", and add up their hits. Obviously this should be done for all project codes simultaneously.
I have looked at functions like aggregateByKey etc, but the examples I found don't go into enough detail, especially for a file with 4 fields. I imagine it's some kind of MapReduce job, but how exactly to implement it is beyond me.
Any help would be greatly appreciated.
First, you have to read the file in as a Dataset[String]. Then, parse each string into a tuple so that it can be easily converted to a DataFrame. Once you have a DataFrame, a simple .groupBy().agg() is enough to finish the computation.
import org.apache.spark.sql.functions.sum
val df = spark.read.textFile("/tmp/pagecounts.gz").map(l => {
  val a = l.split(" ")
  (a(0), a(2).toLong)
}).toDF("project_code", "num_hits")
val agg_df = df.groupBy("project_code")
  .agg(sum("num_hits").as("total_hits"))
  .orderBy($"total_hits".desc)
agg_df.show(10)
The above snippet shows the top 10 project codes by total hits.
+------------+----------+
|project_code|total_hits|
+------------+----------+
| en.mw| 5466346|
| en| 5310694|
| es.mw| 695531|
| ja.mw| 611443|
| de.mw| 572119|
| fr.mw| 536978|
| ru.mw| 466742|
| ru| 463437|
| es| 400632|
| it.mw| 400297|
+------------+----------+
It is certainly also possible to do this with the older RDD API as a map/reduce, but you lose many of the optimizations that the Dataset/DataFrame API brings.
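For reference, a minimal sketch of the same aggregation with the RDD API mentioned above (reduceByKey), assuming the same file path; you lose the Catalyst optimizations of the DataFrame version:
val totals = sc.textFile("/tmp/pagecounts.gz")
  .map(_.split(" "))
  .map(a => (a(0), a(2).toLong))   // (project_code, num_hits)
  .reduceByKey(_ + _)              // sum the hits per project code

totals.sortBy(_._2, ascending = false).take(10).foreach(println)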

Delete Unicode value in output of Spark 1.6 using Scala

The file generated from API contains data like below
col1,col2,col3
503004,(d$üíõ$F|'.h*Ë!øì=(.î;      ,.¡|®!®3-2-704
When I read it in Spark it appears like this. I am using a case class to read it from an RDD and then convert it to a DataFrame using .toDF.
503004,������������,������������������������3-2-704
But I am trying to get a value like
503004,dFh,3-2-704 -- only the alphanumeric value is retained.
I am using Spark 1.6 and Scala.
Please share your suggestions.
This can be achieved by using regexp_replace:
val df = spark.sparkContext.parallelize(List(("503004","d$üíõ$F|'.h*Ë!øì=(.î; ,.¡|®!®","3-2-704"))).toDF("col1","col2","col3")
df.withColumn("col2_new", regexp_replace($"col2", "[^a-zA-Z]", "")).show()
Output:
+------+--------------------+-------+--------+
| col1| col2| col3|col2_new|
+------+--------------------+-------+--------+
|503004|d$üíõ$F|'.h*Ë!øì=...|3-2-704| dFh|
+------+--------------------+-------+--------+
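Since the question mentions Spark 1.6, which has no SparkSession, here is a hedged 1.6-style sketch of the same idea, assuming a sqlContext is in scope; note that the pattern [^a-zA-Z0-9] keeps digits as well, which matches the "only alpha-numeric" requirement more closely:
import org.apache.spark.sql.functions.regexp_replace
import sqlContext.implicits._

val df = sc.parallelize(Seq(("503004", "d$üíõ$F|'.h*Ë!øì=(.î; ,.¡|®!®", "3-2-704")))
  .toDF("col1", "col2", "col3")

// keep only letters and digits in col2
df.withColumn("col2_new", regexp_replace($"col2", "[^a-zA-Z0-9]", "")).show()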

store elements to hashset from file scala

I am playing a little bit with Scala and I want to open a text file, read each line, and save some of the fields in a HashSet.
The input file will be something like this:
1 2 3
2 4 5
At first, I am just trying to store the first element of each column in a variable, but nothing seems to happen.
My code is:
var id = 0
val textFile = sc.textFile(inputFile);
val nline = textFile.map(_.split(" ")).foreach(r => id = r(0))
I am using Spark because I want to process a bigger amount of data later, so I'm trying to get used to it. I am printing id but I get only 0.
Any ideas?
A couple of things:
First, inside map and foreach you are running code out on your executors. The id variable you defined is on the driver. You can pass variables to your executors using closures, but not the other way around. If you think about it, when you have 10 executors running through records simultaneously which value of ID would you expect to be returned?
Edit - foreach is an action
I mistakenly called foreach not an action below. It is an action that just lets your run arbitrary code against your rows. It is useful if you have your own code to save the result to a different data store for example. foreach does not bring any data back to the driver, so it does not help with your case.
End edit
Second, all of the Spark methods you called are transformations; you haven't called an action yet. Spark doesn't actually run any code until an action is called. Instead it just builds a graph of the transformations you want to happen until you specify an action. Actions are things that require materializing a result, either to provide data back to the driver or to save it out somewhere like HDFS.
In your case, to get values back you will want to use an action like "collect" which returns all the values from the RDD back to the driver. However, you should only do this when you know there aren't going to be many values returned. If you are operating on 100 million records you do not want to try and pull them all back to the driver! Generally speaking you will want to only pull data back to the driver after you have processed and reduced it.
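To make that concrete, a minimal sketch of the collect-based approach, assuming the inputFile variable from the question and that you only want the first field of every line:
val textFile = sc.textFile(inputFile)
val firstFields: Set[String] = textFile
  .map(_.split(" ")(0))   // keep only the first field of each line
  .collect()              // action: brings the values back to the driver
  .toSet                  // store them in a hash-based Set on the driver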
i am just trying to store the first element of each column to a
variable but nothing seems to happen.
val file_path = "file.txt"
val ds = ss.read.textFile(file_path)
val ar = ds.map(x => x.split(" ")).first()
val (x,y,z) = (ar(0),ar(1),ar(2))
You can access the first value of the columns with x,y,z as above.
With your file, x=1, y=2, z=3.
val ar1 = ds.map(x => x.split(" "))
val final_ds = ar1.select($"value".getItem(0).as("col1"), $"value".getItem(1).as("col2"), $"value".getItem(2).as("col3")) // you can name the columns like this
Output :
+----+----+----+
|col1|col2|col3|
+----+----+----+
| 1| 2| 3|
| 2| 4| 5|
+----+----+----+
You can run any kind of SQL on final_ds, like the small sample below.
final_ds.select("col1","col2").where(final_ds.col("col1") > 1).show()
Output:
+----+----+
|col1|col2|
+----+----+
| 2| 4|
+----+----+
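If the original goal of putting values into a HashSet still applies, a hedged one-liner on top of final_ds (assuming the same ss SparkSession and its implicits as above):
import ss.implicits._
val col1Set: Set[String] = final_ds.select("col1").as[String].collect().toSet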

Spark 2.1.0 structure streaming with local CSV file

Just for learning the new Spark structured streaming, I tried such an experiment, but I'm not sure if I did anything wrong with the streaming function.
First, I started with something static and just use the simple text (csv) file coming with Spark 2.1.0:
val df = spark.read.format("csv").load(".../spark2/examples/src/main/resources/people.txt")
df.show()
and I can get reasonable output (under Zeppelin):
+-------+---+
| _c0|_c1|
+-------+---+
|Michael| 29|
| Andy| 30|
| Justin| 19|
+-------+---+
Following the example, I just modified the code to read the same file and supplied a schema:
val userSchema = new StructType().add("name", "string").add("age", "integer")
val csvDF = spark
.readStream
.schema(userSchema) // Specify schema of the csv files
.format("csv")
.load(".../spark2/examples/src/main/resources/people.csv")
There was no error message, so I thought I would write the data to memory and see the results with the following code:
val outStream = csvDF.writeStream
.format("memory")
.queryName("logs")
.start()
sql("select * from logs").show(truncate = false)
However, with no error message, I kept getting "empty output":
+----+---+
|name|age|
+----+---+
+----+---+
The code was tested under Zeppelin 0.7 and I am not sure if I missed anything here. Meanwhile, I also tried the example from the Apache Spark 2.1.0 official site with $ nc -lk 9999 and it ran very well.
May I learn if I did something wrong?
[modified & tested]
I tried and replicated the same file people.txt to people1.csv
peopele2.csv people3.csv under one .../csv/ folder
val csvDF = spark.readStream.schema(userSchema).csv("/somewhere/csv")
csvDF.groupBy("name").count().writeStream.outputMode("complete").format("console").start().awaitTermination()
and I got this:
-------------------------------------------
Batch: 0
-------------------------------------------
+-------+-----+
| name|count|
+-------+-----+
|Michael| 3|
| Andy| 3|
| Justin| 3|
+-------+-----+
Therefore, I don't think it is a readStream() issue ...
The file name is people.txt, not people.csv. Spark will throw an error saying "Path does not exist". I just used Spark Shell to verify it.
The input path should be a directory. It doesn't make sense to use a file since this is a streaming query.
You have 2 differences in the code:
1. The non-working one has output mode of "append" (default) but the working one has output mode of "complete".
2. The non-working one selects records without aggregation, but the working one has a groupBy aggregation.
I suggest you switch to complete output mode and do a groupBy count to see if it fixes the problem.
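Putting the answers together, a hedged sketch that keeps the original select-only query: point readStream at a directory of CSV files (the single people.csv path in the question does not exist; the file is people.txt), keep the default append mode since there is no aggregation, and wait for the available files to be processed before querying the memory sink. The /somewhere/csv path is the one assumed in the question's own test:
import org.apache.spark.sql.types.StructType

val userSchema = new StructType().add("name", "string").add("age", "integer")

val csvDF = spark.readStream
  .schema(userSchema)
  .format("csv")
  .load("/somewhere/csv")       // must be a directory, not a single file

val query = csvDF.writeStream
  .format("memory")
  .queryName("logs")
  .outputMode("append")         // append works here because there is no aggregation
  .start()

query.processAllAvailable()     // block until the currently available files are processed
spark.sql("select * from logs").show(truncate = false)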