How to read JSON in data frame column - scala

I'm reading an HDFS directory:
val schema = spark.read.json("/HDFS path").schema
val df = spark.read.schema(schema).json("/HDFS path")
Here I select only the primary-key columns and the timestamp from the JSON:
val df2 = df.select($"PK1", $"PK2", $"PK3", $"ts")
Then I use a window function to keep the most recently updated record per primary key, based on the timestamp:
val dfrank = df2.withColumn("rank", row_number().over(
    Window.partitionBy($"PK1", $"PK2", $"PK3").orderBy($"ts".desc)))
  .filter($"rank" === 1)
This window function gives me only the updated primary keys and their timestamps.
Now I have to add one more column holding the JSON of the records with those updated PKs and timestamps.
How can I do that?
I tried the following, but it gives the wrong JSON instead of the updated JSON:
val df3 = dfrank.withColumn("JSON", lit(dfrank.toJSON.first()))
The result is shown in the image.

Here, you convert the entire dataframe to JSON with toJSON and collect it to the driver (which will crash with a large dataframe), then add a column that contains a JSON version of only the first row of the dataframe. I don't think this is what you want.
From what I understand, you have a dataframe and, for each row, you want to create a JSON column that contains all of its columns. You could create a struct with all your columns and then use to_json like this:
val df3 = dfrank.withColumn("JSON", to_json(struct(dfrank.columns.map(col): _*)))
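As a quick, self-contained illustration of the same pattern (the tiny sample dataframe and its values are made up, and spark is assumed to be your SparkSession):
import org.apache.spark.sql.functions.{col, struct, to_json}
import spark.implicits._
// Build a small sample and attach a JSON column built from all of its columns.
val sample = Seq(("k1", "k2", "2019-09-05 22:00:00")).toDF("PK1", "PK2", "ts")
val withJson = sample.withColumn("JSON", to_json(struct(sample.columns.map(col): _*)))
// The JSON column now holds strings like {"PK1":"k1","PK2":"k2","ts":"2019-09-05 22:00:00"}.
withJson.show(false)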


How to drop the first row from a parquet file?

I have a parquet file which contains two columns (id, feature). The file consists of 14348 rows. How do I drop the first row (id, feature) from the file?
Code:
val df = spark.read.format("parquet").load("file:///usr/local/spark/dataset/model/data/user/part-r-00000-7d55ba81-5761-4e36-b488-7e6214df2a68.snappy.parquet")
val header = df.first()
val data = df.filter(row => row != header)
data.show()
The result is shown as output.
If you are trying to "ignore" the schema defined in the file, that is done implicitly once you read the file with Spark, like:
spark.read.format("parquet").load(your_file)
If you are only trying to skip the first row of your DF and you already know the id, you can do: val filteredDF = originalDF.filter(s"id != '${excludeID}'"). If you don't know the id, you can use monotonically_increasing_id to tag the rows and then filter, similar to: filter spark dataframe based on maximum value of a column
You need to drop the first row based on its id if you know it; otherwise go for the indexing approach, i.e., assign a row number and delete the first row, as in the sketch below.
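A hedged sketch of that indexing approach, assuming df is the dataframe loaded from the parquet file above; note that monotonically_increasing_id only guarantees increasing ids, not consecutive ones, so "first" here means the row with the smallest generated id:
import org.apache.spark.sql.functions.{col, monotonically_increasing_id}
// Tag each row with an id, find the smallest one, and drop that row.
val indexed = df.withColumn("_row_id", monotonically_increasing_id())
val firstId = indexed.select("_row_id").orderBy("_row_id").first().getLong(0)
val data = indexed.filter(col("_row_id") =!= firstId).drop("_row_id")
data.show()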
I'm using Spark 2.4.0, and you could use the header option of the DataFrameReader call like so:
spark.read.format("csv").option("header", true).load(<path_to_file>)
A reference for the other DataFrameReader options is here.

Unique timestamp column for every record in the dataframe

I have a dataframe which needs a unique load-timestamp column; no two records in the dataframe should have the same value in this field.
I tried using built-in methods such as current_timestamp() but it doesn't work. I even tried creating a UDF to generate the timestamp, as below:
val generateUniqueTimestamp = udf(() => new SimpleDateFormat("yyyy-MM-dd HH:mm:ss:SSS").format(new java.util.Date()).toString)
var df = dataFrame.withColumn("LOAD_TS", generateUniqueTimestamp())
Let's say there are three records in the dataframe; each should get an extra field with a timestamp.
Actual result
rec1 ,2019-09-05 22:00:00:000
rec2 ,2019-09-05 22:00:00:000
rec3 ,2019-09-05 22:00:00:000
Expected result
rec1 ,2019-09-05 22:00:00:000
rec2 ,2019-09-05 22:00:00:001
rec3 ,2019-09-05 22:00:00:002
The steps below solved the issue for now, though it might be a bad way to do it.
Created a method and registered it as a Spark UDF using:
spark.udf.register("uniquetimestamp", uniquetimestamp(_: String))
Created an empty column using:
.withColumn("LOAD_TS", lit(null: String))
Extracted all columns in the dataframe and then iterated through them to generate the SQL expressions (every column goes in as-is except for LOAD_TS):
//(cast(uniquetimestamp(load_ts) as timestamp) as load_ts)
val df = dataframe.selectExpr(sqlCastingExpr.split(","): _*)
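Putting those steps together as a hedged sketch: the body of the uniquetimestamp method is not shown above, so the counter-based implementation here is only an illustrative assumption (its uniqueness would not hold across executors), and dataframe is the input dataframe from the question:
import java.text.SimpleDateFormat
import java.util.concurrent.atomic.AtomicLong
import org.apache.spark.sql.functions.lit
// Hypothetical UDF body: appends a counter as the millisecond part so successive calls differ.
val counter = new AtomicLong(0L)
def uniquetimestamp(ignored: String): String = {
  val base = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new java.util.Date())
  // Use '.' before the milliseconds so the later cast to timestamp parses cleanly.
  f"$base.${counter.getAndIncrement() % 1000}%03d"
}
spark.udf.register("uniquetimestamp", uniquetimestamp(_: String))
// Add the empty LOAD_TS column, then rewrite it via the registered UDF in selectExpr;
// every other column passes through unchanged.
val withLoadTs = dataframe.withColumn("LOAD_TS", lit(null: String))
val sqlCastingExpr = withLoadTs.columns.map {
  case "LOAD_TS" => "cast(uniquetimestamp(LOAD_TS) as timestamp) as LOAD_TS"
  case other     => other
}.mkString(",")
val df = withLoadTs.selectExpr(sqlCastingExpr.split(","): _*)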

How can I create a new DataFrame from a list?

Hello guys, I have this function that gets the row values from a DataFrame, converts them into a list and then makes a DataFrame from it.
//Gets the row content from the "content" column
val dfList = df.select("content").rdd.map(r => r(0).toString).collect.toList
val dataSet = sparkSession.createDataset(dfList)
//Makes a new DataFrame
sparkSession.read.json(dataSet)
What do I need to do to make a list with the other column values, so I can have another DataFrame with those columns' values?
val dfList = df.select("content","collection", "h").rdd.map(r => {
println("******ROW********")
println(r(0).toString)
println(r(1).toString)
println(r(2).toString) //These have the row values from the other
//columns in the select
}).collect.toList
thanks
The approach doesn't look right; you don't need to collect the dataframe just to add new columns. Try adding columns directly to the dataframe using withColumn() or withColumnRenamed(): https://docs.azuredatabricks.net/spark/1.6/sparkr/functions/withColumn.html.
If you want to bring in columns from another dataframe, try joining. In any case it's not a good idea to use collect, as it will bring all your data to the driver.
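A minimal sketch of what the answer suggests; df is the dataframe from the question, while otherDf and the join key "collection" are made-up names used only for illustration:
import org.apache.spark.sql.functions.col
// Add or rename columns directly on the dataframe instead of collecting it to the driver.
val withExtraColumn = df.withColumn("content_copy", col("content"))
val renamed = withExtraColumn.withColumnRenamed("h", "header")
// To bring in columns from another dataframe, join on a shared key instead of building lists.
val joined = df.join(otherDf, Seq("collection"), "left")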

Capture and write string inside of dataframe using foreach row

I'm trying to capture and write a string value after substituting in contents obtained from specific fields of each row of a dataframe, using Scala. But since it is deployed on a cluster, I am not able to capture any records. Can anyone provide a solution?
Assuming TEST_DB.finalresult has 2 fields, input1 and input2:
val finalResult = spark.sql("select * from TEST_DB.finalresult")
finalResult.foreach { row =>
  val param1 = row.getAs("input1").asInstanceOf[String]
  val param2 = row.getAs("input2").asInstanceOf[String]
  val string = """new values of param1 and param2 are -> """ + param1 + """,""" + param2
  // how to append the modified string to a csv file continuously for each microbatch in hdfs ??
}
In your code you create the string variable you want, but it is not saved anywhere, so you can't see the result.
You could potentially open the csv file in each foreach execution and append the new string, but I'd like to propose a different solution.
If you can, try to always use Spark's built-in functionality, since it is (usually) better optimised and better at handling null inputs. You can achieve the same with:
import org.apache.spark.sql.functions.{lit, concat, col}
val modifiedFinalResult = finalResult.select(
  concat(
    lit("new values of param1 and param2 are -> "),
    col("input1"),
    lit(","),
    col("input2")
  ).alias("string")
)
In the variable modifiedFinalResult you will have a Spark dataframe with a single column named string, which holds exactly the same output as the variable string in your code. Afterwards you can save the dataframe directly as a single csv file (using the repartition functionality):
modifiedFinalResult.repartition(1).write.format("csv").save("path/to/your/csv/output")
PS: Also, a suggestion for the future: try to avoid naming variables after data types.
UPDATE: Fixed the empty-rows issue by using concat_ws instead of concat and applying coalesce to each field. It seems that some values which were null were turning the entire concatenated string null after the transformation. Nevertheless, this solution works for now!
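A hedged sketch of that fix, keeping the same column names as above: concat_ws skips null inputs and coalesce substitutes an empty string, so a single null column no longer makes the whole concatenated value null:
import org.apache.spark.sql.functions.{coalesce, col, concat_ws, lit}
val modifiedFinalResult = finalResult.select(
  concat_ws("",
    lit("new values of param1 and param2 are -> "),
    coalesce(col("input1"), lit("")),
    lit(","),
    coalesce(col("input2"), lit(""))
  ).alias("string")
)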

Getting max value out of a dataframe column of timestamp in scala/spark

I am working with a Spark dataframe that contains all the timestamp values in the column 'IMG_CREATED_DT'. I have used the collectAsList() and toString() methods to get the values as a List and convert them to a String, but I can't figure out how to fetch the max value out of it. Please guide me on this.
val query_new = s"""(select IMG_CREATED_DT from ${conf.get(UNCAppConstants.DB2_SCHEMA)}.$table)"""
println(query_new)
val db2_op = ConnectionUtilities_v.createDataFrame(src_props, srcConfig.url, query_new)
val t3 = db2_op.select("IMG_CREATED_DT").collectAsList().toString
How do I get the max value out of t3?
You can calculate the max value from the dataframe itself. Try the following sample:
val t3 = db2_op.agg(max("IMG_CREATED_DT").as("maxVal")).take(1)(0).get(0)
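For completeness, a hedged, self-contained version of the same aggregation; it assumes db2_op is the dataframe loaded above and that IMG_CREATED_DT is a timestamp column:
import org.apache.spark.sql.functions.max
val maxCreatedDt = db2_op
  .agg(max("IMG_CREATED_DT").as("maxVal"))
  .head()
  .getTimestamp(0) // returns a java.sql.Timestamp; use getString(0) instead if the column is a string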