Spark : converting Array[Byte] data to RDD or DataFrame - scala

I have data in the form of Array[Byte] which I want to convert into a Spark RDD or DataFrame so that I can write the data directly to a Google bucket as a file. I am not able to write Array[Byte] data to a Google bucket directly, so I am looking for this conversion.
The code below is able to write data to the local FS, but not to a Google bucket:
val encrypted = encrypt(original, readPublicKey(pubKey), outFile, true, true)
val dfis = new FileOutputStream(outFile)
dfis.write(encrypted)
dfis.close()
def encrypt(clearData: Array[Byte], encKey: PGPPublicKey, fileName: String, withIntegrityCheck: Boolean, armor: Boolean): Array[Byte] = {
...
}
So how can I convert Array[Byte] data to RDD or DataFrame? I am using Scala.

Just use .toDF() or .toDF().rdd:
scala> val arr: Array[Byte] = Array(192.toByte, 168.toByte, 1.toByte, 4.toByte)
arr: Array[Byte] = Array(-64, -88, 1, 4)
scala> val df = arr.toSeq.toDF()
df: org.apache.spark.sql.DataFrame = [value: tinyint]
scala> df.show()
+-----+
|value|
+-----+
|  -64|
|  -88|
|    1|
|    4|
+-----+
scala> df.printSchema()
root
 |-- value: byte (nullable = false)
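
As a follow-up, once the bytes are in a DataFrame, Spark can write them out to a bucket path itself. Below is a minimal sketch, assuming Spark 2.x, the GCS connector (gs:// scheme) configured for the cluster, and a hypothetical bucket path; note that this writes the byte values as text rows, not the original binary blob:

import spark.implicits._

val encrypted: Array[Byte] = Array(192.toByte, 168.toByte, 1.toByte, 4.toByte) // stand-in for the real encrypted payload
val df = encrypted.toSeq.toDF("value")
df.coalesce(1)                           // keep all values in a single output file
  .write.mode("overwrite")
  .csv("gs://my-bucket/encrypted-bytes") // hypothetical bucket path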

Related

How to wrap multiple sql functions into a UDF in Spark?

I am working with Spark 2.3.2.
On one column within my Dataframe I am performing many spark.sql.functions sequentially. How can I wrap this sequence of functions into a user-defined function (UDF) to make it reusable?
Here is my example, focusing on the one column "columnName". First I create my test data:
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{LongType, StructType}

val testSchema = new StructType()
  .add("columnName", new StructType()
    .add("2020-11", LongType)
    .add("2020-12", LongType)
  )
val testRow = Seq(Row(Row(1L, 2L)))
val testRDD = spark.sparkContext.parallelize(testRow)
val testDF = spark.createDataFrame(testRDD, testSchema)
testDF.printSchema()
/*
root
 |-- columnName: struct (nullable = true)
 |    |-- 2020-11: long (nullable = true)
 |    |-- 2020-12: long (nullable = true)
*/
testDF.show(false)
/*
+----------+
|columnName|
+----------+
|[1, 2]    |
+----------+
*/
And here is the sequence of applied Spark SQL functions (just as an example):
val testResult = testDF
.select(explode(split(regexp_replace(to_json(col("columnName")), "[\"{}]", ""), ",")).as("result"))
I am failing to create a UDF "myUDF" such that I can get the same result when calling
val testResultWithUDF = testDF.select(myUDF(col("columnName")))
This is what I "would like" to do:
def parseAndExplode(spalte: Column): Column = {
  explode(split(regexp_replace(to_json(spalte), "[\"{}]", ""), ","))
}
val myUDF = udf(parseAndExplode _)
testDF.withColumn("udf_result", myUDF(col("columnName"))).show(false)
but it is throwing an Exception:
Schema for type org.apache.spark.sql.Column is not supported
java.lang.UnsupportedOperationException: Schema for type org.apache.spark.sql.Column is not supported
I also tried using a Row as the input parameter, but then failed again when trying to apply the built-in SQL functions.
There is no need to use a UDF here. explode, split and most other functions from org.apache.spark.sql.functions already return an object of type Column.
def parseAndExplode(spalte: Column): Column = {
  explode(split(regexp_replace(to_json(spalte), "[\"{}]", ""), ","))
}
testDF.withColumn("udf_result", parseAndExplode('columnName)).show(false)
prints
+----------+----------+
|columnName|udf_result|
+----------+----------+
|[1, 2]    |2020-11:1 |
|[1, 2]    |2020-12:2 |
+----------+----------+
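
If you still want an actual UDF (the question mentions trying a Row as the input parameter), here is a hedged sketch of that variant. It relies on Spark passing a struct column to a Scala UDF as a Row that carries its schema; the helper name parseStruct is a placeholder of mine, and the explode has to stay outside the UDF since exploding cannot happen inside one:

import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.{col, explode, udf}

// Build "fieldName:value" strings from the struct fields inside the UDF,
// then explode the resulting array with the built-in explode function.
val parseStruct = udf((r: Row) =>
  r.schema.fieldNames.zip(r.toSeq).map { case (name, value) => s"$name:$value" })

testDF.select(explode(parseStruct(col("columnName"))).as("result")).show(false)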

Replace separator in Array[long] in the Spark dataframe

I'm reading a JSON file into a spark data frame in Scala. I have a JSON field like
"areaGlobalIdList":[2389,3,2,1,2147,2142,2518]
Spark automatically infers the datatype of this field as Array[long]. I tried concat_ws, but it seems to work only with array[string]. When I tried converting this to array[string], the output showed up as
scala> val cmrdd = sc.textFile("/user/nkthn/cm.json")
scala> val cmdf = sqlContext.read.json(cmrdd)
scala> val dfResults = cmdf.select($"areaGlobalIdList".cast(StringType)).withColumn("AREAGLOBALIDLIST", regexp_replace($"areaGlobalIdList" , ",", "." ))
scala> dfResults.show(20,false)
+------------------------------------------------------------------+
|AREAGLOBALIDLIST                                                  |
+------------------------------------------------------------------+
|org.apache.spark.sql.catalyst.expressions.UnsafeArrayData@6364b584|
+------------------------------------------------------------------+
I'm expecting the output to be
[2389.3.2.1.2147.2142.2518]
Any assistance is greatly appreciated.
Given the schema of the areaGlobalIdList column as
 |-- areaGlobalIdList: array (nullable = true)
 |    |-- element: long (containsNull = false)
You can achieve this with a simple udf function as
import org.apache.spark.sql.functions._
val concatWithDot = udf((array: collection.mutable.WrappedArray[Long]) => array.mkString("."))
df.withColumn("areaGlobalIdList", concatWithDot($"areaGlobalIdList")).show(false)
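
Alternatively, if you would rather avoid a UDF, here is a sketch using only built-in functions: cast the array<long> to array<string> first, since concat_ws accepts string arrays. This assumes Spark 1.5+, the spark-shell implicits for the $ syntax, and reuses the cmdf DataFrame from the question:

import org.apache.spark.sql.functions._

cmdf.withColumn("AREAGLOBALIDLIST",
    concat_ws(".", $"areaGlobalIdList".cast("array<string>")))
  .show(false)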

create a spark dataframe from a nested json file in scala [duplicate]

This question already has an answer here:
How to access sub-entities in JSON file?
(1 answer)
Closed 5 years ago.
I have a json file that looks like this
{
  "group" : {},
  "lang" : [
    [ 1, "scala", "functional" ],
    [ 2, "java", "object" ],
    [ 3, "py", "interpreted" ]
  ]
}
I tried to create a dataframe using
val path = "some/path/to/jsonFile.json"
val df = sqlContext.read.json(path)
df.show()
when I run this I get
df: org.apache.spark.sql.DataFrame = [_corrupt_record: string]
How do I create a df based on the contents of the "lang" key? I do not care about group{}; all I need is to pull the data out of "lang" and apply a case class like this (note the backticks around type, since it is a reserved word in Scala):
case class ProgLang(id: Int, lang: String, `type`: String)
I have read the post Reading JSON with Apache Spark - `corrupt_record` and understand that each record needs to be on a single line, but in my case I cannot change the file structure.
The JSON format is wrong. The json API of sqlContext reads it as a corrupt record. The correct form is
{"group":{},"lang":[[1,"scala","functional"],[2,"java","object"],[3,"py","interpreted"]]}
and supposing you have it in a file ("/home/test.json"), then you can use the following method to get the dataframe you want
import org.apache.spark.sql.functions._
import sqlContext.implicits._
val df = sqlContext.read.json("/home/test.json")
val df2 = df.withColumn("lang", explode($"lang"))
  .withColumn("id", $"lang"(0))
  .withColumn("langs", $"lang"(1))
  .withColumn("type", $"lang"(2))
  .drop("lang")
  .withColumnRenamed("langs", "lang")
df2.show(false)
You should have
+---+-----+-----------+
|id |lang |type       |
+---+-----+-----------+
|1  |scala|functional |
|2  |java |object     |
|3  |py   |interpreted|
+---+-----+-----------+
Updated
If you don't want to change your input json format as mentioned in your comment below, you can use wholeTextFiles to read the json file and parse it as below
import sqlContext.implicits._
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.IntegerType

val readJSON = sc.wholeTextFiles("/home/test.json")
  .map(x => x._2)
  .map(data => data.replaceAll("\n", ""))

val df = sqlContext.read.json(readJSON)
val df2 = df.withColumn("lang", explode($"lang"))
  .withColumn("id", $"lang"(0).cast(IntegerType))
  .withColumn("langs", $"lang"(1))
  .withColumn("type", $"lang"(2))
  .drop("lang")
  .withColumnRenamed("langs", "lang")

df2.show(false)
df2.printSchema
It should give you the dataframe as above and the schema as
root
 |-- id: integer (nullable = true)
 |-- lang: string (nullable = true)
 |-- type: string (nullable = true)
As of Spark 2.2 you can use multiLine option to deal with the case of multi-line JSONs.
scala> spark.read.option("multiLine", true).json("jsonFile.json").printSchema
root
 |-- lang: array (nullable = true)
 |    |-- element: array (containsNull = true)
 |    |    |-- element: string (containsNull = true)
Before Spark 2.2 see How to access sub-entities in JSON file? or Read multiline JSON in Apache Spark.
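
To go one step further and apply the asker's ProgLang case class, here is a sketch assuming Spark 2.2+ (a spark session, the multiLine option, and the spark-shell implicits); the column indexing mirrors the explode approach above:

import org.apache.spark.sql.functions.explode
import spark.implicits._

case class ProgLang(id: Int, lang: String, `type`: String)

val langDS = spark.read.option("multiLine", true).json("jsonFile.json")
  .select(explode($"lang").as("lang"))   // one row per inner array
  .select(
    $"lang"(0).cast("int").as("id"),
    $"lang"(1).as("lang"),
    $"lang"(2).as("type"))
  .as[ProgLang]
langDS.show(false)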

RDD[Array[String]] to Dataframe

I am new to Spark and Hive, and my goal is to load a delimited file (let's say a CSV) into a Hive table. After a bit of reading I found out that the path to load the data into Hive is csv -> dataframe -> Hive. (Please correct me if I am wrong.)
CSV:
1,Alex,70000,Columbus
2,Ryan,80000,New York
3,Johny,90000,Banglore
4,Cook, 65000,Glasgow
5,Starc, 70000,Aus
I read the csv file by using the below command:
val csv =sc.textFile("employee_data.txt").map(line => line.split(",").map(elem => elem.trim))
csv: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[29] at map at <console>:39
Now I am trying to convert this RDD to a Dataframe, using the below code:
scala> val df = csv.map { case Array(s0, s1, s2, s3) => employee(s0, s1, s2, s3) }.toDF()
df: org.apache.spark.sql.DataFrame = [eid: string, name: string, salary: string, destination: string]
employee is a case class and I am using it as a schema definition.
case class employee(eid: String, name: String, salary: String, destination: String)
However, when I do df.show I get the below error:
org.apache.spark.SparkException: Job aborted due to stage failure:
Task 0 in stage 10.0 failed 4 times, most recent failure: Lost task
0.3 in stage 10.0 (TID 22, user.hostname): scala.MatchError: [Ljava.lang.String;@88ba3cb (of class
[Ljava.lang.String;)
I was expecting a dataframe as output. I think I am getting this error because the values in the RDD are stored in [Ljava.lang.String;@88ba3cb form and I need to use mkString to get the actual values, but I am not able to find out how to do it. I appreciate your time.
If you fix your case class then it should work:
scala> case class employee(eid: String, name: String, salary: String, destination: String)
defined class employee
scala> val txtRDD = sc.textFile("data.txt").map(line => line.split(",").map(_.trim))
txtRDD: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[30] at map at <console>:24
scala> txtRDD.map{case Array(s0, s1, s2, s3) => employee(s0, s1, s2, s3)}.toDF.show
+---+-----+------+-----------+
|eid| name|salary|destination|
+---+-----+------+-----------+
|  1| Alex| 70000|   Columbus|
|  2| Ryan| 80000|   New York|
|  3|Johny| 90000|   Banglore|
|  4| Cook| 65000|    Glasgow|
|  5|Starc| 70000|        Aus|
+---+-----+------+-----------+
Otherwise you could convert the String to an Int:
scala> case class employee(eid: Int, name: String, salary: String, destination: String)
defined class employee
scala> val df = txtRDD.map{case Array(s0, s1, s2, s3) => employee(s0.toInt, s1, s2, s3)}.toDF
df: org.apache.spark.sql.DataFrame = [eid: int, name: string ... 2 more fields]
scala> df.show
+---+-----+------+-----------+
|eid| name|salary|destination|
+---+-----+------+-----------+
|  1| Alex| 70000|   Columbus|
|  2| Ryan| 80000|   New York|
|  3|Johny| 90000|   Banglore|
|  4| Cook| 65000|    Glasgow|
|  5|Starc| 70000|        Aus|
+---+-----+------+-----------+
However the best solution would be to use spark-csv (which would treat the salary as an Int as well).
Also note that the error was thrown when you ran df.show because everything was being lazily evaluated up until that point. df.show is an action which will cause all of the queued transformations to execute (see this article for more).
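
To make the lazy-evaluation point concrete, here is a small sketch reusing the question's csv RDD and employee case class (assuming the same spark-shell session and its implicits):

// Transformations like map are lazy: building the pipeline queues work but runs nothing.
val mapped = csv.map { case Array(s0, s1, s2, s3) => employee(s0, s1, s2, s3) } // no job, no error yet
mapped.toDF().show() // an action: the queued transformations execute here, so any MatchError surfaces now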
Use map on the array elements, not on the array:
val csv = sc.textFile("employee_data.txt")
  .map(line => line
    .split(",")
    .map(e => e.trim)
  )
val df = csv.map { case Array(s0, s1, s2, s3) => employee(s0, s1, s2, s3) }.toDF()
But why are you reading the CSV and then converting the RDD to a DF? Spark 1.5 can already read CSV via the spark-csv package:
val df = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "false")
  .option("inferSchema", "true")
  .option("delimiter", ",")
  .load("employee_data.txt")
As you said in your comment, your case class employee (which by convention should be named Employee) receives an Int as the first argument of its constructor, but you are passing a String. Thus, you should either convert it to an Int before instantiating, or modify your case class so that eid is a String.

Read HBase in Scala - it.nerdammer

I would like to read HBase data in Spark streaming code, for lookups and further enhancement of the streaming data. I am using spark-hbase-connector_2.10-1.0.3.jar.
In my code the following lines are successful
val docRdd = sc.hbaseTable[(Option[String], Option[String])]("hbase_customer_profile")
  .select("id", "gender")
  .inColumnFamily("data")
docRdd.count returns the right count.
docRdd is of type
HBaseReaderBuilder(org.apache.spark.SparkContext@3a49e5,hbase_customer_profile,Some(data),WrappedArray(id,
gender),None,None,List())
How can I read all the rows in the id and gender columns? Also, how can I convert docRdd into a data frame so that SparkSQL can be used?
You can read all rows from the RDD using
docRdd.collect().foreach(println)
To convert the RDD to a DataFrame you could define a case class:
case class Customer(rowKey: String, id: Option[String], gender: Option[String])
I have added the row key to the case class; that's not strictly necessary, so if you don't need it, you can omit it.
Then map over the RDD:
// Row key, id, gender
type Record = (String, Option[String], Option[String])

val rdd = sc.hbaseTable[Record]("customers")
  .select("id", "gender")
  .inColumnFamily("data")
  .map(r => Customer(r._1, r._2, r._3))
and then - based on the case class - convert the RDD to a DataFrame
import sqlContext.implicits._
val df = rdd.toDF()
df.show()
df.printSchema()
The output from spark-shell looks like this:
scala> df.show()
+---------+----+------+
|   rowKey|  id|gender|
+---------+----+------+
|customer1|   1|  null|
|customer2|null|     f|
|customer3|   3|     m|
+---------+----+------+
scala> df.printSchema()
root
 |-- rowKey: string (nullable = true)
 |-- id: string (nullable = true)
 |-- gender: string (nullable = true)
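
To answer the SparkSQL part of the question: once the DataFrame exists you can register it and query it with SQL. A minimal sketch, assuming the Spark 1.x sqlContext used above (on Spark 2.x you would call createOrReplaceTempView instead of registerTempTable):

// Register the DataFrame so it can be queried with SparkSQL.
df.registerTempTable("customers")
sqlContext.sql("SELECT rowKey, gender FROM customers WHERE id IS NOT NULL").show()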