I am trying to convert a DataFrame of multiple case classes to an RDD of these case classes. I can't find any solution. This WrappedArray has driven me crazy :P
For example, assume I have the following:
case class randomClass(a:String,b: Double)
case class randomClass2(a:String,b: Seq[randomClass])
case class randomClass3(a:String,b:String)
val anRDD = sc.parallelize(Seq(
(randomClass2("a",Seq(randomClass("a1",1.1),randomClass("a2",1.1))),randomClass3("aa","aaa")),
(randomClass2("b",Seq(randomClass("b1",1.2),randomClass("b2",1.2))),randomClass3("bb","bbb")),
(randomClass2("c",Seq(randomClass("c1",3.2),randomClass("c2",1.2))),randomClass3("cc","Ccc"))))
val aDF = anRDD.toDF()
Given aDF, how can I get anRDD back?
I tried something like this just to get the second column, but it was giving an error:
aDF.map { case r:Row => r.getAs[randomClass3]("_2")}
You can convert indirectly using Dataset[randomClass3]:
aDF.select($"_2.*").as[randomClass3].rdd
Spark DataFrame / Dataset[Row] represents data as Row objects, using the mapping described in the Spark SQL, DataFrames and Datasets Guide. Any call to getAs should use this mapping.
For the second column, which is struct<a: string, b: string>, it would be a Row as well:
aDF.rdd.map { _.getAs[Row]("_2") }
As commented by Tzach Zohar, to get back a full RDD you'll need:
aDF.as[(randomClass2, randomClass3)].rdd
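As a quick sanity check (assuming the aDF built above, the case classes defined at the top level, and spark.implicits._ in scope), the typed conversion round-trips back to an RDD of the original tuples:
import spark.implicits._

val recovered = aDF.as[(randomClass2, randomClass3)].rdd   // RDD[(randomClass2, randomClass3)]
recovered.collect().foreach(println)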
I don't know the Scala API well, but have you considered the rdd value?
Maybe something like:
aDF.rdd.map { case r: Row => r.getAs[randomClass3]("_2") }
I am a newbie in Scala. I will try to be as clear as possible. I have the following code:
case class Session (bf: Array[File])
case class File(s: s, a: Option[a], b: Option[b], c: Option[c])
case class s(s1:Int, s2:String)
case class a(a1:Int, a2:String)
case class b(b1:Int, b2:String)
case class c(c1:Int, c2:String)
val x = Session(...) // some values here, many session objects grouped in a dataset collection i.e. Dataset[Sessions]
I want to know how to create DataFrames from a Dataset[Session]; I do not know how to manipulate such a complex structure.
In particular, how do I create a DataFrame from a Dataset[Session] that contains only the custom object "a"?
Thank you
A Spark Dataset works much like a regular Scala collection. It has a toDF() operation to create a DataFrame from it. Now you just need to extract the right data out of it using a few transformations:
flatMap it into a Dataset of File
filter every File for a non-empty a
map every remaining File to a
call toDF() to create a DataFrame
In code this would be:
val ds: Dataset[Session] = ...
ds.flatMap(_.bf)
.filter(_.a.isDefined)
.map(_.a.get)
.toDF()
In Scala you can also combine the filter and map into a single collect with a partial function; Dataset itself doesn't offer that overload, but the underlying RDD does, which leads to the following code:
ds.flatMap(_.bf).rdd.collect { case File(_, Some(a), _, _) => a }.toDF()
Say I have a DataFrame which contains a column (called colA) which is a Seq of Rows. I want to append a new field to each record of colA. (The new field is associated with the former record, so I have to write a UDF.)
How should I write this UDF?
I have tried to write a UDF which takes colA as input and outputs a Seq[Row] where each record contains the new field. But the problem is the UDF cannot return Seq[Row]. The exception is 'Schema for type org.apache.spark.sql.Row is not supported'.
What should I do?
The udf that I wrote:
val convert = udf[Seq[Row], Seq[Row]](blablabla...)
And the exception is java.lang.UnsupportedOperationException: Schema for type org.apache.spark.sql.Row is not supported
Since Spark 2.0 you can create UDFs which return Row / Seq[Row], but you must provide the schema for the return type, e.g. if you work with an Array of Doubles:
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.udf
import org.apache.spark.sql.types.{ArrayType, DoubleType}

val schema = ArrayType(DoubleType)

val myUDF = udf((s: Seq[Row]) => {
  s // just pass data without modification
}, schema)
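A hypothetical usage of such a UDF (the DataFrame df and the column name colA are assumptions, not part of the answer above):
import org.apache.spark.sql.functions.col

// replace the array column with whatever the UDF returns for it
val result = df.withColumn("colA", myUDF(col("colA")))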
But I can't really imagine where this is useful; I would rather return tuples or case classes (or a Seq thereof) from the UDF.
EDIT: It could be useful if your row contains more than 22 fields (the limit for tuples/case classes).
This is an old question; I just wanted to update it according to the newer versions of Spark.
Since Spark 3.0.0, the method that @Raphael Roth mentioned is deprecated. Hence, you might get an AnalysisException. The reason is that the input closure using this method doesn't have type checking, and the behavior might differ from what we expect in SQL when it comes to null values.
If you really know what you're doing, you need to set the spark.sql.legacy.allowUntypedScalaUDF configuration to true.
Another solution is to use a case class instead of a schema. For example:
case class Foo(field1: String, field2: String)
val convertFunction: Seq[Row] => Seq[Foo] = input => {
input.map {
x => // do something with x and convert to Foo
}
}
val myUdf = udf(convertFunction)
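For illustration, a filled-in (hypothetical) version of the conversion and its usage, assuming each input Row carries two string fields and the target column is called colA (both assumptions):
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.{col, udf}

case class Foo(field1: String, field2: String)

// hypothetical conversion: build a Foo from the first two string fields of each Row
val convertFunction: Seq[Row] => Seq[Foo] = input =>
  input.map(x => Foo(x.getAs[String](0), x.getAs[String](1)))

val myUdf = udf(convertFunction)

val result = df.withColumn("colA", myUdf(col("colA")))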
I have a dataframe with a complex column whose datatype is an ArrayType of structs. To transform this dataframe I have created a UDF which consumes this column via an Array[case class] parameter. The main bottleneck here is that when I create the case class according to the StructType, the StructField name contains special characters, for example "##field". So I give the case class field the same name (using backticks, e.g. `##field`) and attach it to the UDF parameter. After being interpreted, the Spark UDF definition changes the name of the case class field to "$hash$hashfield". When performing the transform on this dataframe it fails because of this mismatch. Please help ...
Due to JVM limitations, Scala stores identifiers in encoded form, and currently Spark can't map ##field to $hash$hashfield.
One possible solution is to extract the fields manually from the raw Row (but you need to know the order of the fields in df; you can use df.schema for that):
// assuming a case class with a plain field name, e.g. case class Foo(a: String)
val myUdf = udf { (struct: Row) =>
  // either pattern match the struct ...
  struct match {
    case Row(a: String) => Foo(a)
  }
  // ... or extract the values from the Row by position, e.g.
  // val `##a` = struct.getAs[String](0)
}
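A hedged sketch of wiring this up, assuming the struct column is named data and ##field is its first field (both names are assumptions):
import org.apache.spark.sql.functions.col

df.printSchema()   // check the order of the fields inside the struct first

val result = df.withColumn("extracted", myUdf(col("data")))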
I am reading in a file that has many spaces and need to filter out the spaces. Afterwards we need to convert it to a dataframe. Example input below.
2017123 ¦ ¦10¦running¦00000¦111¦-EXAMPLE
My solution to this was the following function which parses out all spaces and trims the file.
def truncateRDD(fileName : String): RDD[String] = {
val example = sc.textFile(fileName)
example.map(lines => lines.replaceAll("""[\t\p{Zs}]+""", ""))
}
However, I am not sure how to get it into a dataframe. sc.textFile returns an RDD[String]. I tried the case class way, but the issue is we have an 800-field schema, and a case class cannot go beyond 22.
I was thinking of somehow converting RDD[String] to RDD[Row] so I can use the createDataFrame function.
val DF = spark.createDataFrame(rowRDD, schema)
Any suggestions on how to do this?
First, split/parse your strings into fields.
rdd.map(line => parse(line)), where parse is some parsing function. It could be as simple as split, but you may want something more robust. This will get you an RDD[Array[String]] or similar.
You can then convert to an RDD[Row] with rdd.map(a => Row.fromSeq(a)).
From there you can convert to a DataFrame using sqlContext.createDataFrame(rdd, schema), where rdd is your RDD[Row] and schema is your StructType schema.
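Putting those steps together, a minimal sketch (the ¦ delimiter, the all-string types, and the generated column names are assumptions; adapt the parsing and schema to your real layout):
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// build an 800-field, all-string schema programmatically (names are hypothetical)
val schema = StructType((1 to 800).map(i => StructField(s"col$i", StringType, nullable = true)))

val rowRDD = truncateRDD("yourfilename")
  .map(_.split("¦", -1))              // parse; -1 keeps trailing empty fields
  .map(fields => Row.fromSeq(fields))

val DF = spark.createDataFrame(rowRDD, schema)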
In your case, a simple way (splitting each cleaned line on the ¦ delimiter before building the Row):
val RowOfRDD = truncateRDD("yourfilename").map(r => Row.fromSeq(r.split("¦", -1)))
How do you solve the product arity issue if you are using Scala 2.10?
However, I am not sure how to get it into a dataframe. sc.textFile
returns a RDD[String]. I tried the case class way but the issue is we
have 800 field schema, case class cannot go beyond 22.
Yes, there are some limitations like product arity, but we can overcome them...
You can do it like the example below for Scala versions < 2.11:
Prepare a class which extends Product and overrides the following methods (a minimal sketch follows the references below):
productArity(): Int: returns the number of attributes; in our case, it's 33.
productElement(n: Int): Any: given an index, returns the attribute; as protection, there is also a default case, which throws an IndexOutOfBoundsException.
canEqual(that: Any): Boolean: the last of the three functions; it serves as a boundary condition when an equality check is done against the class.
For an example implementation, you can refer to this Student case class, which has 33 fields in it.
An example student dataset description is here.
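For reference, a minimal hedged sketch of the idea (not the linked 33-field Student example; only three hypothetical fields are shown, and a regular class is used because a case class cannot exceed 22 fields in Scala 2.10):
class WideRecord(val f0: String, val f1: String, val f2: String /* ... up to field 33 */)
  extends Product with Serializable {

  // productArity: the number of attributes (33 in the linked example, 3 in this sketch)
  override def productArity: Int = 3

  // productElement: return the attribute at the given index; the default case
  // throws an IndexOutOfBoundsException
  override def productElement(n: Int): Any = n match {
    case 0 => f0
    case 1 => f1
    case 2 => f2
    case _ => throw new IndexOutOfBoundsException(n.toString)
  }

  // canEqual: boundary condition for equality checks against this class
  override def canEqual(that: Any): Boolean = that.isInstanceOf[WideRecord]
}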
Generally, what I am trying to achieve:
I think I would like to remove the case classes from the RDD but keep the RDD, and I am unsure how to do that.
Specifically, what I am trying to do:
I am trying to turn each row of an RDD into JSON. But the JSON can only be a list of key:value pairs. When I turn it into JSON in its current form I get
{"CCABINDeviceDataPartial":
{"Tran_Id":"1234weqr",
"TranData":{"Processor_Id":"qqq","Merchant_Id":"1234"},
"BillingAndShippingData":{"Billing_City":"MyCity","Billing_State":"State","Billing_Zip":"000000","Billing_Country":"MexiCanada","Shipping_City":"MyCity","Shipping_State":"State","Shipping_Zip":"000000","Shipping_Country":"USico"}
...
}
}
What I want is
{"Tran_Id":"1234weqr",
"Processor_Id":"qqq",
"Merchant_Id":"1234",
"Billing_City":"MyCity",
"Billing_State":"State",
"Billing_Zip":"000000",
"Billing_Country":"MexiCanada",
"Shipping_City":"MyCity",
"Shipping_State":"State",
"Shipping_Zip":"000000",
"Shipping_Country":"USico"
...
}
I have what I call a parent case class that looks like this:
case class CCABINDeviceDataPartial(Tran_Id: String, TranData: TranData,
BillingAndShippingData: BillingAndShippingData, AcquirerData: AcquirerData,
TimingData: TimingData, RBD_Tran_Id: String, DeviceData1: DeviceData1, ACS_Time: Long,
Payfone_Alias: String, TranStatusData: TranStatusData, Centurion_BIN_Class: String,
BankData: BankData, DeviceData2: DeviceData2, ACS_Host: String,
DeviceData3: DeviceData3, txn_status: String, Device_Type: String,
TranOutcome: TranOutcome, AcsData: AcsData, DateTimeData: DateTimeData)
Now TranData, BillingAndShippingData, AcquirerData, and some others are also case classes. I presume this was done to get around the 22-element limit on case classes. If you "unroll" everything there are 76 elements in total.
My only working idea is to break the case classes out into dataframes and then join them together one at a time. This seems a bit onerous, and I am hoping there is a way to just "flatten" the RDD. I have looked at the API documentation for RDDs but don't see anything obvious.
Additional Notes
This is how I currently convert things to JSON.
First I convert the RDD to a DataFrame with
def rddDistinctToTable(txnData: RDD[CCABINDeviceDataPartial], instanceSpark:SparkService,
tableName: String): DataFrame = {
import instanceSpark.sql.implicits._
val fullTxns = txnData.filter(x => x.Tran_Id != "0")
val uniqueTxns = rddToDataFrameHolder(fullTxns.distinct()).toDF()
uniqueTxns.registerTempTable(tableName)
return uniqueTxns
}
Then I convert to JSON and write to Elasticsearch with
sparkStringJsonRDDFunctions(uniqueTxns.toJSON)
.saveJsonToEs(instanceSpark.sc.getConf.get("es.resource"))
Quick and simple solution:
convert RDD to DataFrame
use select to flatten records (you can use dots to access nested objects like df.select("somecolumn.*", "another.nested.column"))
use write.json to write as JSON
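A hedged sketch of those three steps, reusing the uniqueTxns DataFrame from the question (only a few of the nested structs are spelled out, and the output path is a placeholder):
val flattened = uniqueTxns.select(
  "Tran_Id",
  "TranData.*",                  // Processor_Id, Merchant_Id, ...
  "BillingAndShippingData.*",    // Billing_City, Billing_State, ...
  "RBD_Tran_Id",
  "ACS_Time"
  // ... remaining top-level fields and nested structs
)

// each row now serializes as a single flat JSON object
flattened.write.json("/some/output/path")

// or keep the existing Elasticsearch sink and feed it the flattened rows:
// sparkStringJsonRDDFunctions(flattened.toJSON)
//   .saveJsonToEs(instanceSpark.sc.getConf.get("es.resource"))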