HBase Table Retrieval Data - scala

If I am trying to retrieve data from an HBase table using this code:
val get = new Get(Bytes.toBytes(extracted.user.rowKey.toString))
val res = table.get(get)
I am not sure whether the val res = table.get(get) line will return a result, since a row with the row key extracted.socialUser.socialUserConnectionId.toString passed to the Get constructor may not exist in the HBase table.
I am trying something like this:
val get = new Get(Bytes.toBytes(extracted.socialUser.socialUserConnectionId.toString))
val res = table.get(get)
if (!res) {
/* Create the row in the HBase table */
}
But the if statement gives me an error saying: Expression of type Result doesn't convert to type Boolean. Is there a way I can solve this problem?

table.get(get) does not return a Boolean or an Option; it returns an org.apache.hadoop.hbase.client.Result, which is why !res does not compile.
A Result is returned even when the row does not exist; it is simply empty in that case. So check res.isEmpty instead of !res:
val res = table.get(get)
if (res.isEmpty) {
/* Create the row in the HBase table */
}
Alternatively, if you only need to know whether the row exists, the Table API also offers table.exists(get), which returns a Boolean directly.
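A minimal sketch of the create-if-missing pattern, assuming a reasonably recent HBase client (addColumn is the 1.x+ name), an already opened table handle, and placeholder column family, qualifier, and value names:
import org.apache.hadoop.hbase.client.{Get, Put}
import org.apache.hadoop.hbase.util.Bytes

val rowKey = Bytes.toBytes(extracted.socialUser.socialUserConnectionId.toString)
val res = table.get(new Get(rowKey))
if (res.isEmpty) {
  // row does not exist yet: create it (family/qualifier/value below are placeholders)
  val put = new Put(rowKey)
  put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("qualifier"), Bytes.toBytes("value"))
  table.put(put)
}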

Related

Matching Column name from Csv file in spark scala

I want to take the headers (column names) from my csv file and match them against my existing header.
I am using the code below:
val cc = sparksession.read.csv(filepath).take(1)
It's giving me a value like:
Array([id,name,salary])
and I have created one more static schema, which gives me a value like this:
val ss=Array("id","name","salary")
and then I'm trying to compare the column names using an if condition:
if(cc==ss){
println("matched")
} else{
println("not matched")
}
I guess it always goes to the else part because of the [] and () mismatch. Is there any other way to compare these values without considering [] and ()?
First, for convenience, set the header option to true when reading the file:
val df = sparksession.read.option("header", true).csv(filepath)
Get the column names and define the expected column names:
val cc = df.columns
val ss = Array("id", "name", "salary")
To check if the two match (not considering the ordering):
if (cc.toSet == ss.toSet) {
println("matched")
} else {
println("not matched")
}
If the order is relevant, then the condition can be done as follows (you can't use Array here but Seq works):
cc.toSeq == ss.toSeq
or use a deep array comparison:
cc.deep == ss.deep
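Putting it all together, a small end-to-end sketch (the file path is just a placeholder):
val df = sparksession.read.option("header", true).csv("/path/to/file.csv")
val cc = df.columns
val ss = Array("id", "name", "salary")

// order-insensitive comparison
if (cc.toSet == ss.toSet) println("matched") else println("not matched")

// order-sensitive comparison
if (cc.toSeq == ss.toSeq) println("matched") else println("not matched")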
First of all, I think you are trying to compare an Array[org.apache.spark.sql.Row] with an Array[String]. I believe you should change how you load the headers to something like: val cc = spark.read.format("csv").option("header", "true").load(fileName).columns.toArray.
Then you could compare using cc.deep == ss.deep.
The code below worked for me.
val cc = spark.read.csv("filepath").take(1)(0).toString
The above code gave the output as the String [id,name,salary].
I created one static schema as
val ss = "[id,name,salary]"
and then wrote the if/else conditions.

Spark Scala Dataset Validations using UDF and its Performance

I'm new to Spark Scala. I have implemented a solution for Dataset validation across multiple columns using a UDF rather than going through the individual columns in a for loop. But I don't know how this works faster, and I have to explain why it is the better solution.
The columns for data validation will be received at run time, so we cannot hard-code the column names. Also, the comments column needs to be updated with the column name when a column value fails validation.
Old Code,
def doValidate(data: Dataset[Row], columnArray: Array[String], validValueArrays: Array[String]): Dataset[Row] = {
var ValidDF: Dataset[Row] = data
var i:Int = 0
for (s <- columnArray) {
var list = validValueArrays(i).split(",")
ValidDF = ValidDF.withColumn("comments",when(ValidDF.col(s).isin(list: _*),concat(lit(col("comments")),lit(" Error: Invalid Records in: ") ,lit(s))).otherwise(col("comments")))
i = i + 1
}
return ValidDF;
}
New Code,
def validateColumnValues(data: Dataset[Row], columnArray: Array[String], validValueArrays: Array[String]): Dataset[Row] = {
var ValidDF: Dataset[Row] = data
var checkValues = udf((row: Row, comment: String) => {
var newComment = comment
for (s: Int <- 0 to row.length-1) {
var value = row.get(s)
var list = validValueArrays(s).split(",")
if(!list.contains(value))
{
newComment = newComment + " Error:Invalid Records in: " + columnArray(s) +";"
}
}
newComment
});
ValidDF = ValidDF.withColumn("comments",checkValues(struct(columnArray.head, columnArray.tail: _*),col("comments")))
return ValidDF;
}
columnArray --> will have the list of columns
validValueArrays --> will have the valid values corresponding to each position in columnArray. Multiple valid values will be comma separated.
I want to know which one is better, or whether there is any other better approach. When I tested, the new code looked better. Also, what is the difference between these two pieces of logic? I have read that a UDF is a black box for Spark; will the UDF affect performance in this case?
I needed to correct some closing brackets before running it: one '}' has to be removed where you return ValidDF. I still get a runtime analysis error.
It is better to avoid UDFs, as a UDF implies deserializing the data to process it in plain Scala and then reserializing it. However, if your requirement cannot be achieved using built-in SQL functions, then you have to go for a UDF, but you must make sure you review the Spark UI for the performance and the execution plan.
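As a rough sketch of the built-in-function route, the per-column loop from the old code can be written as a foldLeft over the columns, with no var and no UDF. It keeps the same inputs as the question, but note that it follows the new code's rule and flags values that are not in the valid list (the old code's isin condition is the opposite), so adjust the condition to whichever rule is actually intended:
import org.apache.spark.sql.{Dataset, Row}
import org.apache.spark.sql.functions.{col, concat, lit, when}

def validateWithBuiltIns(data: Dataset[Row], columnArray: Array[String], validValueArrays: Array[String]): Dataset[Row] =
  columnArray.zipWithIndex.foldLeft(data) { case (df, (c, i)) =>
    val validValues = validValueArrays(i).split(",")
    // append an error message to the comments column when the value of column c is not a valid value
    df.withColumn("comments",
      when(!col(c).isin(validValues: _*),
        concat(col("comments"), lit(" Error: Invalid Records in: "), lit(c)))
        .otherwise(col("comments")))
  }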

Scala Spark not returning value outside loop [duplicate]

I am new to Scala and Spark and would like some help in understanding why the below code isn't producing my desired outcome.
I am comparing two tables
My desired output schema is:
case class DiscrepancyData(fieldKey:String, fieldName:String, val1:String, val2:String, valExpected:String)
When I run the code below step by step manually, I actually end up with my desired outcome, which is a List[DiscrepancyData] completely populated with my desired output. However, I must be missing something in the code below, because it returns an empty list (before this code gets called there is other code involved in reading tables from Hive, mapping, grouping, filtering, etc.):
val compareCols = Set(year, nominal, adjusted_for_inflation, average_private_nonsupervisory_wage)
val key = "year"
def compare(table:RDD[(String, Iterable[Row])]): List[DiscrepancyData] = {
var discs: ListBuffer[DiscrepancyData] = ListBuffer()
def compareFields(fieldOne:String, fieldTwo:String, colName:String, row1:Row, row2:Row): DiscrepancyData = {
if (fieldOne != fieldTwo){
DiscrepancyData(
row1.getAs(key).toString, //fieldKey
colName, //fieldName
row1.getAs(colName).toString, //table1Value
row2.getAs(colName).toString, //table2Value
row2.getAs(colName).toString) //expectedValue
}
else null
}
def comparison() {
for(row <- table){
var elem1 = row._2.head //gets the first element in the iterable
var elem2 = row._2.tail.head //gets the second element in the iterable
for(col <- compareCols){
var value1 = elem1.getAs(col).toString
var value2 = elem2.getAs(col).toString
var disc = compareFields(value1, value2, col, elem1, elem2)
if (disc != null) discs += disc
}
}
}
comparison()
discs.toList
}
I'm calling the above function as such:
var outcome = compare(groupedFiltered)
Here is the data in groupedFiltered:
(1991,CompactBuffer([1991,7.14,5.72,39%], [1991,4.14,5.72,39%]))
(1997,CompactBuffer([1997,4.88,5.86,39%], [1997,3.88,5.86,39%]))
(1999,CompactBuffer([1999,5.15,5.96,39%], [1999,5.15,5.97,38%]))
(1947,CompactBuffer([1947,0.9,2.94,35%], [1947,0.4,2.94,35%]))
(1980,CompactBuffer([1980,3.1,6.88,45%], [1980,3.1,6.88,48%]))
(1981,CompactBuffer([1981,3.15,6.8,45%], [1981,3.35,6.8,45%]))
The table schema for groupedFiltered:
(year String,
nominal Double,
adjusted_for_inflation Double,
average_provate_nonsupervisory_wage String)
Spark is a distributed computing engine. In addition to "what the code is doing", as in classic single-node computing, with Spark we also need to consider "where the code is running".
Let's inspect a simplified version of the expression above:
val records: RDD[List[String]] = ??? //whatever data
var list: mutable.ListBuffer[String] = mutable.ListBuffer()
for {record <- records
entry <- record }
{ list += entry }
The Scala for-comprehension makes this expression look like a natural local computation, but in reality the RDD operations are serialized and "shipped" to executors, where the inner operation will be executed locally. We can rewrite the above like this:
records.foreach{ record => //RDD.foreach => serializes closure and executes remotely
record.foreach{entry => //record.foreach => local operation on the record collection
list += entry // this mutable list object is updated in each executor but never sent back to the driver. All updates are lost
}
}
Mutable objects are in general a no-go in distributed computing. Imagine that one executor adds a record and another one removes it, what's the correct result? Or that each executor comes to a different value, which is the right one?
To implement the operation above, we need to transform the data into our desired result.
I'd start by applying another best practice: do not use null as a return value. I also moved the row ops into the function. Let's rewrite the comparison operation with this in mind:
def compareFields(colName:String, row1:Row, row2:Row): Option[DiscrepancyData] = {
val key = "year"
val v1 = row1.getAs(colName).toString
val v2 = row2.getAs(colName).toString
if (v1 != v2){
Some(DiscrepancyData(
row1.getAs(key).toString, //fieldKey
colName, //fieldName
v1, //table1Value
v2, //table2Value
v2) //expectedValue
)
} else None
}
Now, we can rewrite the computation of discrepancies as a transformation of the initial table data:
val discrepancies = table.flatMap{ case (str, rows) =>
val row1 = rows.head
val row2 = rows.tail.head
compareCols.flatMap{ col => compareFields(col, row1, row2) }
}
We can also use the for-comprehension notation, now that we understand where things are running:
val discrepancies = for {
(str, rows) <- table
col <- compareCols
dis <- compareFields(col, rows.head, rows.tail.head)
} yield dis
Note that discrepancies is of type RDD[DiscrepancyData]. If we want to get the actual values to the driver we need to:
val materializedDiscrepancies = discrepancies.collect()
Iterating through an RDD and updating a mutable structure defined outside the loop is a Spark anti-pattern.
Imagine this RDD being spread over 200 machines. How can these machines be updating the same Buffer? They cannot. Each JVM will be seeing its own discs: ListBuffer[DiscrepancyData]. At the end, your result will not be what you expect.
To conclude, this is perfectly valid (though not idiomatic) Scala code, but it is not valid Spark code. If you replace the RDD with an Array it will work as expected.
Try to have a more functional implementation along these lines:
val finalRDD: RDD[DiscrepancyData] = table.map(???).filter(???)
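For instance, a minimal sketch along those lines, reusing the Option-returning compareFields shown above and assuming each group holds exactly two rows:
val finalRDD: RDD[DiscrepancyData] = table.flatMap { case (_, rows) =>
  val row1 = rows.head
  val row2 = rows.tail.head
  // keep only the columns whose values differ between the two rows
  compareCols.toList.flatMap(col => compareFields(col, row1, row2))
}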

Array of Sequences in Scala

I'm trying to read distinct values column-wise in a data frame and store them in an Array of sequences.
def getColumnDistinctValues(df: DataFrame, colNames:String): Unit = {
val cols: Array[String] = colNames.split(',')
cols.foreach(println) // print column names
var colDistValues: Array[Seq[Any]] = null
for (i <- 0 until cols.length) {
colDistValues(i) = df.select(cols(i)).distinct.map(x => x.get(0)).collect // read distinct values from each column
}
}
The assignment to colDistValues(i) doesn't work and always results in a NullPointerException. What is the correct syntax to assign it the distinct values for each column?
Regards
You're trying to assign to the ith index of a null reference (which you assigned yourself), so of course you'll get a NullPointerException. You don't need to initialize an Array[T] beforehand; let the returned collection do that for you:
val colDistValues: Array[Array[Any]] =
cols.map(c => df.select(c).distinct.map(x => x.get(0)).collect)
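If the Dataset map above complains about a missing encoder (it maps rows to Any), a variant that collects each column first and then extracts the values avoids that. A self-contained sketch, with the column list as a placeholder:
import org.apache.spark.sql.DataFrame

def getColumnDistinctValues(df: DataFrame, colNames: String): Array[Seq[Any]] = {
  val cols = colNames.split(',')
  // collect the distinct single-column rows, then pull the one value out of each row
  cols.map(c => df.select(c).distinct.collect.map(_.get(0)).toSeq)
}

// usage: val distinctPerColumn = getColumnDistinctValues(df, "id,name,salary")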
You are initialising colDistValues to null.
Replace
var colDistValues: Array[Seq[Any]] = null
with
var colDistValues: Array[Seq[Any]] = Array.ofDim[Seq[Any]](cols.length)

Scala Retrieve aggregate pattern inside flat map

I have a scenario where I'm trying to aggregate the results of a few methods.
What I'm trying to do: I'm getting a Future of an object that contains a list of objects.
Then I'm flat-mapping the future to a list of strings,
and then I would like to iterate over this list
and invoke a couple of methods, if possible in an async way, and then merge the results into one object. I have to wait until all the methods finish
and then send the result to a database.
This is where I'm stuck ...
UPDATE
I edited the method as suggested in the comments; now I'm getting a type mismatch error:
expected List[postMd.PostMD]....
def getComplatePost(url: String): Unit = {
val postMd = new PostMetaData
val com = new Comments
val post = new Post
val fullPost = new CompletePost
val postMdList: Future[List[postMd.PostMD]] = postMd.getPostMetaData(url, "396697410351933") // get the list of id
postMdList.flatMap(x => {
val fromid = x.map(_.fromID) //extract the Future to a list of string
for {
id <- fromid
val c = com.getComments(id)
val p = post.getPost(id)
}yield (c,p)
})
}
thanks
miki
You can use a 'for - yield' in this case. 'for' executes the yield operation after each of its statements completes.
This holds because 'for' is syntactic sugar that gets mapped to nested flatMap, map or foreach calls, depending on the operations being performed.
Below is a modified snippet from your code:
for {
id <- fromid
//invoke methods
c = com.getComments(id)
p = post.getPost(id)
k = post.getPostLikes(id)
} // merge results and send data to DB
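To actually wait for all the calls and merge them before writing to the database, one possible sketch is to build one combined Future per id and then sequence them. This assumes getComments, getPost and getPostLikes each return a Future; the merge and the database write are left as comments:
import scala.concurrent.{ExecutionContext, Future}
import ExecutionContext.Implicits.global

postMdList.flatMap { postMds =>
  val fromIds = postMds.map(_.fromID)
  // one Future per id, combining the comments, the post and the likes for that id
  val perId = fromIds.map { id =>
    for {
      c <- com.getComments(id)
      p <- post.getPost(id)
      k <- post.getPostLikes(id)
    } yield (c, p, k)
  }
  // wait for all of them to complete
  Future.sequence(perId).map { results =>
    // merge results and send data to DB here
    results
  }
}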