Can join operations be pushed down to the database when using Spark SQL? - Scala

I am not an expert on the Spark SQL API, nor on the underlying RDD one.
But, knowing about the Catalyst optimization engine, I would expect Spark to try to minimize the in-memory effort.
This is my situation:
I have, let's say, two tables:
TABLE GenericOperation (ID, CommonFields...)
TABLE SpecificOperation (OperationID, SpecificFields...)
They are both quite huge (~500M; not big data, but infeasible to hold entirely in memory in a standard application server).
That said, suppose I have to retrieve with Spark (as part of a larger use case) all the SpecificOperation instances that match some particular condition on fields belonging to GenericOperation.
This is the code that I am using:
// Load both tables through JDBC, then join and filter on the Spark side
val gOps = spark.read.jdbc(db.connection, "GenericOperation", db.properties)
val sOps = spark.read.jdbc(db.connection, "SpecificOperation", db.properties)
val joined = sOps.join(gOps).where("ID = OperationID")
joined.where("CommonField = 'SomeValue'").select("SpecificField").show()
The problem is, when it comes to running the above, I can see from SQL Profiler that Spark does not execute the join on the database, but rather retrieves all the OperationID values from SpecificOperation, and I assume it then runs the whole merge in memory. Since no filter is applicable on SpecificOperation, such a retrieval would bring far too much data into the end system.
Is it possible to write the above so that the join is pushed down directly to the DBMS?
Or does it depend on some magic Spark configuration I am not aware of?
Of course, I could simply hardcode the join as a subquery when retrieving, but that's not feasible in my case: statements have to be created at runtime starting from simple building blocks. Hence, I need to implement this starting from two spark.sql.DataFrame instances already built up.
As a side note, I am running this with Spark 2.3.0 for Scala 2.11, against a SQL Server 2016 database instance.

Is it possible to write the above so that the join is pushed down directly to the DBMS? Or does it depend on some magic Spark configuration I am not aware of?
Excluding statically generated queries (see: In Apache Spark 2.0.0, is it possible to fetch a query from an external database (rather than grab the whole table)?), Spark doesn't support join pushdown. Only predicates and column selection can be delegated to the source.
There is no magic configuration or code that could even support this type of process.
In general, if the server can handle the join, the data is usually not large enough to benefit from Spark.
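For completeness, this is roughly what the "statically generated query" workaround looks like: the whole join is handed to SQL Server as a subquery through the table argument of spark.read.jdbc, so only the filtered result crosses the wire. It is only a sketch reusing the table and column names from the question, and it is exactly the hardcoding the question wants to avoid.

// The DBMS executes the join and the filter; Spark only sees the result set.
val pushedDown =
  """(SELECT s.SpecificField
    |   FROM SpecificOperation s
    |   JOIN GenericOperation g ON g.ID = s.OperationID
    |   WHERE g.CommonField = 'SomeValue') AS joined""".stripMargin

val joined = spark.read.jdbc(db.connection, pushedDown, db.properties)
joined.show()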

Related

What is Catalyst Optimizer in Apache Spark (SQL)?

I want to know more about the Catalyst Optimizer in Apache PySpark (SQL). Is it possible to use the Catalyst Optimizer with a PySpark DataFrame?
DataFrames created using SQL can leverage the Spark Catalyst framework.
Using PySpark (assuming the variable spark is bound to a SparkSession), we could invoke SQL like
spark.sql(<sql>)
This would be analyzed and optimized, and the physical plans created, by the Catalyst framework.
If the DataFrame is hand-constructed, like spark.table(<table>).sort(<col>), Spark Catalyst does not come into play.
If we want to leverage the Catalyst framework for a DataFrame that is not backed by a metastore table, we can register it as a temp table and use SQL to fire queries.
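A minimal sketch of that temp-table approach (in Scala rather than PySpark; the table and column names are made up for illustration, and explain is used only to print the plans Catalyst produces):

// Hand-constructed DataFrame, registered as a temp view so it can be queried with SQL.
val df = spark.range(1000).withColumnRenamed("id", "col")
df.createOrReplaceTempView("my_table")

val result = spark.sql("SELECT col FROM my_table WHERE col > 10 ORDER BY col")
result.explain(true)  // prints the parsed, analyzed, optimized and physical plans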
The Catalyst Optimizer is Spark's internal SQL engine. Spark DataFrames use the Catalyst Optimizer under the hood to build a query plan and decide how the code should best be executed across the cluster to scale performance, etc. Instead of rambling/writing an essay on the specifics, here are some great reads. Enjoy!
https://databricks.com/glossary/catalyst-optimizer

Saving JDBC DB data as shared state in Spark

I have an MSSQL table as a data source and I would like to save some kind of processing offset in the form of a timestamp (it is one of the table's columns), so that the data can be processed from the latest offset onward. I would like to save it as some kind of shared state between Spark sessions. I have researched shared state in Spark sessions, but I did not find a way to store this offset there. So is it possible to use existing Spark constructs to perform this task?
As far as I know there is no official built-in feature for passing data between sessions in Spark. As alternatives, I would consider the following options/suggestions:
First, the offset column should be an indexed field in MSSQL so that it can be queried fast.
If there is already an in-memory system (e.g. Redis, Apache Ignite) installed and used by your project, I would store the offset there.
I wouldn't use a message queue system such as Kafka, because once you consume a message you would need to resend it, so that wouldn't make sense.
As a solution, I would prefer to save it in the filesystem or in Hive, even if it adds some overhead, since you will have only one value in that table. In the case of the filesystem, of course, the performance would be much better.
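A minimal sketch of the filesystem option, assuming a SparkSession named spark, an already-processed DataFrame processedDf with a timestamp_col column, and a JDBC-backed DataFrame jdbcDf (all of these names are placeholders):

import spark.implicits._
import org.apache.spark.sql.functions.max

// Any durable path works here (HDFS, S3, local disk); this one is made up.
val offsetPath = "/data/state/mssql_offset"

// End of a run: persist the latest processed timestamp as a tiny one-row file.
processedDf.agg(max("timestamp_col").as("offset"))
  .coalesce(1).write.mode("overwrite").parquet(offsetPath)

// Start of the next session: read the offset back and filter the new batch with it.
val previousOffset = spark.read.parquet(offsetPath).first().getTimestamp(0)
val incremental = jdbcDf.filter($"timestamp_col" > previousOffset)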
Let me know if further information is needed

Is Hive on Tez with ORC really better than Spark SQL for ETL performance?

I have little experience with Hive and am currently learning Spark with Scala. I am curious to know whether Hive on Tez is really faster than Spark SQL. I searched many forums with test results, but they compare older versions of Spark and most of them were written in 2015. The main points I gathered are summarized below:
ORC does about the same as Parquet in Spark
The Tez engine gives performance comparable to the Spark engine
Joins are better/faster in Hive than in Spark
I feel like Hortonworks supports Hive more than Spark, and Cloudera vice versa.
Sample links:
link1
link2
link3
Initially I thought Spark would be faster than anything because of its in-memory execution, but after reading some articles I gathered that existing Hive is also being improved with new concepts like Tez, ORC, LLAP, etc.
We are currently running PL/SQL on Oracle and migrating to big data since volumes are increasing. My requirements are ETL-style batch processing, with the data involved in every weekly batch run described below. Data will increase widely soon.
Input/lookup data are in CSV/text formats and are updated into tables
Two input tables, each with 5 million rows and 30 columns
30 lookup tables used to generate each column of the output table, which contains around 10 million rows and 220 columns
Multiple joins are involved, both inner and left outer, since many lookup tables are used
Kindly advise which of the two methods below I should choose for better performance and readability, and for ease of including minor column updates in future production deployments.
Method 1:
Hive on Tez with ORC tables
Python UDFs via the TRANSFORM option
Joins with performance tuning like map joins
Method 2:
Spark SQL with Parquet format, converted from the text/CSV input
Scala for UDFs
Hopefully we can perform multiple inner and left outer joins in Spark (a rough sketch of what I have in mind follows below)
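The sketch below is only an illustration of that last point (the file paths, table and column names are invented): converting the CSV inputs to Parquet and chaining inner and left outer joins in Spark SQL.

// Convert the text/CSV input to Parquet once, then work off the Parquet copy.
val input = spark.read.option("header", "true").csv("/data/in/input_table.csv")
input.write.mode("overwrite").parquet("/data/parquet/input_table")

val fact    = spark.read.parquet("/data/parquet/input_table")
val lookup1 = spark.read.parquet("/data/parquet/lookup1")
val lookup2 = spark.read.parquet("/data/parquet/lookup2")

// Inner join on one lookup, left outer join on another.
val enriched = fact
  .join(lookup1, Seq("key1"))                 // inner join
  .join(lookup2, Seq("key2"), "left_outer")   // left outer join

enriched.write.mode("overwrite").parquet("/data/parquet/output_table")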
The best way to implement a solution to your problem is as below.
To load the data into tables, Spark looks like a good option to me. You can read the tables from the Hive metastore, perform the incremental updates using some kind of windowing functions, and register the results in Hive. While ingesting, as data is populated from the various lookup tables, you are able to write the code in a programmatic way in Scala.
But at the end of the day, there needs to be a query engine that is very easy to use. Since your Spark program registers the tables with Hive, you can use Hive.
Hive supports three execution engines:
Spark
Tez
MapReduce
Tez is mature; Spark is evolving with various commits from Facebook and the community.
Business users can understand Hive very easily as a query engine, as it is much more mature in the industry.
In short, use Spark to process the data for the daily processing and register the results with Hive.
Create the business users in Hive.
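A rough sketch of the "process with Spark, register with Hive" flow described above, using a windowing function for the incremental update. The table and column names (staging_ops, ops_history, op_id, updated_at) are placeholders, not anything from the question:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, row_number}

val incoming = spark.table("staging_ops")    // this week's batch, already loaded into Hive
val existing = spark.table("ops_history")    // previously registered Hive table

// Keep only the latest version of each record: a simple window-based upsert.
val w = Window.partitionBy("op_id").orderBy(col("updated_at").desc)
val merged = existing.union(incoming)
  .withColumn("rn", row_number().over(w))
  .filter(col("rn") === 1)
  .drop("rn")

// Register the result in the Hive metastore so business users can query it from Hive.
merged.write.mode("overwrite").format("parquet").saveAsTable("ops_history_new")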

Greenplum, Pivotal HD + Spark, or HAWQ for TBs of Structured Data?

I have TBs of structured data in a Greenplum DB. I need to run what is essentially a MapReduce job on my data.
I found myself reimplementing at least some of the features of MapReduce just so that this data would fit in memory (in a streaming fashion).
Then I decided to look elsewhere for a more complete solution.
I looked at Pivotal HD + Spark because I am using Scala and the Spark benchmarks are a wow-factor. But I believe the datastore behind this, HDFS, is going to be less efficient than Greenplum. (Note the "I believe". I would be happy to learn I am wrong, but please give some evidence.)
So, to keep the Greenplum storage layer, I looked at Pivotal's HAWQ, which is basically Hadoop with SQL on Greenplum.
A lot of features are lost with this approach, mainly the use of Spark.
Or is it better to just go with the built-in Greenplum features?
So I am at the crossroads of not knowing which way is best. I want to process TBs of data that fits the relational DB model well, and I would like the benefits of Spark and MapReduce.
Am I asking for too much?
Before posting my answer, I want to rephrase the question based on my understanding (to make sure I understand the question correctly) as follows:
You have TBs of data that fits the relational DB model well, and you want to query the data using SQL most of the time (I think that's why you put it into Greenplum DB), but sometimes you want to use Spark and MapReduce to access the data because of their flexibility.
If my understanding is correct, I strongly recommend that you should have a try with HAWQ. Some features of HAWQ make it fit your requirements perfectly (Note: I may be biased, since I am a developer of HAWQ).
First of all, HAWQ is a SQL-on-Hadoop database, which means it employs HDFS as its datastore. HAWQ does not keep the Greenplum DB storage layer.
Secondly, it is hard to argue against the claim that "HDFS is going to be less efficient than Greenplum". But the performance difference is not as significant as you might think. We have done some optimizations for accessing HDFS data. One example is that, if we find a data block is stored locally, we read it directly from disk rather than through normal RPC calls.
Thirdly, there is a feature of HAWQ named HAWQ InputFormat for MapReduce (which Greenplum DB doesn't have). With that feature, you can write Spark and MapReduce code to access the HAWQ data easily and efficiently. Unlike the DBInputFormat provided by Hadoop (which would make the master a performance bottleneck, since all the data goes through the master first), HAWQ InputFormat for MapReduce lets your Spark and MapReduce code access the HAWQ data stored in HDFS directly. It is totally distributed, and thus very efficient.
Lastly, of course, you still can use SQL to query your data with HAWQ, just like what you do with Greenplum DB.
Have you tried using the Spark JDBC connector to read the Greenplum data?
Use the partitionColumn, lowerBound, upperBound, and numPartitions options to split the Greenplum table across multiple Spark workers.
For example, you can adapt the following:
import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}

object SparkGreenplumApp extends App {
  val conf = new SparkConf().setAppName("SparkGreenplumTest")
  val sparkContext = new SparkContext(conf)
  val sqlContext = new SQLContext(sparkContext)

  // The subquery is executed by Greenplum; "tablename" is a placeholder.
  // The partition column must appear in the subquery's select list.
  val df = sqlContext.read.format("jdbc").options(Map(
    "url" -> "jdbc:postgresql://servername:5432/databasename?user=username&password=*******",
    "dbtable" -> "(select id, col, col2, col3 from tablename where datecol > '2017-01-01' and datecol < '2017-02-02') as events",
    "partitionColumn" -> "id",
    "lowerBound" -> "100",
    "upperBound" -> "500",
    "numPartitions" -> "2",
    "driver" -> "org.postgresql.Driver")).load()
}

Group by In HBase

I know almost nothing about HBase. Sorry for the basic questions.
Imagine I have a table of 100 billion rows with 10 int columns, one datetime, and one string column.
Does HBase allow querying this table and grouping the result by key (even a composite key)?
If so, does it have to run a MapReduce job for it?
How do you feed it the query?
Can HBase, in general, perform real-time-like queries on a table?
Data aggregation in HBase intersects with the "real-time analytics" need. While HBase is not built for this type of functionality, there is a lot of need for it, so a number of ways to do it have been / will be developed.
1) Register the HBase table as an external table in Hive and do the aggregations there. Data will be accessed via the HBase API, which is not that efficient. "Configuring Hive with HBase" is a discussion about how it can be done.
It is the most powerful way to group by HBase data. It does imply running MR jobs, but by Hive, not by HBase.
2) You can write your own MR job working with the HBase data sitting in HFiles in HDFS. This will be the most efficient way, but not simple, and the data you process would be somewhat stale. It is most efficient because the data will not be transferred via the HBase API; instead it will be accessed right from HDFS in a sequential manner.
3) The next version of HBase will contain coprocessors, which will be able to run aggregations inside specific regions. You can think of them as a kind of stored procedures from the RDBMS world.
4) An in-memory, inter-region MR job, parallelized within one node, is also planned for future HBase releases. It will enable somewhat more advanced analytical processing than coprocessors.
FAST RANDOM READS = PRE-PREPARED data sitting in HBase!
Use HBase for what it is...
1. A place to store a lot of data.
2. A place from which you can do super fast reads.
3. A place where SQL is not gonna do you any good (use Java).
Although you can read data from HBase and do all sorts of aggregates right in Java data structures before you return your aggregated result, it's best to leave the computation to MapReduce. From your questions, it seems as if you want the source data for the computation to sit in HBase. If this is the case, the route you want to take is to have HBase as the source data for a MapReduce job, do the computations on that, and return the aggregated data. But then again, why would you read from HBase to run a MapReduce job? Just leave the data sitting in HDFS / Hive tables and run MapReduce jobs on them, THEN load the data into HBase tables "pre-prepared" so that you can do super fast random reads from it.
Once you have the pre-aggregated data in HBase, you can use Crux (http://github.com/sonalgoyal/crux) to further drill, slice, and dice your HBase data. Crux supports composite and simple keys, with advanced filters and group by.