Lazy loading Azure Data Explorer data into a Databricks workspace - pyspark

My company is implementing Azure Data Explorer (ADX) as a backend. They also want to use Databricks for Data Science projects including data exploration. I'm in charge of connecting Databricks to ADX.
I first tried the Azure Kusto package:
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder
from azure.kusto.data.exceptions import KustoServiceError
from azure.kusto.data.helpers import dataframe_from_result_table
import pandas as pd
...
df = dataframe_from_result_table(RESPONSE.primary_results[0])
Full steps here
Functionally this works well.
But it completely loses the lazy loading feature of both ADX and Databricks-Spark.
I assume that's because df is just a Pandas dataframe; also, if I try to convert it to a Hive table it persists the data, which is not what we want, as we need fresh online data rather than a local copy.
The next thing I tried was loading the data into a Spark dataframe. I tried the following code (after installing the relevant libraries):
df = spark.read \
    .format("com.microsoft.kusto.spark.datasource") \
    .option("kustoCluster", KUSTO_URI) \
    .option("kustoDatabase", KUSTO_DATABASE) \
    .option("kustoQuery", "some_table_in_adx") \
    .option("kustoAadAppId", CLIENT_ID) \
    .option("kustoAadAppSecret", CLIENT_SECRET) \
    .option("kustoAadAuthorityID", AAD_TENANT_ID) \
    .load()
which again loads the data into a Spark dataframe without any issue.
However, performance-wise it's nowhere near a direct query in ADX. A count on a table of 600 thousand records is sub-second in ADX, while it takes more than 20 seconds in the Databricks notebook on a DS3_V2 (14 GB, 4 cores).
Before even considering a saveAsTable or createOrReplaceTempView, I wonder why I'm experiencing this performance issue. So my questions are:
Does this connection use lazy loading (I know doing a count is not the right way to check that)?
If not, is there any way to have lazy loading instead of loading the full table into dataframes before doing operations?
What would happen if I create a Hive table from the Spark dataframe? Will it copy the data, or will it still be a virtual table pointing to ADX?
Thanks for your help

For the Python part -
A Pandas dataframe is not a Spark dataframe and therefore is not lazily computed; to use the two together you may use Spark parallelize.
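For example (a sketch; df is the Pandas dataframe from the question's first snippet, and createDataFrame is the dataframe-level counterpart of parallelize):
spark_df = spark.createDataFrame(df)   # the ADX query has already run eagerly; this only distributes the result across the cluster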
For the Spark ADX connector -
This is indeed lazy loading. Nothing is evaluated until some action is requested - like the count.
If the count is done with Spark syntax, i.e. spark.read.kusto...count(), then all the data is first brought into Spark and the count runs there - so 20 seconds sounds legit. To compare with an ADX-side query, simply change the value of "kustoQuery" to "some_table_in_adx | count", which performs the count on the ADX side and uploads to Spark just the final integer result.
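For illustration, a minimal sketch of both variants, reusing the connection options from the question (KUSTO_URI, KUSTO_DATABASE, CLIENT_ID, CLIENT_SECRET and AAD_TENANT_ID are assumed to be defined as above):

# Spark-side count: all rows are shipped to Spark first, then counted there
spark_side = (spark.read
    .format("com.microsoft.kusto.spark.datasource")
    .option("kustoCluster", KUSTO_URI)
    .option("kustoDatabase", KUSTO_DATABASE)
    .option("kustoQuery", "some_table_in_adx")
    .option("kustoAadAppId", CLIENT_ID)
    .option("kustoAadAppSecret", CLIENT_SECRET)
    .option("kustoAadAuthorityID", AAD_TENANT_ID)
    .load())
print(spark_side.count())   # slow: the full table is materialized in Spark

# ADX-side count: the aggregation runs in ADX and only a single row comes back
adx_side = (spark.read
    .format("com.microsoft.kusto.spark.datasource")
    .option("kustoCluster", KUSTO_URI)
    .option("kustoDatabase", KUSTO_DATABASE)
    .option("kustoQuery", "some_table_in_adx | count")
    .option("kustoAadAppId", CLIENT_ID)
    .option("kustoAadAppSecret", CLIENT_SECRET)
    .option("kustoAadAuthorityID", AAD_TENANT_ID)
    .load())
adx_side.show()             # fast: only the final count result is transferred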
The connector offers either a simple Kusto query or a distributed export command via the readMode option; ForceSingleMode performs a simple Kusto query from the driver node, as explained in the docs.
As Hive tables operate over dataframes, once you create a dataframe from a Kusto read, table operations on it will also be lazily evaluated.
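For instance (a sketch; df is the dataframe returned by the Kusto read above, and the view name is arbitrary):
df.createOrReplaceTempView("adx_some_table")              # no data is copied at this point
spark.sql("select count(*) from adx_some_table").show()   # evaluation happens only when the action runs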

Related

How can I see the multiple queries that get generated for reading data for each partition in parallel from a database using Spark?

I am trying to read data from a Postgres table using Spark. Initially I was reading the data on a single thread, without using lowerBound, upperBound, partitionColumn and numPartitions. The data that I'm reading is huge, around 120 million records, so I decided to read the data in parallel using partitionColumn. I am able to read the data, but it takes more time to read it with 12 parallel threads than with a single thread. I am unable to figure out how I can see the 12 SQL queries that get generated to fetch the data in parallel for each partition.
The code that I am using is:
val query = s"(select * from db.testtable) as testquery"
val df = spark.read
  .format("jdbc")
  .option("url", jdbcurl)
  .option("dbtable", query)
  .option("partitionColumn", "transactionbegin")
  .option("numPartitions", 12)
  .option("driver", "org.postgresql.Driver")
  .option("fetchsize", 50000)
  .option("user", "user")
  .option("password", "password")
  .option("lowerBound", "2019-01-01 00:00:00")
  .option("upperBound", "2019-12-31 23:59:00")
  .load
df.count()
Where and how can I see the 12 parallel queries that are getting created to read the data in parallel on each thread?
I am able to see that 12 tasks are created in the Spark UI, but I am not able to find a way to see the 12 separate queries that are generated to fetch the data in parallel from the Postgres table.
Is there any way I can push the filter down so that it reads only this year's worth of data, in this case 2019?
The SQL statement is printed at the "info" log level, see here. You need to change Spark's log level to "info" to see the SQL. Additionally, the where condition for each partition is printed on its own, as shown here.
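In PySpark, for example, one way to raise the log level for the current driver session is (the same call exists on the Scala SparkContext):
spark.sparkContext.setLogLevel("INFO")   # the generated per-partition SQL/WHERE clauses are then logged at INFO, as described above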
You can also view the SQL in your PostgreSQL database using the pg_stat_statements view, which requires a separate extension to be installed. There is also a way to log the SQL statements and see them, as mentioned here.
I suspect the parallelism is slow for you because there is no index on the "transactionbegin" column of your table. The partitionColumn should be indexed, otherwise every parallel session will scan the entire table, which will choke the database.
It's not exactly the multiple queries, but it will show the execution plan that Spark has optimized based on your query. It may not be perfect, depending on the stages you have to execute.
You can write your DAG in the form of a DataFrame and, before actually calling an action, use the explain() method on it. Reading the output can be challenging, but it's upside down: the source is at the bottom. It may seem a little unusual the first time you read it, so start with basic transformations and go step by step.
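As a PySpark sketch of both ideas together (same connection settings as in the question; here the 2019 filter is pushed into the dbtable subquery so PostgreSQL only returns that year's rows):

year_query = """(select * from db.testtable
                 where transactionbegin >= '2019-01-01'
                   and transactionbegin <  '2020-01-01') as testquery"""
df = (spark.read
      .format("jdbc")
      .option("url", jdbcurl)
      .option("dbtable", year_query)
      .option("partitionColumn", "transactionbegin")
      .option("numPartitions", 12)
      .option("lowerBound", "2019-01-01 00:00:00")
      .option("upperBound", "2019-12-31 23:59:00")
      .option("driver", "org.postgresql.Driver")
      .option("user", "user")
      .option("password", "password")
      .option("fetchsize", 50000)
      .load())
df.explain()   # prints the physical plan, including the JDBC scan node and any pushed filters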

GCP Dataproc spark consuming BigQuery

I'm very new to GCP (Google Cloud Platform), so I hope my question won't look too silly.
Background:
The main goal is to gather a few large tables from BigQuery and apply a few transformations. Because of the size of the tables I'm planning to use Dataproc, deploying a PySpark script; ideally I would be able to use sqlContext to apply a few SQL queries to the DFs (tables pulled from BQ). Finally, I could easily dump this info into a file within a Cloud Storage bucket.
Questions :
Can I use import google.datalab.bigquery as bq within my PySpark script?
Is this proposed scheme the most efficient, or should I validate another one instead? Keep in mind that I need to create many temporary queries, which is why I thought of Spark.
I expect to use pandas and bq to read the query results as a pandas df, following this example. Later, I might use sc.parallelize from Spark to transform the pandas df into a Spark df. Is this the right approach?
my script
Update:
After a back and forth with @Tanvee, who kindly attended to this question, we concluded that GCP requires an intermediate staging step when you need to read data from BigQuery into Dataproc. Briefly, your Spark or Hadoop script needs a temporary bucket where the table data is staged before it is brought into Spark.
References:
Big Query Connector
Deployment
thanks so much
You will need to use the BigQuery connector for Spark. There are some examples in the GCP documentation here and here. It will create an RDD which you can convert to a dataframe, and then you will be able to perform all the typical transformations. Hope that helps.
You can directly use the following options to connect to a BigQuery table from Spark.
You can also use the spark-bigquery connector https://github.com/samelamin/spark-bigquery to directly run your queries on Dataproc using Spark.
https://github.com/GoogleCloudPlatform/spark-bigquery-connector is a new connector which is in beta. It is a Spark Data Source API for BigQuery and is easy to use.
Please refer to the following link:
Dataproc + BigQuery examples - any available?
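For reference, a minimal read with the Data Source API connector mentioned above looks roughly like this (a sketch; the table name and bucket path are placeholders, and the connector jar is assumed to be available on the Dataproc cluster):

# Read a BigQuery table through the spark-bigquery-connector data source
df = (spark.read
      .format("bigquery")
      .option("table", "my_project.my_dataset.my_table")   # placeholder table reference
      .load())

# Apply SQL-style transformations, then dump the result to a Cloud Storage bucket
df.createOrReplaceTempView("bq_table")
result = spark.sql("SELECT col_a, count(*) AS cnt FROM bq_table GROUP BY col_a")   # placeholder query
result.write.mode("overwrite").csv("gs://my-bucket/output/")                       # placeholder bucket path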

Is Hive on Tez with ORC really faster than Spark SQL for ETL?

I have little experience in Hive and am currently learning Spark with Scala. I am curious to know whether Hive on Tez is really faster than Spark SQL. I searched many forums with test results, but they compared older versions of Spark and most of them were written in 2015. The main points are summarized below:
ORC performs about the same as Parquet in Spark
The Tez engine gives performance comparable to the Spark engine
Joins are better/faster in Hive than in Spark
I feel that Hortonworks puts more support behind Hive than Spark, and Cloudera vice versa.
sample links :
link1
link2
link3
Initially I thought Spark would be faster than anything because of its in-memory execution, but after reading some articles I gathered that the existing Hive is also being improved with new concepts like Tez, ORC, LLAP, etc.
We are currently running PL/SQL on Oracle and migrating to big data since volumes are increasing. My requirements are ETL-style batch processing; the data details involved in every weekly batch run are below. Data volume will grow considerably soon.
Input/lookup data are in csv/text formats and are updated into tables
Two input tables, each with 5 million rows and 30 columns
30 lookup tables used to generate each column of the output table, which contains around 10 million rows and 220 columns
Multiple joins involved, such as inner and left outer, since many lookup tables are used
Kindly advise which of the methods below I should choose for better performance and readability, and so that it is easy to include minor column updates in future production deployments.
Method 1:
Hive on Tez with ORC tables
Python UDFs through the TRANSFORM option
Joins with performance tuning like map joins
Method 2:
Spark SQL with Parquet format, converted from text/csv
Scala for UDFs
Hoping we can perform multiple inner and left outer joins in Spark
The best way to implement a solution to your problem is as below.
To load the data into the tables, Spark looks like a good option to me. You can read the tables from the Hive metastore, perform the incremental updates using some kind of windowing functions, and register the results in Hive. While ingesting, as data is populated from the various lookup tables, you are able to write the code programmatically in Scala.
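As a rough PySpark sketch of that flow (the same applies in Scala; the table names here are hypothetical placeholders, and enableHiveSupport assumes a Hive metastore is configured for the cluster):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("weekly-etl")
         .enableHiveSupport()     # assumes a Hive metastore is available
         .getOrCreate())

# Read an input table and a lookup table from the Hive metastore
input_df = spark.table("staging.input_table")          # placeholder name
lookup_df = spark.table("staging.lookup_table")        # placeholder name

# Example transformation: left outer join to enrich the input with lookup columns
enriched = (input_df
            .join(lookup_df, on="lookup_key", how="left_outer")
            .withColumn("load_date", F.current_date()))

# Register the result back in Hive so downstream users can query it with a familiar engine
enriched.write.mode("overwrite").saveAsTable("reporting.output_table")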
But at the end of the day, there needs to be a query engine that is very easy to use. Since your Spark program registers the tables with Hive, you can use Hive for that.
Hive supports three execution engines:
Spark
Tez
MapReduce
Tez is mature; Spark is evolving with various commits from Facebook and the community.
Business users can understand Hive very easily as a query engine, as it is much more mature in the industry.
In short, use Spark to process the data for the daily processing and register the results with Hive.
Create business users in Hive.

Greenplum, Pivotal HD + Spark, or HAWQ for TBs of Structured Data?

I have TBs of structured data in a Greenplum DB. I need to run what is essentially a MapReduce job on my data.
I found myself reimplementing at least the features of MapReduce just so that this data would fit in memory (in a streaming fashion).
Then I decided to look elsewhere for a more complete solution.
I looked at Pivotal HD + Spark because I am using Scala and Spark benchmarks are a wow-factor. But I believe the datastore behind this, HDFS, is going to be less efficient than Greenplum. (NOTE the "I believe". I would be happy to know I am wrong but please give some evidence.)
So to keep with the Greenplum storage layer I looked at Pivotal's HAWQ which is basically Hadoop with SQL on Greenplum.
There are a lot of features lost with this approach. Mainly the use of Spark.
Or is it better to just go with the built-in Greenplum features?
So I am at the crossroads of not knowing which way is best. I want to process TBs of data that fits the relational DB model well, and I would like the benefits of Spark and MapReduce.
Am I asking for too much?
Before posting my answer, I want to rephrase the question based on my understanding (to make sure I understand the question correctly) as follows:
You have TBs of data that fits the relational DB model well, and you want to query the data using SQL most of the time (I think that's why you put it into Greenplum DB), but sometimes you want to use Spark and MapReduce to access the data because of their flexibility.
If my understanding is correct, I strongly recommend that you should have a try with HAWQ. Some features of HAWQ make it fit your requirements perfectly (Note: I may be biased, since I am a developer of HAWQ).
First of all, HAWQ is a SQL on Hadoop database, which means it employs HDFS as its datastore. HAWQ doesn't keep with the Greenplum DB storage layer.
Secondly, it is hard to argue against the claim that "HDFS is going to be less efficient than Greenplum". But the performance difference is not as significant as you might think. We have done some optimizations for accessing HDFS data. One example is that, if we find one data block is stored locally, we read it directly from disk rather than through normal RPC calls.
Thirdly, there is a feature with HAWQ named HAWQ InputFormat for MapReduce (which Greenplum DB doesn't have). With that feature, you can write Spark and MapReduce code to access the HAWQ data easily and efficiently. Different from the DBInputFormat provided by Hadoop (which would make the master become the performance bottleneck, since all the data goes through the master first), HAWQ InputFormat for MapReduce lets your Spark and MapReduce code access the HAWQ data stored in HDFS directly. It is totally distributed, and thus is very efficient.
Lastly, of course, you still can use SQL to query your data with HAWQ, just like what you do with Greenplum DB.
Have you tried using the Spark JDBC connector to read the Greenplum data?
Use the partition column, lower bound, upper bound, and numPartitions to split the Greenplum table across multiple Spark workers.
For example, you could use something like this:
import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}

object SparkGreenplumApp extends App {
  val conf = new SparkConf().setAppName("SparkGreenplumTest")
  val sparkContext = new SparkContext(conf)
  val sqlContext = new SQLContext(sparkContext)
  import sqlContext.implicits._

  // The subquery needs a FROM clause, and the partition column (id) must be in the select list;
  // "schemaname.tablename" is a placeholder for the actual Greenplum table.
  val df = sqlContext.load("jdbc", Map(
    "url" -> "jdbc:postgresql://servername:5432/databasename?user=username&password=*******",
    "dbtable" -> "(select id, col, col2, col3 from schemaname.tablename where datecol > '2017-01-01' and datecol < '2017-02-02') as events",
    "partitionColumn" -> "id",
    "lowerBound" -> "100",
    "upperBound" -> "500",
    "numPartitions" -> "2",
    "driver" -> "org.postgresql.Driver"))
}

Incrementally updating/adding data on HDFS

In my application there are 4 tables, and each table has more than 1 million rows.
Currently my Java-based reporting engine joins all the tables and gets the data to show in the reports.
Now I want to introduce Hadoop using Sqoop. I have installed Hadoop 2.2 and Sqoop 1.9.
I have done a small POC to import the data into HDFS. The problem is that it creates a new data file every time.
My need is:
There would be a scheduler which runs once a day, and it will:
Pick up the data from all four tables and load it into HDFS using Sqoop.
Pig will do some transformation and joining on the data and will prepare the concrete denormalized data.
Sqoop will then export this data into a separate reporting table.
I have a few questions around this:
Do I need to import the whole data set from the DB to HDFS on every Sqoop import call?
In the master table some data is updated and some data is new; how can I handle that? Can I merge the data while loading into HDFS?
At the time of export, do I need to export the whole data set again to the reporting table? If yes, how would I do that?
Please help me out in this case, and please suggest a better solution if you have one.
Sqoop supports incremental and delta imports. Check the Sqoop documentation here for more details.
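For illustration, an incremental import of one of the source tables might look roughly like this (a sketch; the connection string, table name, key and timestamp columns are placeholders to adapt):

sqoop import \
  --connect jdbc:mysql://dbhost/appdb \
  --username appuser \
  --password-file /user/app/.dbpass \
  --table master_table \
  --target-dir /data/raw/master_table \
  --incremental lastmodified \
  --check-column last_updated_ts \
  --last-value "2014-01-01 00:00:00" \
  --merge-key id

With --incremental lastmodified plus --merge-key, Sqoop reconciles updated rows with what is already in HDFS instead of only appending new files; if you wrap this in a saved Sqoop job, Sqoop remembers the last value between runs, so the daily scheduler only pulls the delta rather than the whole table.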