I have been using Spark with Scala for a long time and am new to PySpark.
I am trying to set up PyCharm for a Spark project. Everything is set up from a dependencies point of view (pip install pyspark, for example). I can create a new Python file and write Spark code, and everything is resolved. Here's a snippet of the code:
from pyspark.sql import SparkSession
spark = SparkSession.builder.enableHiveSupport().getOrCreate()
data = spark.sql('select * from db.tbl')
At this point, should I expect data to be a DataFrame? When I type data. I expect PyCharm to suggest the possible methods such as filter, join, etc. in a drop-down, but it does not.
Is there anything more I need to do for this to work? I am using Python 2.7 (I have to, since that's what our Hadoop cluster supports).
In Python, variables are dynamically typed, so you declare them without their types.
But starting from Python 3.6, you can annotate a variable's type like this:
from pyspark.sql import DataFrame

data: DataFrame = spark.sql('select * from db.tbl')
This way you let PyCharm know the type of data, and it will suggest the possible methods for that object.
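Since the question mentions being stuck on Python 2.7, where the annotation syntax above is not available, PyCharm also understands PEP 484 type comments; a minimal sketch of the same idea:
from pyspark.sql import DataFrame, SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()
data = spark.sql('select * from db.tbl')  # type: DataFrame
# PyCharm reads the "# type:" comment and offers filter, join, etc. on data.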
My company is implementing Azure Data Explorer (ADX) as a backend. They also want to use Databricks for Data Science projects including data exploration. I'm in charge of connecting Databricks to ADX.
I first tried the Azure Kusto package.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder
from azure.kusto.data.exceptions import KustoServiceError
from azure.kusto.data.helpers import dataframe_from_result_table
import pandas as pd
...
df = dataframe_from_result_table(RESPONSE.primary_results[0])
Full steps here
Functionally this works well.
But it completely loses the lazy loading feature of both ADX and Databricks/Spark.
I assume that's because df is just a pandas DataFrame. Also, if I try to convert this to a Hive table it persists the data, which is not required: we need fresh online data and we don't want a local copy.
The next thing I tried was to load this into a Spark DataFrame. I tried the following code (after installing the relevant libraries):
df = spark.read \
    .format("com.microsoft.kusto.spark.datasource") \
    .option("kustoCluster", KUSTO_URI) \
    .option("kustoDatabase", KUSTO_DATABASE) \
    .option("kustoQuery", "some_table_in_adx") \
    .option("kustoAadAppId", CLIENT_ID) \
    .option("kustoAadAppSecret", CLIENT_SECRET) \
    .option("kustoAadAuthorityID", AAD_TENANT_ID) \
    .load()
which again loads the data into a Spark DataFrame without any issue.
However, performance-wise it's far from the direct query in ADX. A count on a table of 600 thousand records is sub-second in ADX, while it takes more than 20 seconds in the Databricks notebook on a DS3_V2 (14 GB, 4 cores).
Before even considering a saveAsTable or createOrReplaceTempView, I wonder why I'm experiencing this performance issue. So my questions are:
Does this connection use lazy loading (I know doing a count is not the right way to check that)?
If not, is there any way to have lazy loading instead of loading the full table into DataFrames before doing operations?
What would happen if I create a Hive table from the Spark DataFrame? Will it copy the data, or still be a virtual table pointing to ADX?
Thanks for your help
For the Python part -
pandas is not a Spark DataFrame, so it's not lazily computed; to use the two together you can hand the pandas data over to Spark (parallelize/createDataFrame), as in the sketch below.
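A rough example, assuming a SparkSession named spark (as in a Databricks notebook) and a pandas result like the one returned by dataframe_from_result_table:
import pandas as pd

pdf = pd.DataFrame({"id": [1, 2, 3], "value": ["a", "b", "c"]})  # stand-in for the Kusto client result
sdf = spark.createDataFrame(pdf)  # now a Spark DataFrame, but the rows were still collected on the driver first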
For the Spark ADX connector -
This is indeed lazy loading. Nothing is evaluated until some action is requested - like the count.
If the count is done with Spark syntax, i.e. spark.read.kusto...count(), then all the data is first brought into Spark and the count runs there - so 20 seconds sounds legit. To compare, simply change the value of "kustoQuery" to "some_table_in_adx | count", which performs the count on the ADX side and uploads to Spark only the final integer result.
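As a sketch, reusing the option values from the question, the pushed-down count would look like:
count_df = spark.read \
    .format("com.microsoft.kusto.spark.datasource") \
    .option("kustoCluster", KUSTO_URI) \
    .option("kustoDatabase", KUSTO_DATABASE) \
    .option("kustoQuery", "some_table_in_adx | count") \
    .option("kustoAadAppId", CLIENT_ID) \
    .option("kustoAadAppSecret", CLIENT_SECRET) \
    .option("kustoAadAuthorityID", AAD_TENANT_ID) \
    .load()

count_df.show()  # a single row holding the count computed on the ADX side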
The connector offers either a simple Kusto query or a distributed export command via the readMode option; ForceSingleMode performs a simple Kusto query from the driver node - as explained in the docs.
As for Hive tables operating over DataFrames - once you create a DataFrame from a Kusto read, table operations on it will also be lazily evaluated.
I'm quite new to PySpark, and I'm confused about the difference between a Spark DataFrame (created, for example, from an RDD) and a pandas-on-Spark DataFrame.
Are those the same object? Looking at the type, it seems they are different classes.
What's the core difference, if any? (I know that with a pandas-on-Spark DataFrame you can use almost the same syntax as pandas on a distributed DataFrame, but I'm wondering whether that is the only difference.)
Thanks
Answering directly:
Are those the same object? Looking at the type, it seems they are different classes.
No, they are completely different objects (classes).
What's the core difference, if any?
A PySpark DataFrame is an object from the PySpark library, with its own API, and it can be constructed from a wide range of sources such as structured data files, tables in Hive, external databases, or existing RDDs.
A pandas-on-Spark DataFrame and a pandas DataFrame are similar. However, the former is distributed while the latter lives on a single machine. When converting between the two, the data is transferred between the multiple machines and the single client machine.
A pandas DataFrame is an object from the pandas library, also with its own API, and it too can be constructed by a wide range of methods.
Also, I recommend checking this documentation about Pandas on Spark
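A quick way to see the three classes side by side (pandas-on-Spark ships as pyspark.pandas from Spark 3.2 on; earlier it was the separate Koalas package, so treat the method names below as version-dependent):
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

sdf = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "letter"])  # pyspark.sql.DataFrame
psdf = sdf.pandas_api()   # pandas-on-Spark DataFrame, still distributed
pdf = psdf.to_pandas()    # plain pandas DataFrame, collected onto the driver

print(type(sdf), type(psdf), type(pdf))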
I want to convert a Scala DataFrame into a pandas DataFrame:
val collection = spark.read.sqlDB(config)
collection.show()
// Should be like df = collection
You are asking for a way to use a Python library from Scala, which is a bit odd to me. Are you sure you have to do that? Maybe you already know this, but Scala DataFrames have a good API that will probably give you the functionality you need from pandas.
If you still need to use pandas, I would suggest you write the data that you need to a file (a CSV, for example). Then, using a Python application, you can load that file into a pandas DataFrame and work from there.
Trying to create a pandas object from Scala is probably overcomplicating things (and I am not sure it is currently possible).
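The pandas side of that hand-off could look like this (the path is hypothetical - wherever the Scala job writes its CSV output):
import pandas as pd

pdf = pd.read_csv("/tmp/collection_export/part-00000.csv")  # file produced by collection.write.csv(...)
print(pdf.head())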
I think if you want to use a pandas-based API in Spark code, you can install the Koalas Python library. Then whatever function you want from the pandas API, you can embed it directly in Spark code.
To install Koalas:
pip install koalas
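A rough sketch of what that looks like once Koalas is installed (the DataFrame here is illustrative):
import databricks.koalas as ks  # importing koalas adds to_koalas() to Spark DataFrames
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sdf = spark.range(5)   # any Spark DataFrame

kdf = sdf.to_koalas()  # pandas-style API, still executed on Spark
print(kdf.head())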
I have to migrate an application written in Scala 2.10.4 with Spark 1.6 to Spark 2.1.
The application processes text files of around 7 GB and contains several RDD transformations.
I was told that recompiling it with Scala 2.11 should be enough to make it work with Spark 2.1. This sounds strange to me, as I know there are some relevant changes in Spark 2, like:
Introduction of SparkSession object
Merge of the Dataset and DataFrame APIs
I managed to recompile the application against Spark 2 with Scala 2.11, with only minor changes due to Kryo serializer registration.
I still have some runtime errors that I am trying to solve, and I am trying to figure out what will come next.
My question is about which changes are "necessary" to make the application work as before, which changes are "recommended" in terms of performance optimization (I need to keep at least the same level of performance), and whatever else you think could be useful for a newbie in Spark :).
Thanks in advance!
I did the same a year ago. There are not many changes you need to make; here is what comes to mind:
If your code is cluttered with SparkContext/SQLContext usages, just extract these from the SparkSession instance at the beginning of your code.
In Spark 1.6, df.map switched to the RDD API; in Spark 2.x you stay in the DataFrame/Dataset API (which now has its own map method). To get the same functionality as before, replace df.map with df.rdd.map. The same holds for df.foreach, df.mapPartitions, etc. (see the sketch after this list).
unionAll in Spark 1.6 is just union in Spark 2.x.
The Databricks CSV library (spark-csv) is now included in Spark.
When you insert into a partitioned Hive table, the partition columns must now come last in the schema; in Spark 1.6 they had to come first.
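The question is about a Scala code base, but the equivalent moves look roughly like this in PySpark terms (a minimal sketch, not taken from the original application):
from pyspark.sql import SparkSession

# One SparkSession now wraps the old SparkContext/SQLContext/HiveContext entry points.
spark = SparkSession.builder.appName("migrated-app").enableHiveSupport().getOrCreate()
sc = spark.sparkContext

df = spark.range(5)

# Row-level logic that used df.map in 1.6 drops to the RDD explicitly:
doubled = df.rdd.map(lambda row: row.id * 2)

# unionAll (1.6) is plain union in 2.x:
combined = df.union(df)

# CSV support is built in, no external Databricks package needed:
# csv_df = spark.read.csv("/path/to/file.csv", header=True)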
What you should consider (but would require more work):
Migrate RDD code to Dataset code.
Enable the CBO (cost-based optimizer).
collect_list can now be used with structs; in Spark 1.6 it could only be used with primitives. This can simplify some things (see the sketch after this list).
The Data Source API was improved/unified.
The left_anti join type was introduced.
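A small PySpark-flavored sketch of two of those items (collect_list over structs and the left_anti join; table and column names are made up):
from pyspark.sql import SparkSession
from pyspark.sql.functions import collect_list, struct

spark = SparkSession.builder.getOrCreate()

orders = spark.createDataFrame(
    [(1, "book", 10.0), (1, "pen", 2.0), (2, "lamp", 25.0)],
    ["customer_id", "item", "price"])
blocked = spark.createDataFrame([(2,)], ["customer_id"])

# collect_list over structs (Spark 1.6 allowed only primitives):
baskets = orders.groupBy("customer_id").agg(collect_list(struct("item", "price")).alias("basket"))

# left_anti join: keep only orders whose customer has no match in blocked:
allowed = orders.join(blocked, on="customer_id", how="left_anti")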
I am trying to find a way to extract the table names from a Spark SQL query.
The answer given here is in Scala: How to get table names from SQL query?
I want to convert this to PySpark.
For that, I want to import org.apache.spark.sql.catalyst.analysis.UnresolvedRelation (or its equivalent) into PySpark.
Can this be done?
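One possible direction, sketched with heavy caveats: the Catalyst parser can be reached from PySpark through the internal _jsparkSession py4j handle (an unsupported, version-dependent API), and the parsed plan contains one UnresolvedRelation node per referenced table:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

query = "select * from db.tbl a join db2.other_tbl b on a.id = b.id"

# Internal handle to the JVM SparkSession; not a public, stable API.
jplan = spark._jsparkSession.sessionState().sqlParser().parsePlan(query)

# The unresolved plan's string rendering names each UnresolvedRelation;
# its exact format differs across Spark versions, so inspect it (or walk
# the plan via py4j) to pull out the table names.
print(jplan.toString())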