Issues with SparkSQL (Spark and Hive connectivity) - scala

I am trying to retrieve data from a Hive database in Spark. Even though there is data in the database (I checked it with Hive), running a query from Spark returns no rows (it does return the column information, though).
I have copied the hive-site.xml file into the Spark configuration folder, as required.
Imports:
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.spark.sql
import org.apache.spark.storage.StorageLevel
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.hive.HiveContext
Creating a Spark session:
val spark = SparkSession.builder().appName("Reto").config("spark.sql.warehouse.dir", "hive_warehouse_hdfs_path").enableHiveSupport().getOrCreate()
spark.sql("show databases").show()
Getting data:
spark.sql("USE retoiabd")
val churn = spark.sql("SELECT count(*) FROM churn").show()
Output:
count(1) = 0

After checking with our teacher, it turned out the issue was with how the tables themselves were created in Hive.
We created the table like this:
CREATE TABLE name (columns)
Instead of like this:
CREATE EXTERNAL TABLE name (columns)
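For reference, this is roughly what the external-table version might look like from the same Spark session. The column list and HDFS location below are placeholders, since the real schema and path are not in the question, and the managed churn table is assumed to be empty before the DROP:
// Minimal sketch: recreate the table as EXTERNAL, pointing at the existing data
spark.sql("USE retoiabd")
spark.sql("DROP TABLE IF EXISTS churn")
spark.sql("""
  CREATE EXTERNAL TABLE churn (
    customer_id STRING,
    churned     INT
  )
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  LOCATION '/path/to/existing/churn/data'
""")
spark.sql("SELECT count(*) FROM churn").show()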

Related

PySpark dataframe to Hive table with partitions

I typically use the code below to write a PySpark DataFrame into a Hive table. I have a column pxn_dt which will be used to partition the table.
How can I modify the code so that it adds new partitions to the table (for the new month) the next time I run the script?
from pyspark.sql import SparkSession
from pyspark.sql import SQLContext
from pyspark.sql.functions import *
spark = SparkSession.builder.enableHiveSupport().getOrCreate()
sqlContext = SQLContext(spark.sparkContext)
df.createOrReplaceTempView("mytempTable")
sqlContext.sql("create table my_db.table from mytempTable")
I'm trying to use the below line instead but it doesn't seem to work.
sqlContext.sql("create table my_db.table from mytempTable partitioned by(pxn_dt)")

Pyspark - Looking to apply SQL queries to pyspark dataframes

Disclaimer: I'm very new to pyspark and this question might not be appropriate.
I've seen the following code online:
# Get the id, age where age = 22 in SQL
spark.sql("select id, age from swimmers where age = 22").show()
Now, I've tried to pivot using pyspark with the following code:
complete_dataset.createOrReplaceTempView("df")
temp = spark.sql("SELECT core_id from df")
This is the error I'm getting:
'java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient;'
I figured this would be straightforward but I can't seem to find the solution. Is this possible to do in pyspark?
NOTE: I am on an EMR Cluster using a Pyspark notebook.
In PySpark you can read a MySQL table (assuming you are using MySQL) and create a DataFrame:
jdbc_url = 'jdbc:mysql://{}:{}/{}?zeroDateTimeBehavior=CONVERT_TO_NULL'.format(
    'host',
    'port',
    'db',
)
table_df = spark.read.jdbc(
    url=jdbc_url,
    table='table_name',
    properties={'user': 'username', 'password': 'password'},
).select("column_name1", "column_name2")
Here table_df is the DataFrame. Then you can run whatever operations you need on it, such as filter:
table_df.filter(table_df.column_name1 == 'abc').show()
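Separately from the JDBC route above, if the data is already in a DataFrame, a temp view does not need the Hive metastore at all. A sketch (in Scala; the PySpark equivalent is analogous), assuming the error comes from the session trying to reach a misconfigured metastore, with a placeholder data source:
import org.apache.spark.sql.SparkSession

// No enableHiveSupport(): spark.sql uses the in-memory catalog,
// which is all that temp views require
val spark = SparkSession.builder().appName("TempViewOnly").getOrCreate()

// placeholder source; complete_dataset in the question is already a DataFrame
val completeDataset = spark.read.parquet("/path/to/data")

completeDataset.createOrReplaceTempView("df")
spark.sql("SELECT core_id FROM df").show()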

Spark DataFrame turns empty after writing to table

I'm having some concerns regarding the behaviour of dataframes after writing them to Hive tables.
Context:
I run a Spark Scala (version 2.2.0.2.6.4.105-1) job through spark-submit in my production environment, which has Hadoop 2.
I do multiple computations and store some intermediate data to Hive ORC tables; after storing a table, I need to re-use the dataframe to compute a new dataframe to be stored in another Hive ORC table.
E.g.:
// dataframe with ~10 million records
val df = prev_df.filter(some_filters)
val df_temp_table_name = "temp_table"
val df_table_name = "table"
sql("SET hive.exec.dynamic.partition = true")
sql("SET hive.exec.dynamic.partition.mode = nonstrict")
df.createOrReplaceTempView(df_temp_table_name)
sql(s"""INSERT OVERWRITE TABLE $df_table_name PARTITION(partition_timestamp)
SELECT * FROM $df_temp_table_name """)
These steps always work and the table is properly populated with the correct data and partitions.
After this, I need to use the just computed dataframe (df) to update another table. So I query the table to be updated into dataframe df2, then I join df with df2, and the result of the join needs to overwrite the table of df2 (a plain, non-partitioned table).
val table_name_to_be_updated = "table2"
// Query the table to be updated
val df2 = sql(s"SELECT * FROM $table_name_to_be_updated")
val df3 = df.join(df2).filter(some_filters).withColumn(something)
val temp = "temp_table2"
df3.createOrReplaceTempView(temp)
sql(s"""INSERT OVERWRITE TABLE $table_name_to_be_updated
SELECT * FROM $temp """)
At this point, df3 is always found empty, so the resulting Hive table is always empty as well. This happens even when I .persist() it to keep it in memory.
When testing with spark-shell, I have never encountered the issue. This happens only when the flow is scheduled in cluster-mode under Oozie.
What do you think might be the issue? Do you have any advice on approaching a problem like this with efficient memory usage?
I don't understand if it's the first df that turns empty after writing to a table, or if the issue is because I first query and then try to overwrite the same table.
Thank you very much in advance and have a great day!
Edit:
Previously, df was computed in a separate script and then inserted into its own table. In a second script, that table was queried into a new variable df; then table_to_be_updated was also queried and stored in a variable, say old_df2. The two were then joined and transformed into a new variable df3, which was then inserted with overwrite into table_to_be_updated.
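One pattern often suggested for this kind of cycle (a sketch, not necessarily the author's eventual fix): because df3 is evaluated lazily and its lineage reaches back to table2, the final INSERT OVERWRITE ends up re-reading the very table it is truncating. Materializing df3 into a staging table first breaks that cycle; the staging table name below is made up:
// Write df3 out first, forcing it to be computed from the old contents of table2
val stagingTable = "my_db.table2_staging"   // made-up name
df3.write.mode("overwrite").saveAsTable(stagingTable)

// The overwrite now reads from the staging table, not from table2 itself
sql(s"""INSERT OVERWRITE TABLE $table_name_to_be_updated
        SELECT * FROM $stagingTable""")

// Optional cleanup once the overwrite has succeeded
sql(s"DROP TABLE IF EXISTS $stagingTable")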

Sort in descending order, using hive table in spark scala

I have a Hive table with account numbers and most recent updated dates. Not every account is updated each day, so I can't simply select all records from a certain day. I need to group by account number and then sort in descending order to take the most recent 2 days for each account. My script so far:
sc.setLogLevel("ERROR")
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
import org.apache.spark.sql.functions._
import sqlContext.implicits._
val df1 = sqlContext.sql("FROM mydb.mytable SELECT account_num, last_updated")
val DFGrouped = df1.groupBy("account_num").orderBy(desc("last_updated"))
I'm getting error on the orderBy:
value orderBy is not a member of org.apache.spark.sql.GroupedData
Any idea on what I should be doing here?
Plain grouping will not work here, because this is a form of the top-N-per-group problem.
You need to use Spark SQL window functions, in particular, rank() with partition by account ID and order by date descending, followed by selecting the rows with rank <= 2.
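For illustration, a sketch of that window-function approach applied to the df1 from the question, assuming last_updated is the date column to rank on:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, rank}

// Rank rows within each account by date, newest first, then keep the top 2
val w = Window.partitionBy("account_num").orderBy(col("last_updated").desc)

val latestTwo = df1
  .withColumn("rk", rank().over(w))
  .filter(col("rk") <= 2)
  .drop("rk")

latestTwo.show()
If ties on the same date should yield at most two rows per account, row_number() can be swapped in for rank().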

spark query execution time

I have a local single-node Hadoop setup with Hive installed, and some Hive tables stored in HDFS. I have configured Hive with a MySQL metastore. Now I have installed Spark and I am running some queries over the Hive tables like this (in Scala):
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
val result = hiveContext.sql("SELECT * FROM USERS")
result.show
Do you know how to configure Spark to show the execution time of the query? By default it is not shown.
Use spark.time().
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
val result = hiveContext.sql("SELECT * FROM USERS")
spark.time(result.show)
https://db-blog.web.cern.ch/blog/luca-canali/2017-03-measuring-apache-spark-workload-metrics-performance-troubleshooting
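If the installed Spark version does not provide spark.time, a plain wall-clock timer around the action is a simple fallback (this times the query execution plus the show):
// Measure the elapsed time of the action with System.nanoTime
val start = System.nanoTime()
hiveContext.sql("SELECT * FROM USERS").show()
val elapsedMs = (System.nanoTime() - start) / 1e6
println(s"Query + show took $elapsedMs ms")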