Spark: strange NullPointerException when extracting data from PostgreSQL

I'm working with PostgreSQL 9.6 and Spark 2.0.0.
I want to create a DataFrame from a PostgreSQL table, as follows:
val query =
"""(
SELECT events.event_facebook_id,
places.placeid, places.likes as placelikes,
artists.facebookId, artists.likes as artistlikes
FROM events
LEFT JOIN eventsplaces on eventsplaces.event_id = events.event_facebook_id
LEFT JOIN places on eventsplaces.event_id = places.facebookid
LEFT JOIN eventsartists on eventsartists.event_id = events.event_facebook_id
LEFT JOIN artists on eventsartists.artistid = artists.facebookid) df"""
The query is valid (if I run it in psql, I don't get any error), but with Spark,
if I execute the following code, I get a NullPointerException:
sqlContext
  .read
  .format("jdbc")
  .options(
    Map(
      "url" -> claudeDatabaseUrl,
      "dbtable" -> query))
  .load()
  .show()
If, in the query, I replace artists.facebookId with another column such as artists.description (which, unlike facebookId, can be null), the exception disappears.
I find this very strange, any idea?

You have two different spellings of facebookId in your query: artists.facebook[I]d and artists.facebook[i]d.
Please try to use the correct one consistently.
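For illustration, a corrected version of the subquery with the identifier spelled the same way everywhere (assuming the column was created unquoted, so PostgreSQL stores it folded to lower case as facebookid):
val query =
  """(
  SELECT events.event_facebook_id,
         places.placeid, places.likes AS placelikes,
         artists.facebookid, artists.likes AS artistlikes
  FROM events
  LEFT JOIN eventsplaces ON eventsplaces.event_id = events.event_facebook_id
  LEFT JOIN places ON eventsplaces.event_id = places.facebookid
  LEFT JOIN eventsartists ON eventsartists.event_id = events.event_facebook_id
  LEFT JOIN artists ON eventsartists.artistid = artists.facebookid) df"""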

Related

How to execute hql file in spark with arguments

I have an HQL file which accepts several arguments, and in a standalone Spark application I am calling this HQL script to create a DataFrame.
This is a sample of the HQL code from my script:
select id , name, age, country , created_date
from ${db1}.${table1} a
inner join ${db2}.${table2} b
on a.id = b.id
And this is how I am calling it in my Spark script:
import scala.io.Source
val queryFile = "path/to/my/file"
val db1 = "cust_db"
val db2 = "cust_db2"
val table1 = "customer"
val table2 = "products"
val query = Source.fromFile(queryFile).mkString
val df = spark.sql(query)
When I use it this way, I am getting:
org.apache.spark.sql.catalyst.parser.ParseException
Is there a way to pass arguments directly to my HQL file and then create a DataFrame from the Hive code?
Parameters can be injected with code like this:
val parametersMap = Map("db1" -> db1, "db2" -> db2, "table1" -> table1, "table2" -> table2)
val injectedQuery = parametersMap.foldLeft(query)((acc, cur) => acc.replace("${" + cur._1 + "}", cur._2))
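Putting the pieces together, a minimal sketch (assuming queryFile, db1, db2, table1 and table2 are defined as in the question and spark is an active SparkSession):
import scala.io.Source

// Read the raw HQL text containing the ${db1}, ${table1}, ... placeholders
val rawQuery = Source.fromFile(queryFile).mkString

// Substitute every ${key} placeholder with its value
val parametersMap = Map("db1" -> db1, "db2" -> db2, "table1" -> table1, "table2" -> table2)
val injectedQuery = parametersMap.foldLeft(rawQuery)((acc, cur) =>
  acc.replace("${" + cur._1 + "}", cur._2))

// Only now hand the fully substituted SQL to Spark
val df = spark.sql(injectedQuery)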

Spark SQL using Scala

val scc = spark.read.jdbc(url, table, properties)
val d = scc.createOrReplaceTempView("k")
spark.sql("select * from k").show()
If you observe, in line 1 we are reading the complete table and then in line 3 we are fetching the results with the desired query. Reading the complete table and then querying it takes a lot of time. Can't we execute our query while establishing the connection? Please do help me if you have any prior knowledge about this.
Check this out.
String dbTable =
  "(select emp_no, concat_ws(' ', first_name, last_name) as full_name from employees) as employees_name";
Dataset<Row> jdbcDF =
  sparkSession.read().jdbc(CONNECTION_URL, dbTable, connectionProperties);
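In Scala the same idea would look roughly like this (a sketch reusing the url variable from the question; the credentials in the Properties object are placeholders):
import java.util.Properties

val connectionProperties = new Properties()
connectionProperties.setProperty("user", "username")      // placeholder
connectionProperties.setProperty("password", "password")  // placeholder

// The parenthesised query plus an alias takes the place of a table name,
// so the database runs the select and Spark only receives its result set.
val dbTable =
  "(select emp_no, concat_ws(' ', first_name, last_name) as full_name from employees) as employees_name"

val jdbcDF = spark.read.jdbc(url, dbTable, connectionProperties)
jdbcDF.show()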

Spark: Create temporary table by executing sql query on temporary tables

I am using Spark and I would like to know: how can I create a temporary table named C by executing a SQL query on tables A and B?
sqlContext
  .read.json(file_name_A)
  .createOrReplaceTempView("A")
sqlContext
  .read.json(file_name_B)
  .createOrReplaceTempView("B")
val tableQuery = "(SELECT A.id, B.name FROM A INNER JOIN B ON A.id = B.fk_id) C"
sqlContext.read
  .format(SQLUtils.FORMAT_JDBC)
  .options(SQLUtils.CONFIG())
  .option("dbtable", tableQuery)
  .load()
You need to run your query and save the result as a temp table:
sqlContext.sql(tableQuery).createOrReplaceTempView("C")
For permanent storage in an external table you can use JDBC:
val prop = new java.util.Properties
prop.setProperty("driver", "com.mysql.jdbc.Driver")
prop.setProperty("user", "vaquar")
prop.setProperty("password", "khan")
//jdbc mysql url - destination database is named "temp"
val url = "jdbc:mysql://localhost:3306/temp"
//destination database table
val dbtable = "sample_data_table"
//write data from spark dataframe to database
df.write.mode("append").jdbc(url, dbtable, prop)
https://docs.databricks.com/spark/latest/data-sources/sql-databases.html
http://spark.apache.org/docs/latest/sql-programming-guide.html#saving-to-persistent-tables
sqlContext.read.json(file_name_A).createOrReplaceTempView("A")
sqlContext.read.json(file_name_B).createOrReplaceTempView("B")
val tableQuery = "SELECT A.id, B.name FROM A INNER JOIN B ON A.id = B.fk_id"
sqlContext.sql(tableQuery).createOrReplaceTempView("C")
Try the above code; it will work. Note that sqlContext.sql expects a plain SELECT statement, so the "(...) C" wrapper used for the JDBC dbtable option has to be dropped here.
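Once C is registered, it can be queried like any other view, for example:
sqlContext.sql("SELECT * FROM C").show()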

Databricks Spark-Redshift: Sortkeys not working

I am trying to add the sort keys from Scala code by following the instructions here: https://github.com/databricks/spark-redshift
df.write
.format(formatRS)
.option("url", connString)
.option("jdbcdriver", jdbcDriverRS)
.option("dbtable", table)
.option("tempdir", tempDirRS + table)
.option("usestagingtable", "true")
.option("diststyle", "KEY")
.option("distkey", "id")
.option("sortkeyspec", "INTERLEAVED SORTKEY (id,timestamp)")
.mode(mode)
.save()
The sort keys seem to be implemented incorrectly, because when I check the table info I get:
sort key = INTERLEAVED
I need the right way to add the sort keys.
There is nothing wrong with the implementation; the problem comes from the "checking query", which returns
sort key = interleaved
which is confusing enough to make you believe that something is wrong.
So if you need to check the interleaved sort keys, you should run this query:
select tbl as tbl_id, stv_tbl_perm.name as table_name,
col, interleaved_skew, last_reindex
from svv_interleaved_columns, stv_tbl_perm
where svv_interleaved_columns.tbl = stv_tbl_perm.id
and interleaved_skew is not null;
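If you want to run that check from Scala as well, one option is to read it through Spark's generic JDBC source, passing the query as a subquery in the dbtable option (a sketch; redshiftJdbcUrl, user and password are placeholders, and the Redshift or PostgreSQL JDBC driver is assumed to be on the classpath):
val checkQuery =
  """(select tbl as tbl_id, stv_tbl_perm.name as table_name,
    |        col, interleaved_skew, last_reindex
    |   from svv_interleaved_columns, stv_tbl_perm
    |  where svv_interleaved_columns.tbl = stv_tbl_perm.id
    |    and interleaved_skew is not null) as sortkey_check""".stripMargin

val sortKeyInfo = spark.read
  .format("jdbc")
  .option("url", redshiftJdbcUrl)   // placeholder Redshift JDBC URL
  .option("dbtable", checkQuery)
  .option("user", user)             // placeholder credentials
  .option("password", password)
  .load()

sortKeyInfo.show()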

Column name cannot be resolved in SparkSQL join

I'm not sure why this is happening. In PySpark, I read in two dataframes and print out their column names and they are as expected, but when I do a SQL join I get an error that the column name cannot be resolved given the inputs. I have simplified the merge just to get it to work, but I will need to add in more join conditions, which is why I'm using SQL (I will be adding: "and b.mnvr_bgn < a.idx_trip_id and b.mnvr_end > a.idx_trip_data"). It appears that the column 'device_id' is being renamed to '_col7' in the df mnvr_temp_idx_prev_temp.
mnvr_temp_idx_prev = mnvr_3.select('device_id', 'mnvr_bgn', 'mnvr_end')
print mnvr_temp_idx_prev.columns
['device_id', 'mnvr_bgn', 'mnvr_end']
raw_data_filtered = raw_data.select('device_id', 'trip_id', 'idx').groupby('device_id', 'trip_id').agg(F.max('idx').alias('idx_trip_end'))
print raw_data_filtered.columns
['device_id', 'trip_id', 'idx_trip_end']
raw_data_filtered.registerTempTable('raw_data_filtered_temp')
mnvr_temp_idx_prev.registerTempTable('mnvr_temp_idx_prev_temp')
test = sqlContext.sql('SELECT a.device_id, a.idx_trip_end, b.mnvr_bgn, b.mnvr_end \
FROM raw_data_filtered_temp as a \
INNER JOIN mnvr_temp_idx_prev_temp as b \
ON a.device_id = b.device_id')
Traceback (most recent call last): AnalysisException: u"cannot resolve 'b.device_id' given input columns: [_col7, trip_id, device_id, mnvr_end, mnvr_bgn, idx_trip_end]; line 1 pos 237"
Any help is appreciated!
I would recommend renaming the 'device_id' field in at least one of the data frames. I modified your query just a bit and tested it (in Scala). The query below works:
test = sqlContext.sql("select * FROM raw_data_filtered_temp a INNER JOIN mnvr_temp_idx_prev_temp b ON a.device_id = b.device_id")
[device_id: string, mnvr_bgn: string, mnvr_end: string, device_id: string, trip_id: string, idx_trip_end: string]
Now if you do a 'select *' in the above statement, it will work. But if you try to select 'device_id', you will get the error "Reference 'device_id' is ambiguous". As you can see in the 'test' data frame definition above, it has two fields with the same name (device_id). So to avoid this, I recommend changing the field name in one of the dataframes.
mnvr_temp_idx_prev = mnvr_3.select('device_id', 'mnvr_bgn', 'mnvr_end') \
    .withColumnRenamed("device_id", "device")
raw_data_filtered = raw_data.select('device_id', 'trip_id', 'idx').groupby('device_id', 'trip_id').agg(F.max('idx').alias('idx_trip_end'))
Now use DataFrames or the sqlContext:
//using dataframes with multiple conditions
val test = mnvr_temp_idx_prev.join(raw_data_filtered, $"device" === $"device_id"
  && $"mnvr_bgn" < $"idx_trip_id", "inner")
//in SQL Context
test = sqlContext.sql("select * FROM raw_data_filtered_temp a INNER JOIN mnvr_temp_idx_prev_temp b ON a.device_id = b.device and a.idx_trip_id < b.mnvr_bgn")
The above queries will work for your problem. If your data set is too large, I would recommend not using the '>' or '<' operators in the join condition, as it causes a cross join, which is a costly operation when the data set is large. Instead, use them in a WHERE condition, as in the sketch below.
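For illustration, a sketch of that pattern with the DataFrame API (column names follow the ones used above, and the $ syntax assumes import spark.implicits._, e.g. in spark-shell):
// Equi-join on the key first, then apply the range predicate as a filter
val test = mnvr_temp_idx_prev
  .join(raw_data_filtered, $"device" === $"device_id", "inner")
  .where($"mnvr_bgn" < $"idx_trip_id")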