Unable to create HBase table using Hive query through Spark - Scala

Using the following tutorial: https://hadooptutorial.info/hbase-integration-with-hive/, I was able to do the HBase integration with Hive. After the configuration I was able to successfully create an HBase table through a Hive query with Hive table mapping.
Hive query:
CREATE TABLE upc_hbt(key string, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,value:value")
TBLPROPERTIES ("hbase.table.name" = "upc_hbt");
Spark-Scala:
val createTableHql: String = "CREATE TABLE upc_hbt2(key string, value string) " +
  "STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' " +
  "WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,value:value') " +
  "TBLPROPERTIES ('hbase.table.name' = 'upc_hbt2')"
hc.sql(createTableHql)
But when I execute the same Hive query through Spark it throws the following error:
Exception in thread "main" org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. org.apache.hadoop.hive.ql.metadata.HiveException: Error in loading storage handler.org.apache.hadoop.hive.hbase.HBaseStorageHandler
It seems like during the Hive execution through Spark it can't find the auxpath jar location. Is there any way to solve this problem?
Thank you very much in advance.
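A hedged workaround sketch, not from the original post: the usual cause of "Error in loading storage handler" under Spark is that the hive-hbase-handler and HBase client jars, which Hive picks up via its auxpath, are not on Spark's classpath, so one option is to pass them explicitly at submit time. The jar paths and class name below are placeholders for your environment.
spark-submit \
  --jars /path/to/hive-hbase-handler.jar,/path/to/hbase-client.jar,/path/to/hbase-common.jar \
  --files /path/to/hbase-site.xml \
  --class your.main.Class your-app.jar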

Related

How to get Create Statement of Table in some other database in Spark using JDBC

Problem statement:
I have an Impala database where multiple tables are present.
I am creating a Spark JDBC connection to Impala and loading these tables into a Spark dataframe for my validations like this, which works fine:
val df = spark.read.format("jdbc")
  .option("url", "url")
  .option("dbtable", "tablename")
  .load()
Now the next step, and my actual problem, is that I need to find the create statement which was used to create the tables in Impala itself.
Since I cannot run a command like the one below (it gives an error), is there any way I can fetch the SHOW CREATE TABLE statement for tables present in Impala?
val df = spark.read.format("jdbc")
  .option("url", "url")
  .option("dbtable", "show create table tablename")
  .load()
Perhaps you can use Spark SQL "natively" to execute something like
val createstmt = spark.sql("show create table <tablename>")
The resulting dataframe will have a single column (type string) which contains a complete CREATE TABLE statement.
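A minimal sketch of pulling the statement text out of that dataframe (the single column is read by position, so the sketch does not depend on its name; <tablename> is the same placeholder as above):
val createstmt = spark.sql("show create table <tablename>")
val ddl: String = createstmt.collect().head.getString(0)
println(ddl)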
But if you still choose to go the JDBC route, there is always the option of using the good old JDBC interface directly. Scala can call any Java library, after all...
import java.sql.{Connection, DriverManager, ResultSet, Statement}
val conn: Connection = DriverManager.getConnection("url")
val stmt: Statement = conn.createStatement()
val rs: ResultSet = stmt.executeQuery("show create table <tablename>")
...etc...
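A minimal sketch of consuming that result (assuming the driver returns the generated statement as a single string column, as Spark SQL does):
while (rs.next()) {
  println(rs.getString(1)) // the CREATE TABLE text
}
rs.close()
stmt.close()
conn.close()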

How to determine if source is empty?

I have an ETL process that is using an Athena source. I cannot figure out how to create a dataframe if there is no data yet in the source. I was using the GlueContext:
trans_ddf = glueContext.create_dynamic_frame.from_catalog(
    database=my_db, table_name=my_table, transformation_ctx="trans_ddf")
This fails if there is no data in the source db, because it can't infer the schema.
I also tried using the sql function on the spark session:
has_rows_df = spark.sql("select cast(count(*) as boolean) as hasRows from my_table limit 1")
has_rows = has_rows_df.collect()[0].hasRows
This also fails because it can't infer the schema.
How can I create a data frame so I can determine if the source has any data?
Checking whether has_rows_df.head(1) is empty should do the job, robustly. In PySpark that is len(has_rows_df.head(1)) == 0; the Scala equivalent is has_rows_df.head(1).isEmpty.
See How to check if spark dataframe is empty?

Spark unable to read a database table to a dataframe with jdbc using a query as a table name

I'm trying to read a PostgreSQL/PostGIS table into a Spark 2.0 dataframe like this:
val jdbcUrl = s"jdbc:postgresql://${host}:${port}/${dbName}"
val connectionProperties = new Properties()
connectionProperties.put("user", s"${user}")
connectionProperties.put("password", s"${password}")
connectionProperties.setProperty("Driver", "org.postgresql.Driver")
def readTable(table: String): DataFrame = {
  spark.read.jdbc(jdbcUrl, s"(select st_astext(geom) as geom from ${table}) as t;", connectionProperties)
}
readTable("myschema.mytable")
I get this error:
org.postgresql.util.PSQLException: ERROR: syntax error at or near "WHERE"
I'm pretty sure this is caused by a where clause being added to the query as described in this question.
However, according to the docs this method should work: https://docs.databricks.com/spark/latest/data-sources/sql-databases.html#pushdown-query-to-database-engine
I need to use a query as a table name because I need to get the PostGIS geometry as a WKT string. My question is, has anyone found a way to read a table with a query as a table name like this? Or does anyone see anything wrong with my code? Or perhaps another way? Thanks.
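One hedged guess, not from the original post: the trailing semicolon inside the subquery alias. Spark wraps the dbtable expression in its own SELECT and appends a WHERE clause for schema inference and filter pushdown, so a ; inside the parenthesised query ends the statement early and can produce exactly this kind of syntax error. A minimal sketch with the semicolon dropped:
def readTable(table: String): DataFrame = {
  spark.read.jdbc(jdbcUrl, s"(select st_astext(geom) as geom from ${table}) as t", connectionProperties)
}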

Read from a hive table and write back to it using spark sql

I am reading a Hive table using Spark SQL and assigning it to a Scala val:
val x = sqlContext.sql("select * from some_table")
Then I am doing some processing with the dataframe x and finally coming up with a dataframe y, which has the exact same schema as the table some_table.
Finally, I am trying to insert overwrite the dataframe y into the same Hive table some_table:
y.write.mode(SaveMode.Overwrite).insertInto("some_table")
Then I am getting the error
org.apache.spark.sql.AnalysisException: Cannot insert overwrite into table that is also being read from
I tried creating an insert SQL statement and firing it using sqlContext.sql(), but it too gave me the same error.
Is there any way I can bypass this error? I need to insert the records back to the same table.
Hi, I tried doing as suggested, but I am still getting the same error:
val x = sqlContext.sql("select * from incremental.test2")
val y = x.limit(5)
y.registerTempTable("temp_table")
val dy = sqlContext.table("temp_table")
dy.write.mode("overwrite").insertInto("incremental.test2")
scala> dy.write.mode("overwrite").insertInto("incremental.test2")
org.apache.spark.sql.AnalysisException: Cannot insert overwrite into table that is also being read from.;
Actually you can also use checkpointing to achieve this. Since it breaks data lineage, Spark is not able to detect that you are reading and overwriting in the same table:
sqlContext.sparkContext.setCheckpointDir(checkpointDir)
val ds = sqlContext.sql("select * from some_table").checkpoint()
ds.write.mode("overwrite").saveAsTable("some_table")
You should first save your DataFrame y in a temporary table
y.write.mode("overwrite").saveAsTable("temp_table")
Then you can overwrite rows in your target table
val dy = sqlContext.table("temp_table")
dy.write.mode("overwrite").insertInto("some_table")
You should first save your DataFrame y as a Parquet file:
y.write.parquet("temp_table")
Then load it back:
val parquetFile = sqlContext.read.parquet("temp_table")
And finally insert your data into your table:
parquetFile.write.insertInto("some_table")
In the context of Spark 2.2:
This error means that the process is reading from and writing to the same table.
Normally this should work, as the process writes to a .hiveStaging... directory first.
The error occurs with the saveAsTable method, as it overwrites the entire table instead of individual partitions.
It should not occur with the insertInto method, which overwrites partitions rather than the table.
One reason why this happens is that the Hive table has the following Spark TBLPROPERTIES in its definition. The problem goes away for the insertInto method if you remove them (a sketch of the unset statement follows the list):
'spark.sql.partitionProvider'
'spark.sql.sources.provider'
'spark.sql.sources.schema.numPartCols'
'spark.sql.sources.schema.numParts'
'spark.sql.sources.schema.part.0'
'spark.sql.sources.schema.part.1'
'spark.sql.sources.schema.part.2'
'spark.sql.sources.schema.partCol.0'
'spark.sql.sources.schema.partCol.1'
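A minimal sketch of dropping those properties in Hive (the table name is a placeholder; list only the properties your table actually carries):
ALTER TABLE some_table UNSET TBLPROPERTIES IF EXISTS (
  'spark.sql.partitionProvider', 'spark.sql.sources.provider',
  'spark.sql.sources.schema.numPartCols', 'spark.sql.sources.schema.numParts',
  'spark.sql.sources.schema.part.0', 'spark.sql.sources.schema.part.1',
  'spark.sql.sources.schema.part.2', 'spark.sql.sources.schema.partCol.0',
  'spark.sql.sources.schema.partCol.1');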
https://querydb.blogspot.com/2019/07/read-from-hive-table-and-write-back-to.html
When we upgraded our HDP to 2.6.3, Spark was updated from 2.2 to 2.3, which resulted in the error below:
Caused by: org.apache.spark.sql.AnalysisException: Cannot overwrite a path that is also being read from.;
at org.apache.spark.sql.execution.command.DDLUtils$.verifyNotReadPath(ddl.scala:906)
This error occurs for jobs wherein we are reading from and writing to the same path, like jobs with SCD logic.
Solution:
Set --conf "spark.sql.hive.convertMetastoreOrc=false" (a session-level sketch follows the link below),
or update the job so that it writes data to a temporary table, then reads from the temporary table and inserts it into the final table.
https://querydb.blogspot.com/2020/09/orgapachesparksqlanalysisexception.html
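A minimal sketch of the first option, set at session-build time instead of on the command line (the app name is a placeholder):
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("scd-job") // placeholder
  .config("spark.sql.hive.convertMetastoreOrc", "false")
  .enableHiveSupport()
  .getOrCreate()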
Read the data from the Hive table in Spark:
import org.apache.hadoop.io.WritableComparable
import org.apache.hadoop.mapreduce.InputFormat
import org.apache.hive.hcatalog.data.HCatRecord
import org.apache.hive.hcatalog.mapreduce.HCatInputFormat
val hconfig = new org.apache.hadoop.conf.Configuration()
HCatInputFormat.setInput(hconfig, "dbname", "tablename")
val inputFormat = (new HCatInputFormat).asInstanceOf[InputFormat[WritableComparable[_], HCatRecord]].getClass
val data = sc.newAPIHadoopRDD(hconfig, inputFormat, classOf[WritableComparable[_]], classOf[HCatRecord])
You'll also get the error "Cannot overwrite a path that is also being read from" in a case where you are doing this:
You are doing an "insert overwrite" to a Hive TABLE "A" from a VIEW "V" (that executes your logic),
and that VIEW also references the same TABLE "A". I found this out the hard way, as the VIEW is deeply nested code that was querying "A" as well. Bummer.
It is like cutting the very branch on which you are sitting :-(
What you need to keep in mind before doing the below is that the Hive table you are overwriting should have been created by Hive DDL, not by
Spark (df.write.saveAsTable("<table_name>")).
If the above is not true, this won't work.
I tested this in Spark 2.3.0.
val tableReadDf = spark.sql("select * from <dbName>.<tableName>")
val updatedDf = tableReadDf.<transformation> // any update/delete/addition
updatedDf.createOrReplaceTempView("myUpdatedTable")
spark.sql("""with tempView as (select * from myUpdatedTable) insert overwrite table
  <dbName>.<tableName> <partition><partition_columns> select * from tempView""")
This is a good solution for me:
Extract the RDD and schema from the DataFrame.
Create a new clone DataFrame.
Overwrite the table.
private def overWrite(df: DataFrame): Unit = {
  val schema = df.schema
  val rdd = df.rdd
  val dfForSave = spark.createDataFrame(rdd, schema)
  dfForSave.write
    .mode(SaveMode.Overwrite)
    .insertInto(s"${tableSource.schema}.${tableSource.table}")
}

ApacheSpark: Unsupported parquet datatype

I'm trying to read a Hive table with Spark SQL's HiveContext. But when I submit the job, I get the following error:
Exception in thread "main" java.lang.RuntimeException: Unsupported parquet datatype optional fixed_len_byte_array(11) amount (DECIMAL(24,7))
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.parquet.ParquetTypesConverter$.toPrimitiveDataType(ParquetTypes.scala:77)
at org.apache.spark.sql.parquet.ParquetTypesConverter$.toDataType(ParquetTypes.scala:131)
at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$convertToAttributes$1.apply(ParquetTypes.scala:383)
at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$convertToAttributes$1.apply(ParquetTypes.scala:380)
The column type is DECIMAL(24,7). I've changed the column type with HiveQL, but it doesn't work. I've also tried casting to another decimal type in Spark SQL like below:
val results = hiveContext.sql("SELECT cast(amount as DECIMAL(18,7)), number FROM dmp_wr.test")
But I got the same error. My code looks like this:
def main(args: Array[String]) {
  val conf: SparkConf = new SparkConf().setAppName("TColumnModify")
  val sc: SparkContext = new SparkContext(conf)
  val vectorAcc = sc.accumulator(new MyVector())(VectorAccumulator)
  val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
  val results = hiveContext.sql("SELECT amount, number FROM dmp_wr.test")
How can I solve this problem? Thank you for your response.
Edit 1: I found the Spark source line which throws the exception. It looks like this:
if(originalType == ParquetOriginalType.DECIMAL && decimalInfo.getPrecision <= 18)
So I created a new table which has the column as DECIMAL(18,7) and my code works as I expected.
Then I dropped the table and created a new one which has the column as DECIMAL(24,7); after that I changed the column type with
alter table qwe change amount amount decimal(18,7) and I can see it is changed to DECIMAL(18,7), but Spark
doesn't accept the change. It still reads the column type as DECIMAL(24,7) and gives the same error.
What can be the main reason?
alter table qwe change amount amount decimal(18,7)
Alter table commands in Hive do not touch the actual data stored in Hive; they only change the metadata in the Hive Metastore. This is very different from "alter table" commands in normal databases (like MySQL).
When Spark reads data from Parquet files, it tries to use the metadata in the actual Parquet file to deserialize the data, which will still give it DECIMAL(24,7).
There are two solutions to your problem:
1. Try out a new version of Spark, built from trunk. See https://issues.apache.org/jira/browse/SPARK-6777, which totally changes this part of the code (it will only be in Spark 1.5 though), so hopefully you won't see the same problem again.
2. Convert the data in your table manually. You can use a Hive query like INSERT OVERWRITE TABLE new_table SELECT * FROM old_table to do it (a sketch follows below).
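A minimal sketch of that second option through the same HiveContext (the new table name and the type of the number column are assumptions; adjust to your actual schema):
hiveContext.sql("CREATE TABLE dmp_wr.test_d18 (amount DECIMAL(18,7), number INT) STORED AS PARQUET")
hiveContext.sql("INSERT OVERWRITE TABLE dmp_wr.test_d18 SELECT CAST(amount AS DECIMAL(18,7)), number FROM dmp_wr.test")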
