I'm trying to read a Hive table with the Spark SQL HiveContext, but when I submit the job I get the following error:
Exception in thread "main" java.lang.RuntimeException: Unsupported parquet datatype optional fixed_len_byte_array(11) amount (DECIMAL(24,7))
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.parquet.ParquetTypesConverter$.toPrimitiveDataType(ParquetTypes.scala:77)
at org.apache.spark.sql.parquet.ParquetTypesConverter$.toDataType(ParquetTypes.scala:131)
at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$convertToAttributes$1.apply(ParquetTypes.scala:383)
at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$convertToAttributes$1.apply(ParquetTypes.scala:380)
The column type is DECIMAL(24,7). I've changed the column type with HiveQL, but it doesn't work. I've also tried casting to another decimal type in Spark SQL, like below:
val results = hiveContext.sql("SELECT cast(amount as DECIMAL(18,7)), number FROM dmp_wr.test")
But I get the same error. My code looks like this:
def main(args: Array[String]) {
  val conf: SparkConf = new SparkConf().setAppName("TColumnModify")
  val sc: SparkContext = new SparkContext(conf)
  val vectorAcc = sc.accumulator(new MyVector())(VectorAccumulator)
  val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
  val results = hiveContext.sql("SELECT amount, number FROM dmp_wr.test")
}
How can I solve this problem? Thank you for your response.
Edit 1: I found the Spark source line that throws the exception. It looks like this:
if(originalType == ParquetOriginalType.DECIMAL && decimalInfo.getPrecision <= 18)
So, I created a new table with the column as DECIMAL(18,7) and my code works as expected.
Then I dropped the table and created a new one with the column as DECIMAL(24,7), after which I changed the column type with
alter table qwe change amount amount decimal(18,7). I can see it has changed to DECIMAL(18,7), but Spark
doesn't pick up the change. It still reads the column type as DECIMAL(24,7) and gives the same error.
What can be the main reason?
alter table qwe change amount amount decimal(18,7)
Alter table commands in Hive do not touch the actual data stored in Hive. They only change the metadata in the Hive Metastore. This is very different from "alter table" commands in normal databases (like MySQL).
When Spark reads data from Parquet files, it tries to use the metadata embedded in the actual Parquet file to deserialize the data, which still reports DECIMAL(24,7).
There are 2 solutions to your problem:
1. Try out a new version of Spark built from trunk. See https://issues.apache.org/jira/browse/SPARK-6777, which totally changes this part of the code (it will only be in Spark 1.5 though), so hopefully you won't see the same problem again.
2. Convert the data in your table manually. You can use a Hive query like "INSERT OVERWRITE TABLE new_table SELECT * FROM old_table" to do it, as sketched below.
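For example, a minimal sketch of option 2 as HiveQL run in the Hive CLI / beeline (the new table name and the INT type of the number column are assumptions, not taken from your schema):
-- Create a replacement table whose amount column fits the precision Spark 1.x supports,
-- then rewrite the data so the new Parquet files carry DECIMAL(18,7) in their own metadata.
CREATE TABLE dmp_wr.test_dec18 (amount DECIMAL(18,7), number INT) STORED AS PARQUET;
INSERT OVERWRITE TABLE dmp_wr.test_dec18
SELECT CAST(amount AS DECIMAL(18,7)), number FROM dmp_wr.test;
Running the rewrite in Hive rather than through the HiveContext matters here, because Spark 1.x fails on the old DECIMAL(24,7) Parquet schema before it can even apply the cast.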
I'm using pyspark with the HiveWarehouseConnector in an HDP3 cluster.
There was a change in the schema, so I updated my target table using the "alter table" command and added the new columns, by default, to its last positions.
Now I'm trying to use the following code to save a Spark dataframe to it, but the columns in the dataframe are in alphabetical order and I'm getting the error message below:
df = spark.read.json(df_sub_path)
hive.setDatabase('myDB')
df.write.format("com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector").mode('append').option('table','target_table').save()
and the error message traced to:
Caused by: java.lang.IllegalArgumentException: Hive column:
column_x cannot be found at same index: 77 in
dataframe. Found column_y. Aborting as this may lead to
loading of incorrect data.
Is there any dynamic way of appending the dataframe to correct location in the hive table? It is important as I expect more columns to be added to the target table.
You can read the target table with no rows to get its columns. Then, using select, you can order the columns correctly and append:
# read an empty slice of the target table just to get its column order
target = hive.executeQuery('select * from target_Table where 1=0')
# `source` is the dataframe you want to append (df in the question)
test = spark.createDataFrame(source.collect())
# reorder the columns to match the target table before writing
test = test.select(target.columns)
I have an ETL process that uses an Athena source. I cannot figure out how to create a data frame if there is no data yet in the source. I was using the GlueContext:
trans_ddf = glueContext.create_dynamic_frame.from_catalog(
database=my_db, table_name=my_table, transformation_ctx="trans_ddf")
This fails if there is no data in the source db, because it can't infer the schema.
I also tried using the sql function on the spark session:
has_rows_df = spark.sql("select cast(count(*) as boolean) as hasRows from my_table limit 1")
has_rows = has_rows_df.collect()[0].hasRows
This also fails because it can't infer the schema.
How can I create a data frame so I can determine if the source has any data?
has_rows_df.head(1).isEmpty
should do the job, robustly.
See How to check if spark dataframe is empty?
I'm trying to read a postgres/postgis table into a Spark 2.0 dataframe like this:
val jdbcUrl = s"jdbc:postgresql://${host}:${port}/${dbName}"
val connectionProperties = new Properties()
connectionProperties.put("user", s"${user}")
connectionProperties.put("password", s"${password}")
connectionProperties.setProperty("Driver", "org.postgresql.Driver")
def readTable(table: String): DataFrame = {
  spark.read.jdbc(jdbcUrl, s"(select st_astext(geom) as geom from ${table}) as t;", connectionProperties)
}
readTable("myschema.mytable")
I get this error:
org.postgresql.util.PSQLException: ERROR: syntax error at or near "WHERE"
I'm pretty sure this is caused by a where clause being added to the query as described in this question.
However according to the docs this method should work https://docs.databricks.com/spark/latest/data-sources/sql-databases.html#pushdown-query-to-database-engine
I need to use a query as a table name because I need to get the PostGIS geometry as a WKT string. My question is: has anyone found a way to read a table with a query as a table name like this? Or does anyone see anything wrong with my code? Or perhaps another way? Thanks.
When I run the following:
val df1 = sqlContext.read.format("orc").load(myPath)
df1.columns.map(m => println(m))
The columns are printed as '_col0', '_col1', '_col2' etc., as opposed to their real names such as 'empno', 'name', 'deptno'.
When I run 'describe mytable' in Hive it prints the column names correctly, but when I run 'orcfiledump' it shows _col0, _col1, _col2 as well. Do I have to specify 'schema on read' or something? If yes, how do I do that in Spark/Scala?
hive --orcfiledump /apps/hive/warehouse/mydb.db/mytable1
.....
fieldNames: "_col0"
fieldNames: "_col1"
fieldNames: "_col2"
Note: I created the table as follows:
create table mydb.mytable1 (empno int, name VARCHAR(20), deptno int) stored as orc;
Note: This is not a duplicate of this issue (Hadoop ORC file - How it works - How to fetch metadata) because the answer tells me to use 'Hive' & I am already using HiveContext as follows:
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
By the way, I am using my own hive-site.xml, which contains following:
<configuration>
<property>
<name>hive.metastore.uris</name>
<value>thrift://sandbox.hortonworks.com:9083</value>
</property>
</configuration>
I figured out what the problem was. It was the way I was creating the test data. I was under the impression that if I run the following commands:
create table mydb.mytable1 (empno int, name VARCHAR(20), deptno int) stored as orc;
INSERT INTO mydb.mytable1(empno, name, deptno) VALUES (1, 'EMP1', 100);
INSERT INTO mydb.mytable1(empno, name, deptno) VALUES (2, 'EMP2', 50);
INSERT INTO mydb.mytable1(empno, name, deptno) VALUES (3, 'EMP3', 200);
Data would be created in the ORC format at: /apps/hive/warehouse/mydb.db/mytable1
Turns out that's not the case. Even though I specified 'stored as orc', the INSERT statements didn't save the column information. Not sure if that's expected behavior. In any case, it all works now. Apologies for the confusion, but hopefully this will help someone in the future :-)
@DilTeam
Here is the problem: when you write data with Hive 1.x, it does not store the columns' metadata in ORC-formatted files (it's not the same for Parquet etc.). This is fixed in newer Hive (2.x), which stores the column info in the file metadata and lets Spark read the metadata from the file itself.
Here is another option for loading tables written with Hive 1 in Spark:
val table = spark.table("<db.tablename>")
Here spark is the default SparkSession, which fetches the table's information from the Hive metastore.
One more option requires a bit more code and some prerequisite information:
Create a DataFrame with a defined schema over the fetched RDD; this gives you the flexibility to change data types. You can read about it at this link (a sketch follows the link):
https://spark.apache.org/docs/2.2.0/sql-programming-guide.html#programmatically-specifying-the-schema
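A minimal sketch of that approach, using the table from the earlier question (empno int, name VARCHAR(20), deptno int) and its warehouse path; the ORC path and the spark session are assumptions for illustration:
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}
// Read the ORC files directly; with Hive 1.x the columns come back as _col0, _col1, _col2.
val raw = spark.read.format("orc").load("/apps/hive/warehouse/mydb.db/mytable1")
// Re-apply the real schema over the fetched RDD (VARCHAR maps to StringType in Spark).
val schema = StructType(Seq(
  StructField("empno", IntegerType),
  StructField("name", StringType),
  StructField("deptno", IntegerType)))
val withRealNames = spark.createDataFrame(raw.rdd, schema)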
I hope this helps.
I am reading a Hive table using Spark SQL and assigning it to a Scala val:
val x = sqlContext.sql("select * from some_table")
Then I do some processing with the dataframe x and finally come up with a dataframe y, which has the exact same schema as the table some_table.
Finally I try to insert overwrite the y dataframe into the same Hive table some_table:
y.write.mode(SaveMode.Overwrite).saveAsTable().insertInto("some_table")
Then I get the error:
org.apache.spark.sql.AnalysisException: Cannot insert overwrite into table that is also being read from
I tried creating an insert SQL statement and firing it using sqlContext.sql(), but it gave me the same error.
Is there any way I can bypass this error? I need to insert the records back to the same table.
Hi, I tried doing as suggested, but I'm still getting the same error.
val x = sqlContext.sql("select * from incremental.test2")
val y = x.limit(5)
y.registerTempTable("temp_table")
val dy = sqlContext.table("temp_table")
dy.write.mode("overwrite").insertInto("incremental.test2")
scala> dy.write.mode("overwrite").insertInto("incremental.test2")
org.apache.spark.sql.AnalysisException: Cannot insert overwrite into table that is also being read from.;
Actually you can also use checkpointing to achieve this. Since it breaks data lineage, Spark is not able to detect that you are reading and overwriting in the same table:
sqlContext.sparkContext.setCheckpointDir(checkpointDir)
val ds = sqlContext.sql("select * from some_table").checkpoint()
ds.write.mode("overwrite").saveAsTable("some_table")
You should first save your DataFrame y in a temporary table
y.write.mode("overwrite").saveAsTable("temp_table")
Then you can overwrite rows in your target table
val dy = sqlContext.table("temp_table")
dy.write.mode("overwrite").insertInto("some_table")
You should first save your DataFrame y as a parquet file:
y.write.parquet("temp_table")
Then load that parquet file back:
val parquetFile = sqlContext.read.parquet("temp_table")
And finally insert your data into your table:
parquetFile.write.insertInto("some_table")
In the context of Spark 2.2:
This error means that the process is reading from and writing to the same table.
Normally this should work, as the process writes to a .hiveStaging... directory first.
The error occurs with the saveAsTable method, because it overwrites the entire table instead of individual partitions.
It should not occur with the insertInto method, as that overwrites partitions rather than the table.
A reason this happens is that the Hive table has the following Spark TBLProperties in its definition. The problem goes away for the insertInto method if you remove these Spark TBLProperties (see the sketch after the list):
'spark.sql.partitionProvider'
'spark.sql.sources.provider'
'spark.sql.sources.schema.numPartCols'
'spark.sql.sources.schema.numParts'
'spark.sql.sources.schema.part.0'
'spark.sql.sources.schema.part.1'
'spark.sql.sources.schema.part.2'
'spark.sql.sources.schema.partCol.0'
'spark.sql.sources.schema.partCol.1'
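As an illustration (not taken from the linked post), a hedged sketch of removing those properties with Hive's ALTER TABLE ... UNSET TBLPROPERTIES; the table name is a placeholder:
-- Run in the Hive CLI / beeline against the table you are overwriting.
ALTER TABLE mydb.some_table UNSET TBLPROPERTIES IF EXISTS (
  'spark.sql.partitionProvider',
  'spark.sql.sources.provider',
  'spark.sql.sources.schema.numPartCols',
  'spark.sql.sources.schema.numParts',
  'spark.sql.sources.schema.part.0',
  'spark.sql.sources.schema.part.1',
  'spark.sql.sources.schema.part.2',
  'spark.sql.sources.schema.partCol.0',
  'spark.sql.sources.schema.partCol.1');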
https://querydb.blogspot.com/2019/07/read-from-hive-table-and-write-back-to.html
When we upgraded our HDP to 2.6.3, Spark was updated from 2.2 to 2.3, which resulted in the error below:
Caused by: org.apache.spark.sql.AnalysisException: Cannot overwrite a path that is also being read from.;
at org.apache.spark.sql.execution.command.DDLUtils$.verifyNotReadPath(ddl.scala:906)
This error occurs for jobs that read from and write to the same path, like jobs with SCD logic.
Solution:
Set --conf "spark.sql.hive.convertMetastoreOrc=false"
or update the job so that it writes data to a temporary table, then reads from the temporary table and inserts it into the final table, as sketched below.
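A minimal sketch of that second workaround (the table names here are placeholders, not from the post):
// Stage the updated data in a temporary table so the final table is no longer
// both the read path and the write path of the same job.
val updated = spark.sql("select * from mydb.final_table")  // plus your SCD transformations
updated.write.mode("overwrite").saveAsTable("mydb.final_table_staging")
// The read side is now the staging table, so overwriting the final table is safe.
spark.table("mydb.final_table_staging")
  .write.mode("overwrite")
  .insertInto("mydb.final_table")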
https://querydb.blogspot.com/2020/09/orgapachesparksqlanalysisexception.html
Read the data from the Hive table in Spark:
import org.apache.hadoop.io.WritableComparable
import org.apache.hadoop.mapreduce.InputFormat
import org.apache.hive.hcatalog.data.HCatRecord
import org.apache.hive.hcatalog.mapreduce.HCatInputFormat
val hconfig = new org.apache.hadoop.conf.Configuration()
org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(hconfig, "dbname", "tablename")
val inputFormat = (new HCatInputFormat).asInstanceOf[InputFormat[WritableComparable[_], HCatRecord]].getClass
val data = sc.newAPIHadoopRDD(hconfig, inputFormat, classOf[WritableComparable[_]], classOf[HCatRecord])
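As a hedged usage sketch (the field index and println are illustrative only), the resulting RDD pairs each key with an HCatRecord whose fields can be read by position:
// Pull the first field out of each HCatRecord; get(i) returns a java.lang.Object.
data.map { case (_, record) => record.get(0) }
    .take(10)
    .foreach(println)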
You'll also get the error "Cannot overwrite a path that is also being read from" in a case where you are doing this:
You are doing an "insert overwrite" to a Hive TABLE "A" from a VIEW "V" (that executes your logic),
and that VIEW also references the same TABLE "A". I found this out the hard way, as the VIEW is deeply nested code that was querying "A" as well. Bummer.
It is like cutting the very branch on which you are sitting :-(
What you need to keep in mind before doing the below is that the Hive table you are overwriting should have been created by Hive DDL, not by
Spark (df.write.saveAsTable("<table_name>")).
If that is not the case, this won't work.
I tested this in spark 2.3.0
val tableReadDf=spark.sql("select * from <dbName>.<tableName>")
val updatedDf=tableReadDf.<transformation> //any update/delete/addition
updatedDf.createOrReplaceTempView("myUpdatedTable")
spark.sql("""with tempView as(select * from myUpdatedTable) insert overwrite table
<dbName>.<tableName> <partition><partition_columns> select * from tempView""")
This is a good solution for me:
1. Extract the RDD and schema from the DataFrame.
2. Create a new clone DataFrame.
3. Overwrite the table.
private def overWrite(df: DataFrame): Unit = {
  val schema = df.schema
  val rdd = df.rdd
  val dfForSave = spark.createDataFrame(rdd, schema)
  dfForSave.write
    .mode(SaveMode.Overwrite)
    .insertInto(s"${tableSource.schema}.${tableSource.table}")
}