I am facing a problem while fetching data from a Hive table.
Input string: "\u0001d1\u0002d2\u0003"
Here \u0001 is the ^A character; similarly, \u0002 is the ^B character, and so on.
I inserted the above string into a Hive table successfully. The Hive DDL is:
CREATE TABLE test_lt_snap (f1 string)
PARTITIONED BY (date string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
WITH SERDEPROPERTIES ('serialization.encoding'='utf-8')
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION '<file path>'
TBLPROPERTIES ('store.charset'='utf-8', 'retrieve.charset'='utf-8');
After selecting field f1 through the Hive CLI, I am not able to see the '\u0001' characters. For example:
hive (test_db) > select f1 from test_lt_snap;
output: d1d2
hive (test_db) > select f1 from test_lt_snap where f1 like '\u0001d1%';
output: d1d2
The problem with the above select statements is that the \u0001 characters are not visible.
Is there any way we can display these characters as well?
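One workaround I am considering is hex-encoding the column so the control bytes become visible; hex() is a built-in Hive UDF that renders each byte as two hex digits, so \u0001 should show up as 01:
hive (test_db) > select hex(f1) from test_lt_snap;
But I would still prefer a way to see the original characters directly.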
Thanks
Amiya
I am facing an issue when inserting data.
I read some CSV files into a DataFrame and store the DataFrame on HDFS like this:
val data = spark.read.option("header", "true").option("delimiter", ",").csv("/path_to_csv//*.csv")
data.repartition($"year", $"month", $"day")
  .write.partitionBy("year", "month", "day")
  .mode("overwrite")
  .option("header", "true")
  .option("delimiter", ",")
  .parquet("/path/to/parquet")
Then I created an external table on the stored parquet like this:
create external table tab (col1 string, col2 string, col3 int)
partitioned by (year int,month int,day int) stored as parquet
LOCATION 'hdfs://path/to/parquet'
Up to here it is OK! But when I run a query on my table:
select * from tab
I have no result.
Has anybody faced this issue?
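Or is there a step I am missing, such as registering the partition directories with the metastore first? A sketch of what I mean (the partition values below are placeholders):
// Discover all year=/month=/day= directories written by partitionBy
spark.sql("MSCK REPAIR TABLE tab")
// or register a single partition explicitly
spark.sql("ALTER TABLE tab ADD PARTITION (year=2018, month=1, day=1)")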
Thanks.
I use Scala/Spark to insert data into a Hive parquet table as follows:
for (*lots of current_Period_Id*) { // This loop is on a result of another query that returns multiple rows of current_Period_Id
  val myDf = hiveContext.sql(s"""SELECT columns FROM MULTIPLE TABLES WHERE period_id=$current_Period_Id""")
  val count: Int = myDf.count().toInt
  if (count > 0) {
    hiveContext.sql(s"""INSERT INTO destinationtable PARTITION(period_id=$current_Period_Id) SELECT columns FROM MULTIPLE TABLES WHERE period_id=$current_Period_Id""")
  }
}
This approach takes a lot of time to complete because the select statement is executed twice.
I'm trying to avoid selecting the data twice, and one way I've thought of is writing the DataFrame myDf to the table directly.
This is the gist of the code I'm trying to use for that purpose:
val sparkConf = new SparkConf().setAppName("myApp")
  .set("spark.yarn.executor.memoryOverhead", "4096")
val sc = new SparkContext(sparkConf)
val hiveContext = new HiveContext(sc)
// Enable dynamic partitioning so the write can create partitions on the fly
hiveContext.setConf("hive.exec.dynamic.partition", "true")
hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
for (*lots of current_Period_Id*) { // This loop is on a result of another query
  val myDf = hiveContext.sql(s"""SELECT COLUMNS FROM MULTIPLE TABLES WHERE period_id=$current_Period_Id""")
  val count: Int = myDf.count().toInt
  if (count > 0) {
    myDf.write.mode("append").format("parquet").partitionBy("PERIOD_ID").saveAsTable("destinationtable")
  }
}
But I get an error in the myDf.write part.
java.util.NoSuchElementException: key not found: period_id
The destination table is partitioned by period_id.
Could someone help me with this?
The spark version I'm using is 1.5.0-cdh5.5.2.
The DataFrame schema and the table's definition differ from each other: PERIOD_ID != period_id. The column name is upper case in your DataFrame but lower case in the table. Try using the lower-case period_id in your SQL and in partitionBy.
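A minimal sketch of what I mean, reusing the names from your code (and assuming the DataFrame column really is upper case):
// Align the partition column with the Hive table definition before writing
val fixedDf = myDf.withColumnRenamed("PERIOD_ID", "period_id")
fixedDf.write.mode("append").format("parquet").partitionBy("period_id").saveAsTable("destinationtable")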
I wrote a DataFrame to a Parquet file, and I would like to read the file in Hive using the metadata from Parquet.
Output of the Parquet write:
_common_metadata
_metadata
_SUCCESS
part-r-00000-0def6ca1-0f54-4c53-b402-662944aa0be9.gz.parquet
part-r-00001-0def6ca1-0f54-4c53-b402-662944aa0be9.gz.parquet
part-r-00002-0def6ca1-0f54-4c53-b402-662944aa0be9.gz.parquet
part-r-00003-0def6ca1-0f54-4c53-b402-662944aa0be9.gz.parquet
Hive table
CREATE TABLE testhive
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
'/home/gz_files/result';
FAILED: SemanticException [Error 10043]: Either list of columns or a custom serializer should be specified
How can I infer the metadata from the Parquet files?
If I open _common_metadata, it has the content below:
PAR1LHroot
%TSN%
%TS%
%Etype%
)org.apache.spark.sql.parquet.row.metadata▒{"type":"struct","fields":[{"name":"TSN","type":"string","nullable":true,"metadata":{}},{"name":"TS","type":"string","nullable":true,"metadata":{}},{"name":"Etype","type":"string","nullable":true,"metadata":{}}]}
Or how can I parse the metadata file?
Here's a solution I've come up with to get the metadata from parquet files in order to create a Hive table.
First start a spark-shell (or compile it all into a JAR and run it with spark-submit, but the shell is so much easier):
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.DataFrame

val df = sqlContext.parquetFile("/path/to/_common_metadata")

def creatingTableDDL(tableName: String, df: DataFrame): String = {
  val cols = df.dtypes
  var ddl1 = "CREATE EXTERNAL TABLE " + tableName + " ("
  // Looks at the column names and datatypes and puts them into a string
  val colCreate = (for (c <- cols) yield (c._1 + " " + c._2.replace("Type", ""))).mkString(", ")
  ddl1 += colCreate + ") STORED AS PARQUET LOCATION '/wherever/you/store/the/data/'"
  ddl1
}

val test_tableDDL = creatingTableDDL("test_table", df)
It will provide you with the datatypes that Hive will use for each column as they are stored in Parquet.
E.g.: CREATE EXTERNAL TABLE test_table (COL1 Decimal(38,10), COL2 String, COL3 Timestamp) STORED AS PARQUET LOCATION '/path/to/parquet/files'
I'd just like to expand on James Tobin's answer. There's a StructField class which provides Hive's data types without doing string replacements.
// Tested on Spark 1.6.0.
import org.apache.spark.sql.DataFrame

def dataFrameToDDL(dataFrame: DataFrame, tableName: String): String = {
  val columns = dataFrame.schema.map { field =>
    " " + field.name + " " + field.dataType.simpleString.toUpperCase
  }
  s"CREATE TABLE $tableName (\n${columns.mkString(",\n")}\n)"
}
This solves the IntegerType problem.
scala> val dataFrame = sc.parallelize(Seq((1, "a"), (2, "b"))).toDF("x", "y")
dataFrame: org.apache.spark.sql.DataFrame = [x: int, y: string]
scala> print(dataFrameToDDL(dataFrame, "t"))
CREATE TABLE t (
x INT,
y STRING
)
This should work with any DataFrame, not just with Parquet. (e.g., I'm using this with a JDBC DataFrame.)
As an added bonus, if your target DDL supports nullable columns, you can extend the function by checking StructField.nullable.
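For instance, the column mapping could become something like this (a sketch; Hive itself only accepts NOT NULL constraints in newer versions):
val columns = dataFrame.schema.map { field =>
  // Append NOT NULL only for non-nullable fields, when the target dialect supports it
  val constraint = if (field.nullable) "" else " NOT NULL"
  " " + field.name + " " + field.dataType.simpleString.toUpperCase + constraint
}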
A small improvement over Victor's answer (adding backticks around field.name), modified to bind the table to a local Parquet file (tested on Spark 1.6.1):
def dataFrameToDDL(dataFrame: DataFrame, tableName: String, absFilePath: String): String = {
  val columns = dataFrame.schema.map { field =>
    " `" + field.name + "` " + field.dataType.simpleString.toUpperCase
  }
  s"CREATE EXTERNAL TABLE $tableName (\n${columns.mkString(",\n")}\n) STORED AS PARQUET LOCATION '$absFilePath'"
}
Also notice that:
- A HiveContext is needed, since SQLContext does not support creating external tables.
- The path to the Parquet folder must be an absolute path.
I would like to expand on James' answer.
The following code will work for all datatypes, including ARRAY, MAP and STRUCT.
I have tested it on Spark 2.2:
val tableName = "parquet_backed_table" // hypothetical target table name
val df = spark.read.parquet("parquetFilePath")
val schema = df.schema
val columns = schema.fields
var ddl1 = "CREATE EXTERNAL TABLE " + tableName + " ("
val cols = (for (column <- columns) yield column.name + " " + column.dataType.sql).mkString(",")
ddl1 = ddl1 + cols + " ) STORED AS PARQUET LOCATION '/tmp/hive_test1/'"
spark.sql(ddl1)
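For reference, dataType.sql renders complex types in Hive-compatible form, for example:
import org.apache.spark.sql.types._
ArrayType(IntegerType).sql              // "ARRAY<INT>"
MapType(StringType, IntegerType).sql    // "MAP<STRING, INT>"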
I had the same question. It might be hard to implement from a practical standpoint though, as Parquet supports schema evolution:
http://www.cloudera.com/content/www/en-us/documentation/archive/impala/2-x/2-0-x/topics/impala_parquet.html#parquet_schema_evolution_unique_1
For example, you could add a new column to your table without having to touch the data that's already in the table. Only the new data files will have the new metadata (compatible with the previous version).
Schema merging has been switched off by default since Spark 1.5.0, since it is a "relatively expensive operation":
http://spark.apache.org/docs/latest/sql-programming-guide.html#schema-merging
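(If you do need the merged schema for a particular read, it can be re-enabled per query; a sketch using the directory from the question:)
val merged = sqlContext.read.option("mergeSchema", "true").parquet("/home/gz_files/result")
merged.printSchema()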
So inferring the most recent schema may not be as simple as it sounds, although quick-and-dirty approaches are quite possible, e.g. by parsing the output of:
$ parquet-tools schema /home/gz_files/result/000000_0
Actually, Impala supports
CREATE TABLE LIKE PARQUET
(with no column list at all):
https://docs.cloudera.com/runtime/7.2.15/impala-sql-reference/topics/impala-create-table.html
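A sketch of that Impala syntax, pointing it at one of the data files from the question (the table name is a placeholder):
CREATE EXTERNAL TABLE inferred_table
  LIKE PARQUET '/home/gz_files/result/part-r-00000-0def6ca1-0f54-4c53-b402-662944aa0be9.gz.parquet'
  STORED AS PARQUET
  LOCATION '/home/gz_files/result';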
Your question is tagged "hive" and "spark", and I don't see this implemented in Hive, but in case you use CDH, it may be what you were looking for.
My select query is fetching no rows from a partitioned external table.
I created an external partitioned table audit_test at the location /user/abcdef/audit_table/, and I am loading .csv files by creating partition directories based on date.
CREATE EXTERNAL TABLE audit_test
(perm_bitmap_txt STRING,
blank_txt STRING,
ownr_id STRING,
ad_grp_txt STRING,
size_bytes_tot INT,
last_mod_dt STRING,
last_mod_tm STRING,
hdfs_phy_loc_txt STRING,
reg_hdfs_loc_txt STRING,
reg_hdfs_grp_txt STRING,
reg_hdfs_comp_txt string)
PARTITIONED BY (data_ext_DT STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION 'user/abcdef/audit_table/';
Now my output location would be /user/abcdef/audit_table/data_ext_dt=20150203/20150203_audit.csv
When I run a simple select query, I get zero rows:
select * from audit_test where data_ext_dt = '20150203'
I had to create the partitions manually using the ALTER TABLE command:
alter table data_sec_audit_rpt_test add partition(data_ext_dt=20150203);
It worked.
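For reference, since the partition directories follow the key=value naming convention (data_ext_dt=20150203), all of them can also be discovered in one pass instead of one ALTER per date:
MSCK REPAIR TABLE audit_test;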