I was trying to view data from ClickHouse as a graph in Grafana. I experimented a lot with the query, but I couldn't get any points to show up in Grafana. My tables look like this:
CREATE TABLE m_psutilinfo
(
    timestamp String,
    namespace String,
    data Float,
    unit String,
    plugin_running_on String,
    version UInt64,
    last_advertised_time String
) ENGINE = Kafka('10.224.54.99:9092', 'psutilout', 'group3', 'JSONEachRow');

CREATE TABLE m_psutilinfo_t
(
    timestamp DateTime,
    namespace String,
    data Float,
    unit String,
    plugin_running_on String,
    version UInt64,
    last_advertised_time String,
    DAY Date
) ENGINE = MergeTree
PARTITION BY DAY
ORDER BY (DAY, timestamp)
SETTINGS index_granularity = 8192;

CREATE MATERIALIZED VIEW m_psutilinfo_view TO m_psutilinfo_t AS
SELECT
    toDateTime(substring(timestamp, 1, 19)) AS timestamp,
    namespace,
    data,
    unit,
    plugin_running_on,
    version,
    last_advertised_time,
    toDate(timestamp) AS DAY
FROM m_psutilinfo;
These are the tables I created in ClickHouse. What should my query in Grafana be to get the data as a graph?
SELECT
$timeSeries as t,
count()
FROM $table
WHERE $timeFilter
GROUP BY t
ORDER BY t
I used Tabix, but I want this in Grafana.
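For the schema above, a sketch of a Grafana query might look like the following. This assumes the Altinity clickhouse-grafana datasource, with m_psutilinfo_t chosen as the table in the query editor, DAY set as the Date column, and timestamp set as the DateTime column; $timeSeries, $table, and $timeFilter are macros expanded by the plugin, and aggregating with avg(data) is just one plausible choice:

```sql
SELECT
    $timeSeries AS t,
    avg(data) AS data
FROM $table
WHERE $timeFilter
GROUP BY t
ORDER BY t
```

If no points appear, it is worth checking that the panel's time range actually overlaps the timestamps stored in m_psutilinfo_t, since $timeFilter restricts on the configured DateTime column.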
I am getting the error below while executing the following code in Azure Databricks:

org.apache.hadoop.hive.ql.metadata.HiveException:
java.lang.UnsupportedOperationException: Parquet does not support
timestamp. See HIVE-6384
spark_session.sql("""
CREATE EXTERNAL TABLE IF NOT EXISTS dev_db.processing_table
(
campaign STRING,
status STRING,
file_name STRING,
arrival_time TIMESTAMP
)
PARTITIONED BY (
Date DATE)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION "/mnt/data_analysis/pre-processed/"
""")
As per the HIVE-6384 Jira, starting from Hive 1.2 you can use the TIMESTAMP and DATE types in Parquet tables.
Workarounds for Hive versions below 1.2:
1. Using String type:
CREATE EXTERNAL TABLE IF NOT EXISTS dev_db.processing_table
(
campaign STRING,
status STRING,
file_name STRING,
arrival_time STRING
)
PARTITIONED BY (
Date STRING)
STORED AS PARQUET
LOCATION '/mnt/data_analysis/pre-processed/';
Then, while processing, you can cast arrival_time and Date to the TIMESTAMP and DATE types.
You could also use a view that casts the columns, but views are slow.
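The read-time cast can be sketched as follows (an untested sketch, assuming the string columns hold values Hive can parse, e.g. yyyy-MM-dd HH:mm:ss for the timestamp):

```sql
SELECT campaign,
       status,
       file_name,
       CAST(arrival_time AS TIMESTAMP) AS arrival_time,
       CAST(`Date` AS DATE)            AS `Date`
FROM dev_db.processing_table;
```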
2. Using ORC format:
CREATE EXTERNAL TABLE IF NOT EXISTS dev_db.processing_table
(
campaign STRING,
status STRING,
file_name STRING,
arrival_time Timestamp
)
PARTITIONED BY (
Date date)
STORED AS ORC
LOCATION '/mnt/data_analysis/pre-processed/';
ORC supports both the TIMESTAMP and DATE types.
I have two Hive clustered tables, t1 and t2:
CREATE EXTERNAL TABLE `t1`(
`t1_req_id` string,
...
PARTITIONED BY (`t1_stats_date` string)
CLUSTERED BY (t1_req_id) INTO 1000 BUCKETS
-- t2 looks similar, with the same number of buckets
The insert part happens in Hive:
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
insert overwrite table `t1` partition(t1_stats_date,t1_stats_hour)
select *
from t1_raw
where t1_stats_date='2020-05-10' and t1_stats_hour='12' AND
t1_req_id is not null
The join code looks like the following:
val t1 = spark.table("t1").as[T1]
val t2= spark.table("t2").as[T2]
val outDS = t1.joinWith(t2, t1("t1_req_id") === t2("t2_req_id"), "fullouter")
.map { case (t1Obj, t2Obj) =>
val t3:T3 = // do some logic
t3
}
outDS.toDF.write....
I see the projection in the DAG, but it seems that the job still does a full data shuffle.
Also, while looking into the executor logs, I don't see it reading the same bucket of the two tables in one chunk, which is what I would expect to find.
There are the spark.sql.sources.bucketing.enabled, spark.sessionState.conf.bucketingEnabled and spark.sql.join.preferSortMergeJoin flags.
What am I missing? Why is there still a full shuffle if the tables are bucketed?
The current Spark version is 2.3.1.
One possibility to check here is a type mismatch, e.g. the join column is STRING in t1 but BIGINT in t2. Even if both types are integers (e.g. one INT, the other BIGINT), Spark will still add a shuffle, because different types use different hash functions for bucketing.
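A quick way to check for this (a sketch, not a definitive diagnosis) is to compare the declared types of the join keys in both tables; if they differ, Spark inserts an implicit cast into the join condition, and that cast defeats bucket-wise joining:

```sql
-- Compare the declared join-key types of both tables:
DESCRIBE t1;  -- look at the type of t1_req_id
DESCRIBE t2;  -- look at the type of t2_req_id

-- If they differ, Spark effectively plans something like
--   CAST(t1.t1_req_id AS BIGINT) = t2.t2_req_id
-- and the bucketing on the original column can no longer be used,
-- so a full shuffle is added.
```

If a mismatch turns up, aligning the column types at write time (rather than casting inside the join) is what lets the bucketing be exploited.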
I have a Hive table details with the below schema:
name STRING,
address STRING,
dob DATE
My dob is stored in yyyy-MM-dd format, like 1988-01-27.
I am trying to load this into an Elasticsearch table, so I ran the following in Hue:
CREATE EXTERNAL TABLE sampletable (name STRING, address STRING, dob DATE)
ROW FORMAT SERDE 'org.elasticsearch.hadoop.hive.EsSerDe'
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler' TBLPROPERTIES('es.resource' = 'test4/test4','es.nodes' = 'x.x.x.x:9200');
INSERT OVERWRITE TABLE sampletable SELECT * FROM details;
select * from sampletable;
But the dob field shows NULL for every row, whereas I can verify that my original Hive table has data in the date field.
After some research I found that Elasticsearch expects the date field to be in a yyyy-MM-ddTHH:mm:ssZ format; since my data doesn't match that, it fails. It also mentioned that I can change the mapping to the "strict_date" format, which would accept my Hive date format, but I am not sure where in the Hive query I need to specify this.
Can someone help me with this?
The date type mapping between Hive and Elasticsearch has some problems.
You can map a Hive STRING column to an ES date type, but you must set the parameter es.mapping.date.rich to false for the Hive table, i.e. 'es.mapping.date.rich' = 'false' in the CREATE TABLE statement:
CREATE EXTERNAL TABLE temp.data_index_es(
id bigint,
userId int,
createTime string
)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES(
'es.nodes' = 'xxxx:9200',
'es.index.auto.create' = 'false',
'es.resource' = 'abc/{_type}',
'es.mapping.date.rich' = 'false',
'es.read.metadata' = 'true',
'es.mapping.id' = 'id',
'es.mapping.names' = 'id:id, userId:userId, createTime:createTime');
Reference link: Mapping and Types (elasticsearch-hadoop documentation).
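Applying this to the question above, the string route might look like the following sketch. It assumes sampletable is recreated with dob declared as STRING and 'es.mapping.date.rich' = 'false' in its TBLPROPERTIES, and that date_format is available (Hive 1.2+):

```sql
-- Write dob as a plain yyyy-MM-dd string, which a "strict_date"
-- (or default) date mapping on the ES side can parse:
INSERT OVERWRITE TABLE sampletable
SELECT name,
       address,
       date_format(dob, 'yyyy-MM-dd') AS dob
FROM details;
```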
I have a table from which I get data as follows:
val all = sc.cassandraTable("keyspace","table")
.select("key_name", "column1", "column2", "column3", "date")
.as((i:String, p:String, e:String, c:Double, d:java.util.Date) => ((i), (c, p, e, d)))
The table is ordered by date. I want to fetch the data in such a way that for every key_name I get a specified number of records. I don't know whether this is achievable in the query to the Cassandra table or whether it should be done after the data is loaded. For example, I would like the five latest records for each key_name, grouped in some kind of sorted collection.
You can use LIMIT on the clustering column, as in this example from the link below:
SELECT * FROM playlists WHERE id = 62c36092-82a1-3a00-93d1-46196ee77204
ORDER BY song_order DESC LIMIT 50;
https://docs.datastax.com/en/cql/3.1/cql/ddl/ddl_compound_keys_c.html
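For the "N latest per key_name" part specifically, CQL also offers PER PARTITION LIMIT (available from Cassandra 3.6). A sketch, assuming key_name is the partition key and date is a clustering column ordered descending:

```sql
-- Returns at most 5 rows per partition (i.e. per key_name);
-- with CLUSTERING ORDER BY (date DESC) these are the 5 latest.
SELECT key_name, column1, column2, column3, date
FROM keyspace.table
PER PARTITION LIMIT 5;
```

On an older Cassandra, the same grouping would have to be done after loading, e.g. by grouping the RDD by key and taking the top five per group.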
My SELECT query is fetching no rows from a partitioned external table.
I created an external partitioned table audit_test at the location /user/abcdef/audit_table/, and I load a .csv file by creating a partition directory based on date.
CREATE EXTERNAL TABLE audit_test
(perm_bitmap_txt STRING,
blank_txt STRING,
ownr_id STRING,
ad_grp_txt STRING,
size_bytes_tot INT,
last_mod_dt STRING,
last_mod_tm STRING,
hdfs_phy_loc_txt STRING,
reg_hdfs_loc_txt STRING,
reg_hdfs_grp_txt STRING,
reg_hdfs_comp_txt string)
PARTITIONED BY (data_ext_DT STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/user/abcdef/audit_table/';
Now my output location would be /user/abcdef/audit_table/data_ext_dt=20150203/20150203_audit.csv.
When I run a simple SELECT query, I get zero rows:
select * from audit_test where data_ext_dt = '20150203';
I had to create the partitions manually using the ALTER command:
alter table audit_test add partition(data_ext_dt='20150203');
That worked.
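Instead of adding each partition by hand, Hive can also discover all partition directories under the table's location in one pass (a sketch, assuming a Hive version with MSCK support and directory names in the key=value form shown above):

```sql
-- Scans /user/abcdef/audit_table/ and registers any
-- data_ext_dt=... directories missing from the metastore.
MSCK REPAIR TABLE audit_test;
```

This is useful when new dated directories are dropped into the location regularly, since one repair picks up all of them.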