`ds.` prefix on Hive tables when accessed via JDBC - postgresql

I have a HiveServer2 running with JDBC connections and it works fine from Impala, Beeline and Spark clients. The metastore is running in a PostgreSQL server.
For example, the columns in Hive are
select * from testdb.test_table limit 3;
dt | val_test | val_test_b | test_c
12 | 0.2      | B          | C
13 | 1.2      | B          | A
14 | 9.4      | T          | C
When I try to access the same tables from ZoomData, all table columns get a ds. prefix that is not in the original column names:
ds.dt | ds.val_test | ds.val_test_b | ds.test_c
12    | 0.2         | B             | C
13    | 1.2         | B             | A
14    | 9.4         | T             | C
and sometimes, when accessing the data, the ZoomData JDBC client gives errors such as:
Error: cannot find `ds.val_test` column.
Error: cannot find `ds_val_test` column.
What could be causing this?

Related

Why does a SERIAL id get discontinuous values after failover in RDS Aurora PostgreSQL?

I'm testing failover with RDS Aurora PostgreSQL.
First, I created an RDS Aurora PostgreSQL cluster and accessed the writer instance to create a users table.
$ CREATE TABLE users (
    id SERIAL PRIMARY KEY NOT NULL,
    name varchar(10) NOT NULL,
    createdAt TIMESTAMP DEFAULT Now() );
And I added one row and checked the table.
$ INSERT INTO users(name) VALUES ('test');
$ SELECT * FROM users;
+----+--------+----------------------------+
| id | name | createdAt |
+----+--------+----------------------------+
| 1 | test | 2022-02-02 23:09:57.047981 |
+----+--------+----------------------------+
After failover of RDS Aurora Cluster, I added another row and checked the table.
$ INSERT INTO users(name) VALUES ('temp');
$ SELECT * FROM users;
+-----+--------+----------------------------+
| id | name | createdAt |
+-----+--------+----------------------------+
| 1 | test | 2022-02-01 11:09:57.047981 |
| 32 | temp | 2022-02-01 11:25:57.047981 |
+-----+--------+----------------------------+
After failover, the id value that should be 2 became 32.
Why is this happening?
Is there any way to solve this problem?
That is to be expected. Sequence changes are not WAL-logged every time nextval is called, because that could become a performance bottleneck. Instead, a WAL record is written once every 32 calls, which means the sequence can skip values after a crash or a failover to the standby.
You may want to read my ruminations about gaps in sequences.
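A minimal sketch against the users table above, illustrating that gaps are inherent to sequences anyway (they also appear when an INSERT that consumed a value is rolled back, since nextval is never undone):
-- Gaps are normal: a rolled-back INSERT still consumes a sequence value,
-- so the next successful INSERT skips it.
BEGIN;
INSERT INTO users(name) VALUES ('aborted');  -- consumes the next id
ROLLBACK;
INSERT INTO users(name) VALUES ('kept');     -- gets a higher id, leaving a gap
SELECT id, name FROM users ORDER BY id;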

Presto to PostgreSQL: not able to query

I have made the connection from Presto to PostgreSQL. Describing the table works, but a SELECT query does not.
It gives the below error:
select rul_name from fms.fmref_schema.ru_tbl;
Query 20200521_122033_00164_rmrsd, FAILED, 1 node
Splits: 17 total, 0 done (0.00%)
0:00 [0 rows, 0B] [0 rows/s, 0B/s]
Query 20200521_122033_00164_rmrsd failed: com.facebook.presto.plugin.jdbc.JdbcSplit cannot be cast to com.facebook.presto.plugin.jdbc.JdbcSplit

Google SQL query (MySQL)

Sorry, I'm a newbie in SQL. I have the following table in Google Cloud SQL (MySQL).
How can I get the time difference between adjacent rows, such as 164 and 165?
I want to find the periods (downtime) when no sensor reported anything, with the condition that the downtime is more than 20 minutes.
autoID | Datetime | Number_of_sensor
163 | 2020-04-06 13:46:42 | C3
164 | 2020-04-06 13:46:45 | C4
165 | 2020-04-06 15:10:48 | C3
166 | 2020-04-06 15:46:48 | C4
I tried a few things but can't get the result.
You would have to use something called window functions, which are only available in MySQL from version 8.0 onwards.
Google Cloud SQL for MySQL only goes up to version 5.7 for now.
However, if you use PostgreSQL, this can be written as a window-function query (see the sketch below).
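A minimal sketch of that window-function approach, assuming the sample table is called test and has the autoID and Datetime columns shown above (on MySQL 8.0+ you would use TIMESTAMPDIFF instead of EXTRACT):
-- Compare each row's timestamp with the previous row's using LAG(),
-- then keep only the gaps longer than 20 minutes.
SELECT autoID, Datetime, downtime
FROM (
    SELECT autoID,
           Datetime,
           EXTRACT(EPOCH FROM (Datetime - LAG(Datetime) OVER (ORDER BY autoID))) / 60 AS downtime
    FROM test
) AS gaps
WHERE downtime > 20;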
Alternatively, you can use the native TIMESTAMPDIFF() function on the table joined to itself with an offset of one row; this works with MySQL 5.7, which is what Google Cloud SQL currently runs.
SELECT a.autoID, a.Datetime,
       -- COALESCE handles the last row, which has no following row
       TIMESTAMPDIFF(MINUTE, a.Datetime, COALESCE(b.Datetime, a.Datetime)) AS downtime
FROM test a
LEFT JOIN test b ON a.autoID = b.autoID - 1
HAVING downtime > 20;
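With the sample rows above, this should return autoID 164 with a downtime of 84 minutes (the gap between rows 164 and 165) and autoID 165 with a downtime of 36 minutes (the gap between rows 165 and 166).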

PostgreSQL's table oid is not found in database

I have a question about PostgreSQL's table oid.
I created a table; its oid is 24622:
(-rw------- 1 postgres postgres 8192 Nov 29 17:45 24622)
and I also found files that were modified at the same time:
(-rw------- 1 postgres postgres 73728 Nov 29 17:45 12741)
(-rw------- 1 postgres postgres 32768 Nov 29 17:45 12744)
(-rw------- 1 postgres postgres 65536 Nov 29 17:45 12764)
(-rw------- 1 postgres postgres 57344 Nov 29 17:45 12767)
but those tables are not found in the same database; none of them can be found.
ksh2=# select oid,relname from pg_class where oid = '12741';
oid | relname
-----+---------
(0 rows)
How can I find those tables?
(I also changed schemas and searched again, but none were found.)
Thank you.
A filename doesn't necessarily correspond to the table's OID:
Note that while a table's filenode often matches its OID, this is not
necessarily the case; some operations, like TRUNCATE, REINDEX, CLUSTER
and some forms of ALTER TABLE, can change the filenode while
preserving the OID.
The filename is stored in the relfilenode column:
Name of the on-disk file of this relation; zero means this is a
"mapped" relation whose disk file name is determined by low-level
state
So try searching for the relfilenode:
select relname, relkind from pg_class where relfilenode = 12741;
The relkind column tells you what type of object the file represents:
r = ordinary table
i = index
S = sequence
v = view
c = composite type
t = TOAST table
f = foreign table
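On PostgreSQL 9.4 or later there is also a more direct lookup; a minimal sketch, assuming the file lives in the database's default tablespace (hence the 0):
-- Map a filenode straight back to the relation it belongs to;
-- 0 stands for the default tablespace of the current database.
SELECT pg_filenode_relation(0, 12741);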

Connecting to DB2 database: [unixODBC][Driver Manager]Driver's SQLAllocHandle on SQL_HANDLE_HENV failed

odbc.ini:
[DEFAULT]
Driver = DB2
[abc]
Driver = DB2
[dsn_test1]
DESCRIPTION = Connection to DB2
Driver = db2
odbcinst.ini:
[DB2]
Description = DB2 Driver
Driver = /home/user/sqllib/lib/libdb2.so
fileusage=1
dontdlclose=1
[ODBC]
Trace=1
TraceFile=/home/user/sqllib/trace.out
db2cli.ini:
[abc]
hostname="hostname"
pwd="passwd"
port="port"
PROTOCOL=TCPIP
database="dbname"
uid="uid"
$ ./isql abc
+---------------------------------------+
| Connected! |
| |
| sql-statement |
| help [tablename] |
| quit |
| |
+---------------------------------------+
While connecting via the DB2 driver, the below error comes up:
Connection attempt for data source name "abc":
===============================================================================
ODBC Driver Manager Path: /home/user/sqllib/odbclib/lib/libodbc.so
[FAILED]: [unixODBC][Driver Manager]Driver's SQLAllocHandle on SQL_HANDLE_HENV
failed
Below is the snippet of odbc trace:
[ODBC][23419][1403783774.660159][SQLConnect.c][1380]Error: IM004
[ODBC][23419][1403783774.660223][SQLError.c][434]
Entry:
Connection = 0x81aaac8
SQLState = 0xffff9593
Native = 0xffff9684
Message Text = 0xffff8d93
Buffer Length = 1024
Text Len Ptr = 0xffff95bc
[ODBC][23419][1403783774.660260][SQLError.c][471]
Exit:[SQL_SUCCESS]
SQLState = IM004
Native = 0xffff9684 -> 0
Message Text = [[unixODBC][Driver Manager]Driver's SQLAllocHandle on SQL_HANDLE_HENV failed]
I Googled a lot for the root cause but it didn't help much; please provide some pointers to solve this.
It's a 32-bit Linux machine with a 32-bit DB2 driver as well.
According to this IBM Support page, an IM004 SQLState on SQLAllocHandle relates to the new security feature.
Cause
The new security features introduced in DB2® Universal Database™ (DB2
UDB) Version 8.2 prevent users from using the database unless they
belong to the Windows® groups DB2ADMNS or DB2USERS.
Answer
Add the userid (the one used to execute the application) to either the
DB2ADMNS or DB2USERS group. Please refer to the link under "Related
Information" (below) for instructions on how to accomplish this.
Alternatively, there are a number of threads (e.g. Huge problems connecting to a DB2 database) which suggest setting the DB2INSTANCE environment variable to match the instance setting in your odbc.ini file for the DSN concerned, e.g.
export DB2INSTANCE=db2inst1
isql -v FS01DB2