Metabase - how to find question id or dashboard id from queryHash - amazon-redshift

We are using Metabase to query Amazon Redshift DB.
In AWS UI queries section I can see Metabase user ID and corresponding queryHash.
-- Metabase:: userID: 201 queryType: native queryHash: e0f499f3bc109a46cdf4e686fa06fb6379ec01265640ecc3f45365349b7c6e3f
Now I need to find out which question ID and dashboard ID correspond to this queryHash. Is this possible, and if so, how?
Thanks in Advance!!

The simplest way I found was to query the Metabase application database. The query hash appears in both the query and query_execution tables, but only query_execution has a reference to the question/card:
SELECT
  card_id
FROM
  query_execution
WHERE
  ENCODE(hash, 'hex') = 'e0f499f3bc109a46cdf4e686fa06fb6379ec01265640ecc3f45365349b7c6e3f';
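Depending on your Metabase version, query_execution may also carry a dashboard_id column, and questions live in the report_card table of the application database, so you can often resolve both IDs and the question name in one pass. A hedged sketch (verify these columns exist on your Metabase release):

```sql
SELECT
  qe.card_id,                 -- the question ID
  qe.dashboard_id,            -- the dashboard ID, if the query ran from a dashboard
  c.name AS question_name
FROM query_execution qe
LEFT JOIN report_card c ON c.id = qe.card_id
WHERE ENCODE(qe.hash, 'hex') = 'e0f499f3bc109a46cdf4e686fa06fb6379ec01265640ecc3f45365349b7c6e3f';
```

dashboard_id will be NULL when the question was run directly rather than from a dashboard.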

Related

What query can I use to distinguish Cloud SQL from Postgresql?

Is there a simple query that will let me distinguish Cloud SQL from stock PostgreSQL? Maybe something like select version() or select current_setting('server_version')?
(I don't have access to a Cloud SQL instance to experiment.)
For the most part, things are the same. You could try looking for the CLOUDSQLSUPERUSER role, which wouldn't exist on regular PostgreSQL (unless you or someone else has added it).
EDIT: added #enocom's suggestion for a query to do this:
select * from pg_catalog.pg_user where usename = 'cloudsqlsuperuser';
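If you want both signals at once, a minimal sketch (the role check is the reliable part; select version() output alone does not distinguish Cloud SQL from stock PostgreSQL):

```sql
SELECT
  version() AS server_version,
  EXISTS (
    SELECT 1 FROM pg_catalog.pg_user
    WHERE usename = 'cloudsqlsuperuser'
  ) AS looks_like_cloud_sql;
```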

How to get Innodb_rows_inserted and Innodb_rows_deleted values in postgresql

In Mysql I am getting Innodb_rows_inserted and Innodb_rows_deleted values using these queries
select variable from information_schema.global_status where variable_name = 'innodb_rows_inserted'
I want to achieve the same thing in PostgreSQL. Since InnoDB is not used in Postgres, I'm looking for queries that fetch the total inserted and deleted row counts. Can anyone help me get the equivalent of Innodb_rows_inserted and Innodb_rows_deleted in PostgreSQL? Thanks in advance.
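A hedged sketch of the closest PostgreSQL equivalent, using the statistics collector's per-table counters in pg_stat_user_tables (these count tuples since the last statistics reset, not since server creation):

```sql
SELECT
  SUM(n_tup_ins) AS rows_inserted,  -- total rows inserted across user tables
  SUM(n_tup_del) AS rows_deleted    -- total rows deleted across user tables
FROM pg_stat_user_tables;
```

Drop the SUM() and add relname to the select list if you want the counts broken down per table.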

After database migration to new DB2/400 server, the table and column labels are no longer accessible. What server settings to enable..?

We have a 3rd-party DB2/400 application that's the core of our business. It was recently migrated from our private server with AS400/i v6r1 on Power7 to a hosted cloud service with AS400/i v7r3 on Power9.
Since the migration, SQL clients cannot see TABLE_TEXT or COLUMN_TEXT when browsing tables in whatever sort of database explorer they have. In most cases, the text is supposed to show up under "Remarks" or "Description" when browsing tables or columns in the explorer, but it no longer does.
Even IBM Data Studio won't show the text in its column views, though it does provide the information, buried deep and inconvenient to access.
What DB2 server settings are involved in providing this metadata to SQL clients? I've searched the IBM website, but the mountain of answers is overwhelming.
I would like to be forearmed with this information before I discuss the issue with our hosting provider. They provide the ODBC/JDBC connection "mostly unsupported", but I'm hoping they'll consider helping us if I can describe the relevant server settings in as much detail as possible.
To be clear, what I'm looking for is the labels from the DDL statements, such as these:
LABEL ON TABLE "SCHEMA1"."TABLE1" IS 'Some Table Description';
LABEL ON COLUMN "SCHEMA1"."TABLE1"."COLUMN1" IS 'Some Column Desc';
The clients cannot access the labels, yet the following SQL queries can:
SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_TEXT
FROM QSYS2.SYSTABLES
WHERE TABLE_SCHEMA = 'SCHEMA1'
  AND TABLE_NAME = 'TABLE1';

SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, COLUMN_TEXT
FROM QSYS2.SYSCOLUMNS
WHERE TABLE_SCHEMA = 'SCHEMA1'
  AND TABLE_NAME = 'TABLE1';
I've tried the clients and drivers listed below, and none of them can access the labels for tables or columns. I've read many posts on StackOverflow and elsewhere, and tried many tweaks of settings in the clients and drivers, but nothing has worked. It seems clear this is an issue on the new server.
Clients:
DBeaver 5.2.5 (my preferred client)
Squirrel SQL 3.8.1
SQL Workbench 124
IBM Data Studio 4.1.3
Drivers:
JTOpen 6.6
JTOpen 7.6 (with recent download of IBM Data Studio)
JTOpen 9.5
I posted this question in the IBM forums, and received the answer I needed:
table and column labels are no longer accessible to JDBC clients
The solution is to set the JDBC driver property as follows:
metadata source = 0
With this change, the other properties did not seem necessary in my situation. After setting the metadata source property, I made test changes to the other two but saw no obvious difference:
remarks = true
extended metadata = true
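For reference, these JTOpen driver properties can also be supplied directly on the JDBC connection URL rather than through a client's properties dialog. A sketch with a placeholder host name (the property names themselves are JTOpen's own):

```
jdbc:as400://myhost;metadata source=0;remarks=true;extended metadata=true
```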
With SquirrelSQL 3.9 and JtOpen, you have to select two options in the driver properties:
remarks = true
extended metadata = true
In the new session configuration, check SQL / Display metadata, and voilà:
Checked with V7R1, with DDS comments or SQL Labels
ODBC/JDBC use a different set of catalogs, located in the SYSIBM schema:
sysibm.sqltables
sysibm.sqlcolumns
etc.
ODBC and JDBC Catalog views
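A hedged sketch of reading the descriptions through those ODBC/JDBC catalog views. Note that these views use the ODBC-style column name TABLE_SCHEM (singular) and expose the text as REMARKS; verify the exact column names on your release:

```sql
SELECT TABLE_SCHEM, TABLE_NAME, COLUMN_NAME, REMARKS
FROM SYSIBM.SQLCOLUMNS
WHERE TABLE_SCHEM = 'SCHEMA1'
  AND TABLE_NAME = 'TABLE1';
```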

In PostgreSQL 8.4, how to get the number of requests to each table?

In PostgreSQL 8.4, how can I get the number of requests made to each table?
For example, what I want is something like this:

Table_Name   request_time
person       50
department   20

Please give me some guidance.
You want to use pg_stat_statements and/or csv format logging with log_statement = all or log_min_duration_statement = 0.
There is no way to get statement statistics in a queryable form retroactively. pgFouine can help analyse logs, but only if you've configured PostgreSQL to produce detailed logs.
You probably also want to read about the statistics collector and associated views, which will help provide things like table- and index-utilisation data.
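The statistics collector views mentioned above can already give an approximate answer to the per-table question. A minimal sketch counting scans (sequential plus index) per table; note this counts scans, not individual statements, and resets with the statistics:

```sql
SELECT
  relname AS table_name,
  seq_scan + COALESCE(idx_scan, 0) AS scan_count  -- approximate "request" count
FROM pg_stat_user_tables
ORDER BY scan_count DESC;
```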

Running a pre-indexer script in Sphinx to copy data from one DB to another

I'm using Sphinx to index a few tables on a Postgres DB. It seems that Sphinx has a pre-index query that you can execute:
http://sphinxsearch.com/docs/manual-0.9.9.html#conf-sql-query-pre
Can this be used to bulk copy-insert data from another DB over a DB link into the DB that I'm indexing?
INSERT INTO table1 SELECT * FROM dblink('dbname=db2', 'SELECT * FROM table2 WHERE id = 10') AS t1(id integer, name varchar, address varchar);
Most of the examples I can see are used for setting session parameters, so I'm somewhat doubtful.
Cheers!
Yes, sql_query_pre can be used to do anything that SQL can do with your data.
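A hedged sphinx.conf sketch of that setup; the source name, credentials, and table names are placeholders. Multiple sql_query_pre lines, if given, run in order before sql_query:

```
source src_table1
{
    type      = pgsql
    sql_host  = localhost
    sql_user  = sphinx
    sql_pass  = secret
    sql_db    = db1

    # runs once before the main indexing query
    sql_query_pre = INSERT INTO table1 SELECT * FROM dblink('dbname=db2', 'SELECT * FROM table2 WHERE id = 10') AS t1(id integer, name varchar, address varchar)

    sql_query = SELECT id, name, address FROM table1
}
```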