Integrate Redash metadata with Snowflake

I have integrated Redash with Snowflake as one of the data sources.
We get values such as query_id, query, and user_name in the Redash metadata table events, which gives us all the metadata related to queries executed on Redash historically.
Snowflake also stores query metadata in QUERY_HISTORY, which holds information about queries executed on Snowflake historically. It has user_name and role_name fields, but both store the name of the Redash user that is associated with Snowflake.
The information about which logged-in Redash user executed the query is missing. Also, there is no common identifier (such as a query id) in either set of metadata to map the queries in the Redash metadata to the queries executed on Snowflake.
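For illustration, the closest I can get is a heuristic match on the query text itself. A minimal sketch against Snowflake's ACCOUNT_USAGE QUERY_HISTORY view, where the literal text is a placeholder for a query copied from the Redash events table:

SELECT query_id, user_name, start_time
FROM snowflake.account_usage.query_history
WHERE query_text = '<query text copied from Redash events>'  -- placeholder
ORDER BY start_time DESC;

Even with such a match, this only identifies the Snowflake-side execution; it still does not recover which logged-in Redash user ran it.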
Let me know if I am missing something or if there is some way we could achieve this.
Redash version: 7.0.0

Related

How to set parameters in SQL Server table from Copy Data Activity - Source: XML / Sink: SQL Server Table / Mapping: XML column

I have a question; hopefully someone in the forum can give some help here. I am able to pull data from a SOAP API call into a SQL Server table (an xml data type field, actually) via a Copy Data activity. The pipeline that runs this process is metadata driven, so how could I write other parameters to the same SQL Server table for the same run? I am using a Copy Data activity to load XML data into the SQL Server table, but in the Mapping tab I am not able to select other parameters in order to point them to other SQL table columns.
In addition, I am using a ForEach activity so that the Copy Data activity iterates over several values of one column in the SQL Server table.
I would appreciate any advice on this.
Thanks
David
Thank you for your interest; I will try to be more explicit with this image. Hopefully it clarifies things a little bit. Given the current scenario, how could I pass the StoreId and CustomerNumber parameters to the table Stage.XmlDataTable?
Bear in mind that in the mapping step I am only able to map the XML data from the current API call and write it into the Stage.XmlDataTable XmlData column.
Thanks in advance David
You can add your parameters using Additional Columns in the Copy data activity source.
When you import the schema in the Mapping tab, you can see the additional columns added in the source.
Refer to the Microsoft documentation on adding additional columns during a copy for more details.
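For reference, a minimal sketch of what the Copy data activity source could look like with two additional columns. additionalColumns is the documented property; the @item() expressions are assumptions based on your ForEach iterating over rows that carry StoreId and CustomerNumber:

"source": {
    "type": "XmlSource",
    "additionalColumns": [
        { "name": "StoreId", "value": { "value": "@item().StoreId", "type": "Expression" } },
        { "name": "CustomerNumber", "value": { "value": "@item().CustomerNumber", "type": "Expression" } }
    ]
}

After re-importing the schema in the Mapping tab, the two new source columns can be mapped to the corresponding Stage.XmlDataTable columns.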

How to execute a GraphQL query for a specific schema in Hasura?

As can be seen in the following screenshot, the current project database (PostgreSQL),
named default, has these 4 schemas: public, appcompany1, appcompany2 and appcompany3.
They share some common tables. Right now, when I want to fetch data for customers, I write a query like this:
query getCustomerList {
  customer {
    customer_id
    ...
    ...
  }
}
And it fetches the required data from the public schema.
But according to the requirements, depending on user interactions in the front-end, that query will be executed against appcompanyN (N=1,2,3,..., any positive integer). How do I achieve this goal?
NOTE: Whenever the user creates a new company, a new schema is created for that company. So the total number of schemas is not limited to 4.
I suspect that you see a problem where none actually exists.
Everything is much simpler than it may seem.
A. Where are all those tables?
There are a lot of schemas with identical (or almost identical) objects inside them.
All tables are registered in Hasura.
Hasura can't register different tables with the same name, so by default the names will be [schema_name]_[table_name] (except for public).
So the table customer will be registered as:
customer (from public)
appcompany1_customer
appcompany2_customer
appcompany3_customer
It's possible to customize the entity name in the GraphQL schema with "Custom GraphQL Root Fields".
B. The problem
But according to the requirements, depending on user interactions in the front-end, that query will be executed against appcompanyN (N=1,2,3,..., any positive integer). How do I achieve this goal?
There are identical objects that differ only by their schema-name prefix.
So the solutions are trivial:
1. Dynamic GraphQL query
The application stores templates of GraphQL queries and replaces the prefix with the real schema name before making the request.
E.g.
query getCustomerList {
  [schema]_customer {
    customer_id
  }
}
Substitute [schema] with appcompany1, appcompany2, ..., appcompanyZ and execute.
2. SQL view for all data
If the tables are 100% identical, then it's possible to create a SQL view like:
CREATE VIEW all_customers
AS
SELECT 'public' AS schema, * FROM public.customer
UNION ALL
SELECT 'appcompany1' AS schema, * FROM appcompany1.customer
UNION ALL
SELECT 'appcompany2' AS schema, * FROM appcompany2.customer
UNION ALL
....
SELECT 'appcompanyZ' AS schema, * FROM appcompanyZ.customer
This way there is no need for a dynamic query and no need to register all objects in all schemas.
You only need to register the view with the combined data and use one query:
query getCustomerList($schema: String) {
  all_customers(where: {schema: {_eq: $schema}}) {
    customer_id
  }
}
As for both solutions: it's hard to call them elegant.
I dislike them both myself ;)
So decide for yourself which is more suitable in your case.

"*" coming in place of keys when connecting Saiku with Mongo using Apache Drill

I am using Apache Drill in embedded mode, and I am able to connect to Mongo and query it in Drill successfully.
However, when I create a schema in the Saiku schema designer using the driver "org.apache.drill.jdbc.Driver" and the URL "jdbc:drill:drillbit=hostname:31010", the connection is successful and all collections are fetched and shown as tables in Saiku, but "*" appears in place of the column names, and the actual column names are not shown.
I don't know what I am missing.
I figured out the solution and am posting it in case anyone could benefit. I had created a view in Drill with SELECT * FROM the table. When I created the view with an explicit column list (SELECT col1, col2, ... FROM the table), the issue was resolved.
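For anyone hitting the same thing, a minimal sketch of the fix, assuming a hypothetical Mongo database sales with a collection customers (all names here are placeholders; dfs.tmp is Drill's default writable workspace):

CREATE OR REPLACE VIEW dfs.tmp.customers_view AS
SELECT _id, name, city
FROM mongo.sales.customers;

With the columns enumerated explicitly, Saiku sees concrete column names instead of "*".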

VSTS Get ID of Stored Queries

I am trying to execute a VSTS stored query using WorkItemTrackingHttpClient.
The stored query is identified by its ID, and there are code samples to programmatically get this ID. However, I can't seem to figure out how to get this ID within the VSTS online view. Clicking on the query lists the work items returned by it, but the query ID isn't listed anywhere. Is this due to the privileges of my authentication, or am I overlooking something?
You can get the query ID in the URL:
Select a query
The URL format will be like: https://[xxx].visualstudio.com/[project]/_queries?id=effb4d62-1b9b-42e9-af7c-dbef725fca4a&_a=query.
The id is the value of the id parameter.
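If you would rather enumerate stored queries programmatically, the work item tracking REST API (which WorkItemTrackingHttpClient wraps) also returns the IDs. A sketch, with the account and project as placeholders and the api-version an assumption for your VSTS instance:

GET https://[xxx].visualstudio.com/[project]/_apis/wit/queries?$depth=1&api-version=4.1

Each query and query folder in the response carries an id field, which is the same GUID that appears in the URL.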

How to access Library, File, and Field descriptions in DB2?

I would like to write a query against the IBM DB2 system catalog (e.g. SYSIBM) that exports the following:
LIBRARY_NAME, LIBRARY_DESC, FILE_NAME, FILE_DESC, FIELD_NAME, FIELD_DESC
I can access the descriptions via the UI, but wanted to generate a dynamic query.
Thanks.
Along with SYSTABLES and SYSCOLUMNS, there is also a SYSSCHEMAS which appears to contain the data you need. Please note that accessing this information through QSYS2 will restrict rows returned to those objects with which you have some access - the SYSIBM schema appears to disregard this (check the reference - for V6R1 it's about page 1267).
You also shouldn't need to retrieve this with a dynamic query - static with host variables (if necessary) will work just fine.
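A sketch of such a query on DB2 for i, joining the three catalogs. The schema_text, table_text and column_text description columns are from memory, so verify them against the reference; MYLIB is a hypothetical library name:

SELECT s.schema_name AS library_name,
       s.schema_text AS library_desc,
       t.table_name  AS file_name,
       t.table_text  AS file_desc,
       c.column_name AS field_name,
       c.column_text AS field_desc
  FROM qsys2.sysschemas s
  JOIN qsys2.systables  t ON t.table_schema = s.schema_name
  JOIN qsys2.syscolumns c ON c.table_schema = t.table_schema
                         AND c.table_name  = t.table_name
 WHERE s.schema_name = 'MYLIB'  -- hypothetical library

If this runs as a static statement, 'MYLIB' can just as easily be a host variable.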