Azure Log Analytics for Postgres Flexible Server

I'm just trying to use the pre-existing "Slowest queries - top 5" query from Azure Log Analytics for Postgres Flexible Server. The provided query is:
// Slowest queries
// Identify top 5 slowest queries.
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DBFORPOSTGRESQL"
| where Category == "QueryStoreRuntimeStatistics"
| where user_id_s != "10" //exclude azure system user
| summarize avg(todouble(mean_time_s)) by event_class_s, db_id_s, query_id_s
| top 5 by avg_mean_time_s desc
This query results in the error:
'where' operator: Failed to resolve column or scalar expression named 'user_id_s'
If the issue persists, please open a support ticket. Request id: XXXX
I am guessing that something is not configured correctly for the user_id_s column to be available. Any assistance is appreciated.

I expect you are checking that user_id_s is not equal to the integer value 10; in your KQL query you have user_id_s != "10".
Thanks @venkateshdodda-msft, I am adding your suggestion to help fix the issue.
If you are comparing against an integer in KQL, make sure to remove the double quotes:
// comparing as an integer
| where user_id_s != 10
Or convert the value to a string and compare it that way:
// converting to a string
| extend user_id_s = tostring(Properties.user_id_s)
| where user_id_s !in ('10')
Modified KQL Query
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DBFORPOSTGRESQL"
| where Category == "QueryStoreRuntimeStatistics"
// comparing as an integer
| where user_id_s != 10 // exclude azure system user
| summarize avg(todouble(mean_time_s)) by event_class_s, db_id_s, query_id_s
| top 5 by avg_mean_time_s desc
Reference:
Operator failed to resolve table or column expression
Converting integer to string

Related

How to use dynamic table name for sub query where the dynamic value coming from its own main query in PostgreSQL?

I have formed this query to get the desired output mentioned below:
select tbl.id, tbl.label, tbl.input_type, tbl.table_name,
       case when tbl.input_type = 'dropdown' or tbl.input_type = 'searchable-dropdown'
            then (select json_agg(opt) from tbl.table_name as opt) end as options
from mst_config as tbl;
I want output like below:
id | label | input_type | table_name | options
----+----------------------------------------------------+---------------------+-------------------------+-----------------------------------------------------------
1 | Gender | dropdown | mst_gender | [{"id":1,"label":"MALE"},
| | | | {"id":2,"label":"FEMALE"}]
2 | SS | dropdown | mst_ss | [{"id":1,"label":"something"},
| | | | {"id":2,"label_en":"something"}]
But I'm facing a problem while using
(select json_agg(opt) from tbl.table_name as opt)
In the part "tbl.table_name", I wanted to use the column value as a dynamic table name, but it's not working.
Then I searched a lot and found something like EXECUTE format('select * from %s', table_name), where table_name is the dynamic table name; I even tried the same with a Postgres function.
But I ran into an issue with the format approach as well: the table name has to come from the main query's own row value rather than from a variable that is already bound, so that did not work either.
I would really appreciate it if anyone could help me out with this. If there are other ways to achieve this output, please point me to those as well.
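The table name can only be resolved dynamically inside PL/pgSQL, so one possible approach (a sketch, not from the original thread; get_options_json is a hypothetical helper name) is a function that runs EXECUTE format() and is called per row from the main query:
-- Hypothetical helper: aggregates all rows of the given table into a JSON array.
-- %I quotes the table name safely as an identifier.
create or replace function get_options_json(p_table text)
returns json
language plpgsql
as $$
declare
  result json;
begin
  execute format('select json_agg(t) from %I as t', p_table) into result;
  return result;
end;
$$;

-- The main query can then pass its own column value as the dynamic table name:
select tbl.id, tbl.label, tbl.input_type, tbl.table_name,
       case when tbl.input_type in ('dropdown', 'searchable-dropdown')
            then get_options_json(tbl.table_name)
       end as options
from mst_config as tbl;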

How to push data into a "JSON" data type column in Postgresql

I have the following POSTGRESQL table
id | name | email | weightsovertime | joined
20 | Le | le@gmail.com | [] | 2018-06-09 03:17:56.718
I would like to know how to push data (JSON object or just object) into the weightsovertime array.
And since I am making a back-end server, I would like to know the KnexJS query that does this.
I tried the following syntax but it does not work
update tableName set weightsovertime = array_append(weightsovertime,{"weight":55,"date":"2/3/96"}) where id = 20;
Thank you
For anyone who happens to land on this question, the solution using Knex.js is:
knex('table')
  .where('id', id)
  .update({
    // append the stringified array to the existing jsonb column
    arrayColumn: knex.raw(`arrayColumn || ?::jsonb`, JSON.stringify(arrayToAppend))
  })
This will produce a query like:
update tableName
set weightsovertime = weightsovertime || $1::jsonb
where id = 20;
Where $1 will be replaced by the value of JSON.stringify(arrayToAppend). Note that this conversion is required because of a limitation of the Postgres driver.
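Applied to the table in the question, the call might look like this (a sketch; 'tableName' stands in for the actual table, which the question doesn't name):
// Append one weight entry to the jsonb array column for row 20.
// jsonb || jsonb concatenates arrays, so the appended value is wrapped in [].
await knex('tableName')
  .where('id', 20)
  .update({
    weightsovertime: knex.raw('weightsovertime || ?::jsonb',
      JSON.stringify([{ weight: 55, date: '1996-03-02' }]))
  });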
array_append is for native arrays - a JSON array inside a jsonb column is something different.
Assuming your weightsovertime is a jsonb (or json) column, you have to use the concatenation operator ||, e.g.:
update the_table
set weightsovertime = weightsovertime || '[{"weight": 55, "date": "1996-03-02"}]'
where id = 20;
Online example: http://rextester.com/XBA24609

Native Query (JPA) takes long with date comparison

Has anyone got any idea how I could optimize this query so that it runs faster? Right now it takes up to 30 seconds to retrieve around 3k "containers", and that's way too long. It is foreseen that it will have to retrieve around 1 million records.
Query query = em().createNativeQuery(
    "SELECT * FROM CONTAINER WHERE TO_CHAR(CREATION_DATE, 'YYYY-MM-DD') >= TO_CHAR(:from, 'YYYY-MM-DD') " +
    "AND TO_CHAR(CREATION_DATE, 'YYYY-MM-DD') <= TO_CHAR(:to, 'YYYY-MM-DD')", Container.class);
query.setParameter("from", from);
query.setParameter("to", to);
return query.getResultList();
JPA 2.0, Oracle DB
EDIT: I've got an index on the CREATION_DATE column:
CREATE INDEX IDX_CONTAINER_CREATION_DATE
ON CONTAINER (CREATION_DATE);
It's not a named query because the TO_CHAR function doesn't seem to be supported by JPA 2.0, and I've read that an index should make the query faster.
My explain plan (it is still doing a full table scan for some reason instead of using the index):
---------------------------------------
| Id | Operation | Name |
---------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS FULL| CONTAINER |
---------------------------------------
One fix I don't like:
I've done the following:
TypedQuery<Container> query = em().createQuery(
    "SELECT NEW Container(c.barcode, c.createdBy, c.creationDate, c.owner, c.sequence, c.containerSizeBarcode, c.a, c.b, c.c) " +
    "FROM Container c WHERE c.creationDate >= :from AND c.creationDate <= :to", Container.class);
and I've added an absurdly long constructor to Container, which fixes the loading times. But this is really ugly and I don't want it, to be honest. Does anyone have any other suggestions?
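The full table scan is most likely caused by applying TO_CHAR to CREATION_DATE itself, which makes the predicate non-sargable, so Oracle cannot use IDX_CONTAINER_CREATION_DATE. A minimal sketch of a rewrite, assuming the same day-granularity semantics as the TO_CHAR version:
// Compare the raw column against day-truncated parameters instead of
// wrapping the column in TO_CHAR, so the index on CREATION_DATE stays usable.
Query query = em().createNativeQuery(
    "SELECT * FROM CONTAINER " +
    "WHERE CREATION_DATE >= TRUNC(:from) " + // start of the :from day
    "AND CREATION_DATE < TRUNC(:to) + 1",    // end of the :to day (exclusive)
    Container.class);
query.setParameter("from", from);
query.setParameter("to", to);
return query.getResultList();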

find pattern relationships using rest cypher

How can I find pattern relationships using REST Cypher?
My query running in the terminal:
MATCH (n)<-[:DEPENDS_ON*]-(dependent)
RETURN n.host AS Host, count(DISTINCT dependent) AS Dependents
ORDER BY Dependents DESC LIMIT 1
The output is:
+--------------------+
| Host | Dependents |
+--------------------+
| "SAN" | 20 |
+--------------------+
whereas the equivalent query with REST is:
String query = "{\"query\" : \"MATCH (website)<-[rel]-(dependent) " +
"WHERE TYPE(rel) = {rtype} RETURN website.host as Host," +
"count(DISTINCT dependent) AS Dependents ORDER BY Dependents DESC LIMIT 1" +
" \", \"params\" : {\"rtype\" : \"DEPENDS_ON*\"}}";
and the output is empty (no records)!
Any help appreciated.
P.S. When we don't use "*" in the query, everything is OK, i.e. both queries give the same result.
In the second query you are passing the relationship type as "DEPENDS_ON*" which is incorrect since the asterisk is being included.
The asterisk is for allowing arbitrary length paths for the specified relationship but is not part of the type.
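Following that logic, a sketch of a corrected payload inlines the variable-length pattern in the query string instead of passing it as a parameter:
String query = "{\"query\" : \"MATCH (website)<-[:DEPENDS_ON*]-(dependent) " +
        "RETURN website.host AS Host, " +
        "count(DISTINCT dependent) AS Dependents ORDER BY Dependents DESC LIMIT 1\", " +
        "\"params\" : {}}";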

Selection formula excluding rows with columns having null values

I have a strange issue with a Crystal Reports report. In the Selection Formula I test two fields. The test is as simple as this: {field_City} = 'Paris' OR {field_Country} = 'France'.
This is a sample of the data in my table:
|---------------|---------------|---------------|
| ID_Record | Country | City |
|---------------|---------------|---------------|
| 1 | null | Paris |
|---------------|---------------|---------------|
| 2 | France | null |
|---------------|---------------|---------------|
| 3 | France | Paris |
|---------------|---------------|---------------|
The result of the selection should be all 3 records; however, it excludes the first 2 rows, where there is a null value in one of the columns.
Then I changed the Selection Formula like this to account for null values too: ({field_City} = 'Paris' AND (isnull({field_Country}) OR not(isnull({field_Country})))) OR ({field_Country} = 'France' AND (isnull({field_City}) OR not(isnull({field_City})))), but I am still getting only the last record!
To make sure my code is correct, I generated the SQL query via the 'Show SQL Query' option in CR and added a WHERE clause with the same condition I had put in the Selection Formula, and... it gave me the 3 records!
Unfortunately I can't work with the SQL query; I have to find out why the formula excludes the records that have a null value in one of the columns. I hope you can help me. Thanks a lot!
This is the solution: ((isnull({field_Country}) AND {field_City} = 'Paris') OR (isnull({field_City}) AND {field_Country} = 'France') OR (not(isnull({field_Country})) AND {field_City} = 'Paris') OR (not(isnull({field_City})) AND {field_Country} = 'France')). Thank you so much Craig!
You need to test for null values first:
( Not(Isnull({field_Country})) AND {field_Country}='France' )
OR
( Isnull({field_Country}) AND {field_City}='Paris' )