Find pattern relationships using REST Cypher

How can I find pattern relationships using REST Cypher?
My query, run in the terminal:
MATCH (n)<-[:DEPENDS_ON*]-(dependent) RETURN n.host as Host,
count(DISTINCT dependent) AS Dependents ORDER BY Dependents
DESC LIMIT 1
The output is:
+--------------------+
| Host | Dependents |
+--------------------+
| "SAN" | 20 |
+--------------------+
whereas the equivalent query with REST is:
String query = "{\"query\" : \"MATCH (website)<-[rel]-(dependent) " +
"WHERE TYPE(rel) = {rtype} RETURN website.host as Host," +
"count(DISTINCT dependent) AS Dependents ORDER BY Dependents DESC LIMIT 1" +
" \", \"params\" : {\"rtype\" : \"DEPENDS_ON*\"}}";
and the output is empty (no records)!
Any help is appreciated.
P.S. When we don't use "*" in the query, everything is fine, i.e. both queries give the same result.

In the second query you are passing the relationship type as "DEPENDS_ON*", which is incorrect, since the asterisk is being included in the type.
The asterisk allows arbitrary-length paths for the specified relationship, but it is not part of the type itself.
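Since the variable-length expansion belongs to the pattern rather than to the relationship type, one option is to write the type and asterisk directly into the Cypher string instead of parameterizing it (a variable-length relationship type cannot be supplied as a query parameter). A minimal sketch of the corrected REST payload, following the same string-building style as the question:

```java
// Sketch: the asterisk goes in the pattern itself, so the payload
// no longer needs the "rtype" parameter at all.
String query = "{\"query\" : \"MATCH (n)<-[:DEPENDS_ON*]-(dependent) " +
        "RETURN n.host as Host, " +
        "count(DISTINCT dependent) AS Dependents " +
        "ORDER BY Dependents DESC LIMIT 1\", " +
        "\"params\" : {}}";
```

Posting this body to the same Cypher REST endpoint should then return the same single row as the terminal query.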


Querying jsonb field with #> through Postgrex adapter

I'm trying to query a jsonb field via the Postgrex adapter; however, I receive errors I cannot understand.
Notification schema
def all_for(user_id, external_id) do
  from(n in __MODULE__,
    where: n.to == ^user_id and fragment("? #> '{\"external_id\": ?}'", n.data, ^external_id)
  )
  |> order_by(desc: :id)
end
It generates the following SQL:
SELECT n0."id", n0."data", n0."to", n0."inserted_at", n0."updated_at" FROM "notifications"
AS n0 WHERE ((n0."to" = $1) AND n0."data" #> '{"external_id": $2}') ORDER BY n0."id" DESC
and then I receive the following error:
** (Postgrex.Error) ERROR 22P02 (invalid_text_representation) invalid input syntax for type json. If you are trying to query a JSON field, the parameter may need to be interpolated. Instead of
p.json["field"] != "value"
do
p.json["field"] != ^"value"
query: SELECT n0."id", n0."data", n0."to", n0."inserted_at", n0."updated_at" FROM "notifications" AS n0 WHERE ((n0."to" = $1) AND n0."data" #> '{"external_id": $2}') ORDER BY n0."id" DESC
Token "$" is invalid.
(ecto_sql 3.9.1) lib/ecto/adapters/sql.ex:913: Ecto.Adapters.SQL.raise_sql_call_error/1
(ecto_sql 3.9.1) lib/ecto/adapters/sql.ex:828: Ecto.Adapters.SQL.execute/6
(ecto 3.9.2) lib/ecto/repo/queryable.ex:229: Ecto.Repo.Queryable.execute/4
(ecto 3.9.2) lib/ecto/repo/queryable.ex:19: Ecto.Repo.Queryable.all/3
↳ :erl_eval.do_apply/6, at: erl_eval.erl:680
However, if I copy-paste the generated SQL into a psql console and run it, it succeeds.
SELECT n0."id", n0."data", n0."to", n0."inserted_at", n0."updated_at" FROM "notifications" AS n0 WHERE ((n0."to" = 233) AND n0."data" #> '{"external_id": 11}') ORDER BY n0."id" DESC
notifications-# ;
id | data | to | inserted_at | updated_at
----+---------------------+-----+---------------------+---------------------
90 | {"external_id": 11} | 233 | 2022-12-15 14:07:44 | 2022-12-15 14:07:44
(1 row)
data is jsonb column
Column | Type | Collation | Nullable | Default
-------------+--------------------------------+-----------+----------+-------------------------------------------
data | jsonb | | | '{}'::jsonb
What am I missing in my Elixir notification query code?
Searching for a solution, I only came across using a raw SQL statement, as I couldn't figure out what was wrong with my query when it was passed through Postgrex.
So, as a solution, I found the following:
def all_for(user_id, external_ids) do
  {:ok, result} =
    Ecto.Adapters.SQL.query(
      Notifications.Repo,
      search_by_external_id_query(user_id, external_ids)
    )

  Enum.map(result.rows, &Map.new(Enum.zip(result.columns, &1)))
end

defp search_by_external_id_query(user_id, external_id) do
  """
  SELECT * FROM "notifications" AS n0 WHERE ((n0."to" = #{user_id})
  AND n0.data #> '{\"external_id\": #{external_id}}')
  ORDER BY n0."id" DESC
  """
end
But be aware: as a result I'm receiving a list of maps rather than Ecto.Schema structs, as I would have with an Ecto.Query through Postgrex.

Azure Log Analytics for Postgres Flexible Server

I'm trying to use the pre-existing "Slowest queries - top 5" query from Azure Log Analytics for Postgres Flexible Server. The provided query is:
// Slowest queries
// Identify top 5 slowest queries.
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DBFORPOSTGRESQL"
| where Category == "QueryStoreRuntimeStatistics"
| where user_id_s != "10" //exclude azure system user
| summarize avg(todouble(mean_time_s)) by event_class_s , db_id_s ,query_id_s
| top 5 by avg_mean_time_s desc
This query results in the error:
'where' operator: Failed to resolve column or scalar expression named 'user_id_s'
If the issue persists, please open a support ticket. Request id: XXXX
I am guessing that something is not configured in order to utilize the user_id_s column. Any assistance is appreciated.
I expect you are checking that the integer value 10 is not equal to user_id_s;
in your KQL query it appears as user_id_s != "10".
Thanks @venkateshdodda-msft, I am adding your suggestion to help fix the issue.
If you are comparing an integer in KQL, make sure to remove the double quotes:
// using as an integer
| where user_id_s != 10
Or convert the value to a string and compare it as one:
// converting into string
| extend UserId = tostring(user_id_s)
| where UserId !in ('10')
Modified KQL query:
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DBFORPOSTGRESQL"
| where Category == "QueryStoreRuntimeStatistics"
| where user_id_s != 10 // using as an integer; exclude azure system user
| summarize avg(todouble(mean_time_s)) by event_class_s, db_id_s, query_id_s
| top 5 by avg_mean_time_s desc
Reference:
Operator failed to resolve table or column expression
Converting integer to string

How to create a left join query with Sails.js

I would like to do a left join query in Sails.js. I think I should use populate.
I have three models
caracteristique {
  id,
  name,
  races: {
    collection: 'race',
    via: 'idcaracteristique',
    through: 'racecaracteristique'
  }
}
race {
  id,
  name,
  caracteristiques: {
    collection: 'caracteristique',
    via: 'idrace',
    through: 'racecaracteristique'
  }
}
RaceCarecteristique {
  idrace: {
    model: 'race'
  },
  idcaracteristique: {
    model: 'caracteristique'
  },
  bonusracial: {
    type: 'number'
  }
}
My data are:
Table Caracteristiques
id | name
 1 | strength
 2 | dex
 3 | Charisme
Table Race
id | name
 1 | human
 2 | Org
Table RaceCarecteristique
idrace | idcaracteristique | bonusracial
     1 |                 2 | +2
This SQL request gives me, for human, all caracteristiques and, where one exists, the bonusracial:
'SELECT caracteristique.id, caracteristique.name, bonusracial
FROM caracteristique
LEFT OUTER JOIN (select idcaracteristique, bonusracial
from racecaracteristique
where idrace=$1 ) as q
ON q.idcaracteristique = caracteristique.id';
I get this result:
caracteristique.id | caracteristique.name | bonusracial
1 | strength | null
2 | dex      | 2
3 | Charisme | null
How do I use populate to do this?
When using a SQL-database adapter (MySQL, PostgreSQL, etc.) you can fall back on a method for performing actual, handwritten SQL statements. When all else fails, this might be your best bet to find an acceptable solution within the framework.
The .sendNativeQuery() method sends your parameterized SQL statement to the native driver, and responds with a raw, non-ORM-mangled result. Actual database-schema specific tables and columns appear in the result, so you need to be careful with changes to models etc. as they might change the schema in the backend database.
The method takes two parameters, the parameterized query, and the array of values to be inserted. The array is optional and can be omitted if you have no parameters to replace in the SQL statement.
Using your already parameterized query from above, I'm sending the query to fetch the data for an "org" (orc perhaps?) in the example below. See the docs linked at the bottom.
Code time:
let query = `
SELECT caracteristique.id, caracteristique.name, bonusracial
FROM caracteristique
LEFT OUTER JOIN (select idcaracteristique, bonusracial
from racecaracteristique
where idrace=$1 ) as q
ON q.idcaracteristique = caracteristique.id`;
var rawResult = await sails.sendNativeQuery(query, [ 2 ]);
console.log(rawResult);
Docs: .sendNativeQuery()

Native Query (JPA) takes long with date comparison

Has anyone got any idea how I could optimize this query so that it runs faster? Right now it takes up to 30 seconds to retrieve around 3k "containers", and that's way too long. It's foreseen that it will eventually have to retrieve around 1 million records.
Query query = em().createNativeQuery("SELECT * FROM CONTAINER where TO_CHAR(CREATION_DATE, 'YYYY-MM-DD') >= TO_CHAR(:from, 'YYYY-MM-DD') " +
"AND TO_CHAR(CREATION_DATE, 'YYYY-MM-DD') <= TO_CHAR(:to, 'YYYY-MM-DD') ", Container.class);
query.setParameter("from", from);
query.setParameter("to", to);
return query.getResultList();
JPA 2.0, Oracle DB
EDIT: I've got an index on the CREATION_DATE column:
CREATE INDEX IDX_CONTAINER_CREATION_DATE
ON CONTAINER (CREATION_DATE);
It's not a named query because the TO_CHAR function doesn't seem to be supported by JPA 2.0, and I've read that the query should be faster if there's an index.
My explain plan (still doing full table scan for some reason instead of using the index):
---------------------------------------
| Id | Operation | Name |
---------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS FULL| CONTAINER |
---------------------------------------
One fix I don't like:
I've done the following:
TypedQuery<Container> query = em().createQuery(
"SELECT NEW Container(c.barcode, c.createdBy, c.creationDate, c.owner, c.sequence, c.containerSizeBarcode, c.a, c.b, c.c) " +
"FROM Container c where c.creationDate >= :from AND c.creationDate <= :to", Container.class);
and I've added an absurdly long constructor to Container, and this fixes the loading times. But this is really ugly and, to be honest, I don't want it. Does anyone have any other suggestions?
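The full table scan in the explain plan is most likely because wrapping CREATION_DATE in TO_CHAR makes the predicate non-sargable, so Oracle cannot use the plain index on that column. A sketch of the native query with a direct, index-friendly date comparison (the EntityManager wiring is shown in comments, since it depends on the surrounding code):

```java
// Sketch: compare CREATION_DATE directly so Oracle can use
// IDX_CONTAINER_CREATION_DATE. TRUNC(:to) + 1 with a strict "<"
// keeps the whole end day inclusive, without TO_CHAR on the column.
String sql = "SELECT * FROM CONTAINER " +
        "WHERE CREATION_DATE >= TRUNC(:from) " +
        "AND CREATION_DATE < TRUNC(:to) + 1";
// Query query = em().createNativeQuery(sql, Container.class);
// query.setParameter("from", from, TemporalType.DATE);
// query.setParameter("to", to, TemporalType.DATE);
// return query.getResultList();
```

The key point is that the function is applied only to the parameters, never to the indexed column.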

Using wildcards in column value in LIKE condition

I have a situation where I need to find an entity matching a given filename. The filename is in this form:
filename1 = "ABCD_126518.pdf";
filename2 = "XYZ_32162.pdf";
In the Oracle DB, I have entities with filename_patterns like the following:
ID | filename_pattern
1 | ABCD_
2 | KLM
3 | XYZ_
I need to find the pattern ID that the given filename matches. In the given example it should be ID = 1 for filename1 and ID = 3 for filename2. What should the query look like in Java for the named query?
I need something like
SELECT p FROM FilenamePattern p WHERE p.filename_pattern || "%" LIKE :param;
We use Oracle DB and JPA 1.0.
How about:
SELECT p FROM FilenamePattern p WHERE :param LIKE CONCAT(p.filename_pattern, '%')
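Wired up from Java, that suggestion might look like the sketch below. Note that JPQL string literals take single quotes; the attribute name filename_pattern is taken from the question, and the EntityManager lines are commented out since they depend on the surrounding persistence setup:

```java
// Sketch: reversing the operands so the stored pattern becomes the
// LIKE pattern, with a '%' wildcard appended to it.
String jpql = "SELECT p FROM FilenamePattern p " +
        "WHERE :param LIKE CONCAT(p.filename_pattern, '%')";
// Query q = em.createQuery(jpql);              // JPA 1.0: untyped Query
// q.setParameter("param", "ABCD_126518.pdf"); // should match pattern ABCD_
// List<?> matches = q.getResultList();
```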