Query variables in Dgraph filter - dgraph

I am trying to use a variable (a scalar) in an @filter(ge(...)) call, but I run into an error.
Given the following query
{
  ua(func: uid(0xfb7f7)) {
    uid
    start_ua {
      sua as index
    }
    recorded_in {
      actions @filter(ge(index, sua)) {
        index
      }
    }
  }
}
I get the following error
{
  "errors": [
    {
      "code": "ErrorInvalidRequest",
      "message": "Some variables are defined but not used\nDefined:[sua]\nUsed:[]\n"
    }
  ],
  "data": null
}
Now if I remove the sua as ... and the @filter(...) from the query, everything works fine.
My Dgraph version is v1.0.13.
I tried replacing @filter(ge(index, sua)) with @filter(ge(index, val(sua))), but I still run into an error:
{
  "errors": [
    {
      "code": "ErrorInvalidRequest",
      "message": ": No value found for value variable \"sua\""
    }
  ],
  "data": null
}
What am I doing wrong?

Here's what the Dgraph docs say about value variables (emphasis added): https://docs.dgraph.io/query-language/#value-variables
Value variables store scalar values. Value variables are a map from the UIDs
of the enclosing block to the corresponding values.
It therefore only makes sense to use the values from a value variable in a
context that matches the same UIDs - if used in a block matching different
UIDs the value variable is undefined.
Here, start_ua and recorded_in are different subgraphs, so a variable defined in one is undefined in the other within the same query block.
What you can do is use multiple query blocks, since variables can be accessed across blocks:
{
  block1(func: uid(0xfb7f7)) {
    uid
    start_ua (first: 1) {
      sua as index
    }
  }

  block2(func: uid(0xfb7f7)) {
    recorded_in {
      actions @filter(ge(index, val(sua))) {
        index
      }
    }
  }
}
I also added (first: 1) to the start_ua predicate, so that at most one node is fetched and stored in the sua variable. If your data is already structured that way, that's not needed.
val(sua) gets the value of the variable sua.
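If it helps to see the wire format, here is a minimal Python sketch (not from the original post) of preparing the two-block query for Dgraph's HTTP endpoint. The URL is an assumption (Dgraph Alpha's default HTTP address); adjust it for your deployment.

```python
# Sketch: preparing the corrected two-block query for Dgraph's HTTP API.
# DGRAPH_QUERY_URL is an assumed default address, not from the original post.
DGRAPH_QUERY_URL = "http://localhost:8080/query"

query = """
{
  block1(func: uid(0xfb7f7)) {
    uid
    start_ua (first: 1) {
      sua as index
    }
  }
  block2(func: uid(0xfb7f7)) {
    recorded_in {
      actions @filter(ge(index, val(sua))) {
        index
      }
    }
  }
}
"""

def build_request(query_text):
    """Return (url, body) for POSTing a raw query to Dgraph's HTTP endpoint."""
    return DGRAPH_QUERY_URL, query_text.encode("utf-8")

url, body = build_request(query)
# POST `body` to `url` with urllib.request or requests in a real client.
```

The key point carried over from the answer: the variable is assigned in block1 and consumed via val(sua) in block2, so both blocks travel in one request.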

Related

"variable invitationId of type Int! is used in position expecting Int_comparison_exp"

I was trying the following query on the Hasura platform and it gave the error below. I found the solution, so I'm sharing it here.
variables:
{
  "invaId": 791
}
Query:
mutation UpdateQuery($status: String!, $invaId: Int!) {
  update_inva(_set: {status: $status}, where: {id: $invaId}) {
    affected_rows
  }
}
Output:
{
  "errors": [
    {
      "extensions": {
        "path": "$.selectionSet.update_inva.args.where.id",
        "code": "validation-failed"
      },
      "message": "variable invaId of type Int! is used in position expecting Int_comparison_exp"
    }
  ]
}
I hit this error after passing a variable of type Int!.
The problem is in the query. Look at the where clause:
where: {id: $invaId}
The where clause expects a comparison operator as well, e.g. _eq.
That was exactly what I was missing.
So I updated the query as below, and everything ran perfectly.
Query:
mutation UpdateQuery($status: String!, $invaId: Int!) {
  update_inva(_set: {status: $status}, where: {id: {_eq: $invaId}}) {
    affected_rows
  }
}
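For completeness, a sketch (in Python, not from the original post) of the JSON body a client would send to Hasura's GraphQL endpoint. The variable values here are made up; the point is that $invaId reaches the where clause wrapped in an _eq comparison object.

```python
import json

# Sketch: the HTTP payload for the corrected mutation. Variable values
# are illustrative assumptions, not from the original post.
mutation = """
mutation UpdateQuery($status: String!, $invaId: Int!) {
  update_inva(_set: {status: $status}, where: {id: {_eq: $invaId}}) {
    affected_rows
  }
}
"""

payload = {
    "query": mutation,
    "variables": {"status": "accepted", "invaId": 791},
}

body = json.dumps(payload)  # POST this JSON to the GraphQL endpoint
```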

"must have a selection of subfields. Did you mean \"createEvent { ... }\"?" [graphql] [duplicate]

Hi, I am trying to learn the GraphQL language. I have the below snippet of code.
// Welcome to Launchpad!
// Log in to edit and save pads, run queries in GraphiQL on the right.
// Click "Download" above to get a zip with a standalone Node.js server.
// See docs and examples at https://github.com/apollographql/awesome-launchpad

// graphql-tools combines a schema string with resolvers.
import { makeExecutableSchema } from 'graphql-tools';

// Construct a schema, using GraphQL schema language
const typeDefs = `
  type User {
    name: String!
    age: Int!
  }

  type Query {
    me: User
  }
`;

const user = { name: 'Williams', age: 26 };

// Provide resolver functions for your schema fields
const resolvers = {
  Query: {
    me: (root, args, context) => {
      return user;
    },
  },
};

// Required: Export the GraphQL.js schema object as "schema"
export const schema = makeExecutableSchema({
  typeDefs,
  resolvers,
});

// Optional: Export a function to get context from the request. It accepts two
// parameters - headers (lowercased http headers) and secrets (secrets defined
// in secrets section). It must return an object (or a promise resolving to it).
export function context(headers, secrets) {
  return {
    headers,
    secrets,
  };
}

// Optional: Export a root value to be passed during execution
// export const rootValue = {};

// Optional: Export a root function, that returns root to be passed
// during execution, accepting headers and secrets. It can return a
// promise. rootFunction takes precedence over rootValue.
// export function rootFunction(headers, secrets) {
//   return {
//     headers,
//     secrets,
//   };
// };
Request:
{
  me
}
Response:
{
  "errors": [
    {
      "message": "Field \"me\" of type \"User\" must have a selection of subfields. Did you mean \"me { ... }\"?",
      "locations": [
        {
          "line": 4,
          "column": 3
        }
      ]
    }
  ]
}
Does anyone know what I am doing wrong? How can I fix it?
From the docs:
A GraphQL object type has a name and fields, but at some point those
fields have to resolve to some concrete data. That's where the scalar
types come in: they represent the leaves of the query.
GraphQL requires that you construct your queries in a way that only returns concrete data. Each field has to ultimately resolve to one or more scalars (or enums). That means you cannot just request a field that resolves to a type without also indicating which fields of that type you want to get back.
That's what the error message you received is telling you -- you requested a User type, but you didn't tell GraphQL at least one field to get back from that type.
To fix it, just change your request to include name like this:
{
  me {
    name
  }
}
... or age. Or both. You cannot, however, request a specific type and expect GraphQL to provide all the fields for it -- you will always have to provide a selection (one or more) of fields for that type.
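The rule the validator enforces can be mimicked in a few lines. This is an illustrative Python sketch only (the schema/selection encoding is ad hoc, not graphql-js internals): object-typed fields must carry a sub-selection, and only scalar fields may be leaves.

```python
# Ad-hoc sketch of GraphQL's "leaf field selections" rule.
# Schema: map of type name -> {field name -> field type name}.
SCALARS = {"String", "Int"}

schema = {
    "Query": {"me": "User"},
    "User": {"name": "String", "age": "Int"},
}

def check(type_name, selection):
    """Yield error strings for object fields selected without subfields."""
    for field, subsel in selection.items():
        field_type = schema[type_name][field]
        if field_type in SCALARS:
            continue  # scalar leaf: fine
        if not subsel:
            yield (f'Field "{field}" of type "{field_type}" '
                   'must have a selection of subfields.')
        else:
            yield from check(field_type, subsel)

# { me }            -> invalid: User is an object type, no subfields selected
errors = list(check("Query", {"me": None}))
# { me { name } }   -> valid: the selection bottoms out at a scalar
ok = list(check("Query", {"me": {"name": None}}))
```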

YANG - Modeling non-mandatory containers

Currently I am working with YANG as part of a (legacy) Python project.
I am somewhat stuck on the task of defining a schema, which shall then be used to verify data organized as a Python dictionary.
If possible, I would "like" to keep the current structure, since a lot of the codebase uses this data.
An "unaltered" piece of data:
"namespace": { # Mandatory
"management": { # Optional
"interfaces": { # Mandatory
"m0": { # Optional
"leaf1": "..."
}
}
},
"benchmark": { # Optional
"interfaces": { # Mandatory
"b0": { # Optional
"leaf1": "...",
"leaf2": "..."
},
"b1": { # Optional
"leaf1": "...",
"leaf2": "..."
}
}
}
}
My problem is that everything marked as "optional" in the example would be modeled as a container, but it seems containers cannot be declared optional (i.e. mandatory false;) according to RFC 6020.
Therefore I defined a model using lists. That means some nodes of the Python dict (management, benchmark, m0, b0, b1) are now list elements and can no longer be accessed in the current fashion, e.g. data['namespace']['management']...
The modified example looks like this:
"namespace": [
{
"desc": "management",
"interfaces": [
{
"leaf1": "..."
}
]
},
{
"desc": "benchmark",
"interfaces": [
{
"leaf1": "...",
"leaf2": "..."
},
{
"leaf1": "...",
"leaf2": "..."
}
]
}
]
The describing (snippet from my current) YANG model:
list namespace {
    description "Namespace definitions.";
    key desc;
    leaf desc { type string; }
    uses leaf-definitions;

    list interfaces {
        key leaf1;
        uses leaf-definitions;
    }
}
The verification succeeds and converting the data itself is not a problem, but it results in a big pile of broken code.
This leads to my question(s):
Am I correct - are containers in YANG always mandatory?
Is there maybe another way to model this scenario? (Without breaking "too much")
I am very thankful for your input, since I am rather new to YANG!
Am I correct - are containers in YANG always mandatory?
Quite the contrary. They are always optional, unless they contain mandatory nodes (mandatory leaf, a list or leaf-list with min-elements > 0, etc.). In other words, containers inherit this property from their descendants. This of course only applies to non-presence containers. A presence container with mandatory children does not inherit this property, since that would defeat its purpose (presence). You probably missed the definition of a mandatory node in RFC6020:
A mandatory node is one of:
o  A leaf, choice, or anyxml node with a "mandatory" statement with the value "true".
o  A list or leaf-list node with a "min-elements" statement with a value greater than zero.
o  A container node without a "presence" statement, which has at least one mandatory node as a child.
This should already be helpful for your second question.
Is there maybe another way to model this scenario? (Without breaking "too much")
Abuse presence containers. They are always optional. You could also probably avoid using the lists by introducing some mandatory children to a non-presence container. Based on your initial data:
module mandatory-optional-branch {
    namespace "org:example:mandatory-optional-branch";
    prefix "mob";

    grouping leafs {
        leaf leaf1 { type string; }
        leaf leaf2 { type string; }
    }

    list namespace {  // mandatory
        config false;
        min-elements 1;
        max-elements 1;

        container management {  // optional by nature
            presence "I have mandatory children, but am not mandatory. Yay for me.
                Of course my presence should have some meaning.";
            list interfaces {  // mandatory
                min-elements 1;
                max-elements 1;
                container m0 {  // optional - no mandatory node children
                    leaf leaf1 { type string; }
                }
            }
        }

        container benchmark {  // optional by nature
            presence "Same as 'management' above.";
            list interfaces {  // mandatory
                min-elements 1;
                max-elements 1;
                container b0 {  // optional - no mandatory node children
                    uses leafs;
                }
                container b1 {  // optional - no mandatory node children
                    uses leafs;
                }
            }
        }
    }
}
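Since the data lives in a Python dict, the same constraints can be spot-checked in plain Python. This is an illustrative sketch mirroring the model above (not a generated validator): the management/benchmark branches may be absent, but when present they must carry a non-empty "interfaces" map (the min-elements 1 idea).

```python
# Hand-written sketch mirroring the YANG constraints on the original dict.
# Branch names ("management", "benchmark") come from the question; the
# function itself is a hypothetical helper, not part of any YANG tooling.
def validate_namespace(data):
    """Return a list of constraint violations for the namespace dict."""
    errors = []
    ns = data.get("namespace")
    if ns is None:
        errors.append("namespace is mandatory")
        return errors
    for branch in ("management", "benchmark"):  # presence containers: optional
        if branch not in ns:
            continue  # absent branch is fine
        interfaces = ns[branch].get("interfaces")
        if not interfaces:  # when present, interfaces needs >= 1 entry
            errors.append(f"{branch}: 'interfaces' must have at least one entry")
    return errors

good = {"namespace": {"management": {"interfaces": {"m0": {"leaf1": "x"}}}}}
bad = {"namespace": {"benchmark": {"interfaces": {}}}}
```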

Cypher Http request with match parameters

I'd like to execute an HTTP Cypher query with parameters, like:
{"statements":
[
{"statement":"MATCH path=(p:Person {props})-[*..100]->() RETURN [n in nodes(path)]",
"parameters":{"props":{"name":"Lucille"}}
}
]
}
However I get the following error: Parameter maps cannot be used in MATCH patterns (use a literal map instead, eg. \"{id: {param}.id}\").
I have no idea how to use a literal map here.
Thanks for your help !
You can either have:
{
  "statements": [
    {
      "statement": "MATCH path=(p:Person { name: {name} })-[*..100]->() ...",
      "parameters": { "name": "Lucille" }
    }
  ]
}
or MATCH path=(p:Person { name: {props}.name })-[*..100]->() ... while keeping your initial parameters.
The reason is given in this comment:
"Unlike properties in CREATE, MATCH requires the map to be a literal. This is because the property names must be known in advance, when the query is compiled, in order to efficiently plan its execution."
I think your query would become:
MATCH path=(p:Person {id: {props}.id })-[*..100]->()
RETURN [n in nodes(path)]
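Put together, the corrected request body looks like this Python sketch (not from the original post). The endpoint path for Neo4j's transactional HTTP API varies by server version, so treat any URL as an assumption; the point is that the property name is written literally in the pattern while only its value comes from the parameter map.

```python
import json

# Sketch of the corrected Cypher-over-HTTP body. The statement uses a
# literal map in MATCH ({ name: {props}.name }); only values are parameters.
payload = {
    "statements": [
        {
            "statement": (
                "MATCH path=(p:Person { name: {props}.name })-[*..100]->() "
                "RETURN [n in nodes(path)]"
            ),
            "parameters": {"props": {"name": "Lucille"}},
        }
    ]
}

body = json.dumps(payload)  # POST this to the transactional endpoint
```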

Can I parameterize labels and properties on CREATE or SET? (REST and transaction)

I have two queries:
1. CREATE (a:%1$s {props}), (b:%2$s {props2}), (b)-[:%3$s {relProps}]->(a)
2. MATCH (a:%1$s { value: {value} })-[:%2$s]->(b) WHERE (b:%3$s) SET (b {props})
I'm using underscore.string to allow string format but would love to just stick with parameters.
Is it possible to parameterize labels such as
{
  "query": "CREATE (a:{label} {props})",
  "params": {
    "label": "SomeLabel",
    "props": { .... }
  }
}
and is it also possible to parameterize properties on a SET?
{
  "query": "MATCH ..... SET (node {props})",
  "params": {
    "props": {
      "prop1": "Property Name",
      ....
    }
  }
}
Also, is there a way to parameterize a MERGE? It gives me 'Parameter maps cannot be used in MERGE patterns (use a literal map instead, eg. "{id: {param}.id}")'.
EDIT: what about parameterizing the WHERE clause?
MATCH (:Identity%1$s {nodeId: {nodeId}})-[r*2..3]-(node1)-[b:%2$s]->(node2) %4$s return *
I have %4$s there so I can put in whatever clauses I need. If I want to have it as
WHERE node1.nodeId = {someNodeId} SET b = {props}
is that possible?
Also, when I'm doing a transaction, SET node={props} does not seem to work. I tried
statements: [
  {
    "statement": "..... SET node={props}",
    "parameters": {
      "props": {
        "description": "some description"
      }
    }
  }
]
Any suggestions?? Thank you!
You cannot parameterize labels since the query plan might look different for a different label.
Parameterizing multiple properties using a map is possible, note the slight difference in the SET syntax:
{
  "query": "MATCH ..... SET node = {props}",
  "params": {
    "props": {
      "prop1": "Property Name",
      ....
    }
  }
}
Not 100% sure about MERGE, but I guess this should work:
{
  "query": "MERGE (n:Label {identifier: {identifier}}) ON CREATE SET n = {props}",
  "params": {
    "identifier": 123,
    "props": {
      "identifier": 123,
      "prop1": "Property Name",
      ....
    }
  }
}
I found out!
CREATE ... SET node = {props}
does the trick to set multiple properties with parameters
doc: http://docs.neo4j.org/chunked/snapshot/cypher-parameters.html
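As a wrap-up, the working pattern as a transactional HTTP statement, sketched in Python (the label and property values are illustrative assumptions). The label is still written literally in the query, since labels cannot be parameterized; only the property map travels as a parameter.

```python
import json

# Sketch: CREATE ... SET n = {props} over the transactional endpoint.
# Label "Person" and the property values are made-up examples.
payload = {
    "statements": [
        {
            "statement": "CREATE (n:Person) SET n = {props} RETURN n",
            "parameters": {"props": {"name": "Ada", "role": "Engineer"}},
        }
    ]
}

body = json.dumps(payload)  # POST this to the transaction/commit endpoint
```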