How to delete all nodes with a given type? - dgraph

https://dgraph.io/tour/schema/8/
shows some options for deleting:
Delete a single triple
Delete all triples for a given edge
Delete all triples for a given node
Now I'd like to delete all triples for nodes of a given type. I assume this is done by some combination of a query that selects the nodes of the given type and a mutation for each of these nodes. I couldn't find an example for this in the tutorial.
Let's assume I'd like to delete all triples for nodes of the type Country.
I know how to select the uids for the nodes:
{
  query(func: has(<dgraph.type>)) @filter(eq(<dgraph.type>, "Country")) {
    uid
  }
}
But how do I combine this with a mutation?
https://discuss.dgraph.io/t/how-to-bulk-delete-nodes-or-cascade-delete-nodes/7308
seems to ask for an "upsert"
How could the deletion of all triples for nodes with a given type be achieved?

The following upsert seems to work:
upsert {
  query {
    # get the uids of all Country nodes
    countries as var(func: has(<dgraph.type>)) @filter(eq(<dgraph.type>, "Country")) {
      uid
    }
  }

  mutation {
    delete {
      uid(countries) * * .
    }
  }
}
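If your Dgraph version supports the type() function (available since Dgraph 1.1), the selection can be written more directly; a minimal sketch of the same upsert:
upsert {
  query {
    # type(Country) selects exactly the nodes whose dgraph.type is "Country"
    countries as var(func: type(Country)) {
      uid
    }
  }
  mutation {
    delete {
      uid(countries) * * .
    }
  }
}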

Related

DynamoDB - How to upsert nested objects with updateItem

Hi, I am a newbie to DynamoDB. Below is the schema of the DynamoDB table:
{
  "user_id": 1,        // partition key
  "dob": "1991-09-12", // sort key
  "movies_watched": {
    "1": {
      "movie_name": "twilight",
      "movie_released_year": "1990",
      "movie_genre": "action"
    },
    "2": {
      "movie_name": "harry potter",
      "movie_released_year": "1996",
      "movie_genre": "action"
    },
    "3": {
      "movie_name": "lalaland",
      "movie_released_year": "1998",
      "movie_genre": "action"
    },
    "4": {
      "movie_name": "serendipity",
      "movie_released_year": "1999",
      "movie_genre": "action"
    }
  }
  // ..... 6 more attributes
}
I want to insert a new item if the item (that user_id with dob) does not exist; otherwise, add the movies to the existing movies_watched map, checking that each movie is not already present in the movies_watched map.
Currently, I am trying to use the update(params) method.
Below is my approach:
function getInsertQuery (item) {
  const exp = {
    UpdateExpression: 'set',
    ExpressionAttributeNames: {},
    ExpressionAttributeValues: {}
  }
  // top-level attributes (everything except the keys and the movies_watched map)
  Object.entries(item).forEach(([key, value]) => {
    if (key !== 'user_id' && key !== 'dob' && key !== 'movies_watched') {
      exp.UpdateExpression += ` #${key} = :${key},`
      exp.ExpressionAttributeNames[`#${key}`] = key
      exp.ExpressionAttributeValues[`:${key}`] = value
    }
  })
  // nested attributes of movies_watched, addressed via a document path
  let i = 0
  Object.entries(item.movies_watched).forEach(([key, value]) => {
    exp.UpdateExpression += ` movies_watched.#uniqueID${i} = :uniqueID${i},`
    exp.ExpressionAttributeNames[`#uniqueID${i}`] = key
    exp.ExpressionAttributeValues[`:uniqueID${i}`] = value
    i++
  })
  exp.UpdateExpression = exp.UpdateExpression.slice(0, -1) // drop the trailing comma
  return exp
}
The above method just creates update expression with expression names and values for all top level attributes as well as nested attributes (with document path).
It works well if the item already exists, updating the movies_watched map. But it throws an exception if the item does not exist and is being inserted. Below is the exception:
The document path provided in the update expression is invalid for update
However, I am still not sure how to check for duplicate movies in the movies_watched map.
Could someone guide me in the right direction? Any help is highly appreciated!
Thanks in advance
Given your model, there is no way to do this without reading the item from DDB before the update (at that point the process is trivial). If you don't want to impose this additional read capacity on your table for updates, then you would need to redesign your data model:
You can change movies_watched to be a Set that holds references to movies. The caveat is that a Set can contain only Numbers or Strings, so you would store the movie id or name, or keep the data as JSON Strings in your Set and parse them back into JSON on read. With a Set you can perform the ADD operation on the movies_watched attribute (see the sketch after these options). https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.UpdateExpressions.html#Expressions.UpdateExpressions.ADD
You can go with a single-table design approach and store the movies watched as separate items (PK: userId, SK: movie_id). To get a user you would perform a query specifying only PK = userId; you would get a collection where one item is your user record and the others are movies watched. If you are new to DynamoDB and are learning the ropes, I would suggest going with this approach. https://www.alexdebrie.com/posts/dynamodb-single-table/
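A minimal sketch of the first option with the Node.js DocumentClient (the table name Users and the exact key names are assumptions):
const AWS = require('aws-sdk')
const docClient = new AWS.DynamoDB.DocumentClient()

// Add movie references to the user's movies_watched string set.
// UpdateItem creates the item if it does not exist, ADD creates the set if it is
// missing, and set semantics silently ignore ids that are already present.
async function addMoviesWatched (userId, dob, movieIds) {
  return docClient.update({
    TableName: 'Users',                   // assumed table name
    Key: { user_id: userId, dob: dob },
    UpdateExpression: 'ADD movies_watched :m',
    ExpressionAttributeValues: {
      ':m': docClient.createSet(movieIds) // e.g. ['1', '5']
    }
  }).promise()
}
Note that this only works once movies_watched is modelled as a Set of ids or JSON strings, not as the nested map shown in the question.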

MongoDB Complex Query with Java

We have the following structure in MongoDB documents:
{
  "id": "1111",
  "keys": [
    {
      "name": "Country",
      "value": "USA"
    },
    {
      "name": "City",
      "value": "LongIsland"
    },
    {
      "name": "State",
      "value": "NewYork"
    }
  ]
}
Now, using the Spring Framework Query object, I figured out a way to pull the details using the syntax below:
query.addCriteria(
  Criteria.where("keys.value").is(countryparam)
    .andOperator(
      Criteria.where("keys.value").is(stateparam)
    )
);
Two issues with this query model.
The first issue is that it does not matter whether countryparam and stateparam are actually meant to match the Country and State key names respectively. If the values match anywhere, the query returns the document. That means if I have Country and City params, this works as long as the user passes the Country and City values, even if they are swapped. So how can I compare City exactly to cityparam and State to stateparam?
It gets more complex if I have to retrieve the document based on multiple key-value pairs: I should be able to match each key name with its respective value when querying the document. How can I do this?
Thanks in advance!
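For reference, MongoDB's $elemMatch operator constrains a key name and its value to the same array element (Spring Data exposes it as Criteria.elemMatch). A minimal sketch of the raw query, with the collection name places assumed:
db.places.find({
  $and: [
    { keys: { $elemMatch: { name: "Country", value: "USA" } } },
    { keys: { $elemMatch: { name: "State", value: "NewYork" } } }
  ]
})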

Auto generated Incrementing field for prisma

I have created an entity called Order in my datamodel.prisma file. It should have an automatically generated field called orderRef, which should receive an automatically generated incremental value on each createOrder mutation call for the Order entity.
For the first Order, the value of the orderRef field should be OD1; the second Order should have the value OD2, and so on.
eg:
(OD1, OD2, ....... OD124, ..... )
What is the easiest way to achieve this?
Yes, the value should be a String instead of a Number.
Currently, you cannot have auto-generated incrementing fields in Prisma. However, there is an RFC about Field Behavior that would allow this kind of feature in the future.
There are currently three alternatives:
1/ When creating your node, make a query to retrieve the last node of the same type, and increment the last value.
query {
  things(orderBy: createdAt_desc, first: 1) {
    myId
  }
}
...
newId = myId + 1
...
mutation {
  createThing(data: { myId: newId, ... }) {
    ...
  }
}
2/ When creating your node, make an aggregation query to retrieve the count of all nodes of the same type, and increment based on the count. (However, if you delete previous nodes, you may end up with the same value multiple times.)
query {
  thingsConnection {
    aggregate {
      count
    }
  }
}
...
newId = count + 1
...
mutation {
  createThing(data: { myId: newId, ... }) {
    ...
  }
}
3/ If what you need is a human-readable id for your users, consider creating a random 6-character string or using a library. (This removes the need for an extra query, but randomness can have surprising behaviors; a minimal sketch follows.)
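A minimal sketch of option 3 in JavaScript (the OD prefix and the alphabet are assumptions, not something Prisma provides):
// Generate a human-readable, pseudo-random order reference such as "ODX7K2QD".
// Collisions are unlikely but possible, so keep a unique constraint on the field.
function randomOrderRef (length = 6) {
  const alphabet = 'ABCDEFGHJKLMNPQRSTUVWXYZ23456789' // avoids easily confused characters
  let suffix = ''
  for (let i = 0; i < length; i++) {
    suffix += alphabet[Math.floor(Math.random() * alphabet.length)]
  }
  return 'OD' + suffix
}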

Add a record to an Algolia Index only if it does not exists

I was wondering how I can POST, in a single request (without first fetching results for the given attribute), a pretty simple record to an Algolia index without creating repeated instances.
e.g:
category: {
name: String // This should be unique
}
There isn't an "addObject if not exists" feature based on the record content, but if you use the category name as the objectID of your record, then the second time you add the object it will just replace the previous instance.
{
  objectID: "mycategoryname",
  moreattributes: "if needed",
  [...]
}
Would that work?
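A minimal sketch with the Algolia JavaScript client (the index name categories and the credentials are placeholders):
const algoliasearch = require('algoliasearch')

const client = algoliasearch('YourApplicationID', 'YourAdminAPIKey')
const index = client.initIndex('categories')

// Using the category name as the objectID makes the write idempotent:
// saving the same record again overwrites the previous one instead of duplicating it.
index.saveObject({
  objectID: 'mycategoryname',
  name: 'mycategoryname'
}).then(() => console.log('category saved'))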

Use of MERGE to insert patterns of nodes and edges

I'm trying to insert patterns (nodes and edges) using MERGE. Using the demo movies graph, I'm sending the following Cypher query: the movie exists, and I'd like to create the User node and the edge in one query.
MERGE (top:Movie { title:'Top Gun' })<-[:viewed]-(user:User {Name:'Pierre'})
ON CREATE SET user.created = timestamp()
ON MATCH SET user.lastSeen = timestamp()
RETURN user,top;
"MERGE needs at least some part of the pattern to already be known. Please provide values for one of: user, top"
Actually, top exists; I can't figure out what's wrong with my query. Thanks for your help.
Pierre
MERGE matches or creates the whole pattern as a unit, so the existing Movie node is not reused unless it is bound first (with MATCH or a separate MERGE). Would this work?
MATCH (top:Movie { title:'Top Gun' })
MERGE (top)<-[:viewed]-(user:User {Name:'Pierre'})
ON CREATE SET user.created = timestamp()
ON MATCH SET user.lastSeen = timestamp()
RETURN user,top;
Or this, if the Movie node may also need to be created:
MERGE (top:Movie { title:'Top Gun' })
MERGE (user:User {Name:'Pierre'})
ON CREATE SET user.created = timestamp()
ON MATCH SET user.lastSeen = timestamp()
MERGE (top)<-[:viewed]-(user)
RETURN user,top;