The model is stored in PostgreSQL. Something like:
{
  id: <serial>
  data: <json> {
    someIds: [<int>, ...]
  }
}
How can I add a rule like jsonb_path_match(data::jsonb, 'exists($.someIds[*] ? (# == 3))') to the filter (where clause)?
In this case, the value '3' in '(# == 3)' should be determined by the user.
loopback-connector-postgresql does not support the JSON/JSONB data types yet. There is an open pull request to contribute this feature, but it was never finished by the author - see #401.
As a workaround, you can execute a custom SQL query to perform a jsonb_path_match-based search of your data.
Instructions for LoopBack 3: https://loopback.io/doc/en/lb3/Executing-native-SQL.html
dataSource.connector.execute(sql_stmt, params, callback);
Instructions for LoopBack 4: https://loopback.io/doc/en/lb4/apidocs.repository.defaultcrudrepository.execute.html
const result = await modelRepository.execute(sql_stmt, params, options);
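For illustration, here is a minimal sketch in LoopBack 4 style. The table name mymodel and repository name modelRepository are placeholders, the user-supplied value is passed as a bound parameter, and binding it into the jsonpath expression via the vars argument of jsonb_path_match requires PostgreSQL 12+ (note that PostgreSQL's jsonpath syntax uses @ for the current array element):
// Hypothetical names: `modelRepository` and table `mymodel` stand in for your own.
// The user-supplied value is passed as a bound parameter ($1) and forwarded to the
// jsonpath expression via the vars argument, so it is never concatenated into SQL.
const userValue = 3; // determined by the user
const rows = await modelRepository.execute(
  `SELECT * FROM mymodel
   WHERE jsonb_path_match(
     data::jsonb,
     'exists($.someIds[*] ? (@ == $val))',
     jsonb_build_object('val', $1::int)
   )`,
  [userValue],
);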
I am trying to use Mongoose pre and post hooks in my MongoDB backend in order to compare the document in its pre- and post-saved states, so I can trigger other events depending on what has changed. So far, however, I'm having trouble getting the document via the Mongoose pre hook.
According to the docs "pre hooks work for both doc.save() and doc.update(). In both cases this refers to the document itself... ". So here's what I tried. First, in my model/schema I have the following code:
let Schema = mongoose
  .Schema(CustomerSchema, {
    timestamps: true
  })
  .pre("findOneAndUpdate", function(next) {
    trigger.preSave(next);
  })
  // other hooks
... And then in my triggers file I have the following code:
exports.preSave = function(next) {
  console.log("this: ", this);
};
But this is what logs to the console:
this: { preSave: [Function], postSave: [AsyncFunction] }
So clearly this didn't work. This didn't log out the document as I was hoping for. Why is this not the document itself here, as the docs themselves appear to indicate? And is there a way I can get a hold of the document with a pre hook? If not, is there another approach people have used to accomplish this?
You can't retrieve the document in the pre hook.
According to the documentation, findOneAndUpdate triggers query middleware, so this refers to the query and not the document being updated.
The confusion arises from the difference in the this context within each kind of middleware function. In document pre or post middleware you can use this to access the document, but not in the other hooks.
There are three kinds of middleware functions, all of which have pre and post stages.
In document middleware functions, this refers to the document (model).
init, validate, save, remove
In query middleware functions, this refers to the query.
count, find, findOne, findOneAndRemove, findOneAndUpdate, update
In aggregate middleware, this refers to the aggregation object.
aggregate
It's explained here https://mongoosejs.com/docs/middleware.html#types-of-middleware
Therefore you can simply access the document during pre("init"), post("init"), pre("validate"), post("validate"), pre("save"), post("save"), pre("remove"), post("remove"), but not in any of the others.
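To make the distinction concrete, here is a small illustrative sketch (the schema name is taken from the question, the fields are made up):
// Document middleware: `this` is the document being saved.
CustomerSchema.pre("save", function(next) {
  console.log(this.isModified("name")); // works, `this` is the document
  next();
});

// Query middleware: `this` is the Query object, not a document.
CustomerSchema.pre("findOneAndUpdate", function(next) {
  console.log(this.getQuery()); // the filter, e.g. { _id: ... }
  next();
});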
I've seen examples of people doing more queries within the other middleware hooks, to find the model again, but that sounds pretty dangerous to me.
The short answer seems to be, you need to change the original query to be document oriented, not query or aggregate style. It does seem like an odd limitation.
As per the documentation, your pre hook cannot get the document, but it can get the query, as follows:
schema.pre('findOneAndUpdate', async function() {
  const docToUpdate = await this.model.findOne(this.getQuery());
  console.log(docToUpdate); // The document that findOneAndUpdate() will modify
});
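Building on that, one hedged way to get at the original goal of comparing before and after states is to stash the pre-update document on the query and compare it in the post hook. The _originalDoc and status names below are illustrations, not a Mongoose API:
schema.pre('findOneAndUpdate', async function() {
  // Stash the current state on the query object for the post hook.
  this._originalDoc = await this.model.findOne(this.getQuery());
});

schema.post('findOneAndUpdate', function(result) {
  const before = this._originalDoc;
  // `result` is whatever findOneAndUpdate() returns: the pre-update
  // document unless the query was run with { new: true }.
  if (before && result && before.status !== result.status) {
    // `status` is a hypothetical field - trigger follow-up events here
  }
});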
If you really want to access the document (or its id) in query middleware functions:
UserSchema.pre<User>(/^(updateOne|save|findOneAndUpdate)/, async function (next) {
  const user: any = this
  if (!user.password) {
    const userID = user._conditions?._id
    const foundUser = await user.model.findById(userID)
    // ...
  }
})
If someone needs the function to hash the password when the user's password changes:
UserSchema.pre<User>(/^(updateOne|save|findOneAndUpdate)/, async function (next) {
  const user: any = this
  if (user.password) {
    if (user.isModified('password')) {
      user.password = await getHashedPassword(user.password)
    }
    return next()
  }
  const { password } = user.getUpdate()?.$set ?? {}
  if (password) {
    user._update.password = await getHashedPassword(password)
  }
  next()
})
user.password exists when "save" is the trigger
user.getUpdate() will return the props that changed in "update" triggers
I have a requirement where I need to create a record from an SAPUI5 application.
For that we have a form where the user enters all the details and submits them to the database.
Now I need to validate the first field's value: if that value already exists in the system/DB, I need to show an error like "this record already exists" during liveChange.
For example, the input fields are as follows:
EmpId : 121
EmpName : tom
On change of the EmpId value, I need to check whether a record with 121 already exists in the database or not.
These are the blogs I referred to for a solution, but they didn't cover this case:
https://blogs.sap.com/2015/10/19/how-to-sapui5-user-input-validations/
https://blogs.sap.com/2015/11/01/generic-sapui5-form-validator/
As I'm new to SAPUI5, please help me with the coding.
Thanks in advance.
I don't know how familiar you are with requests to the backend, but you could perform a read operation and check whether any data is returned.
The first solution could look like this (reading with an entity key):
this.getOwnerComponent().getModel().read("/EntityPath", {
    success: function(oData, response) {
        if (oData.results.length === 0) {
            console.log("Nothing found for this key");
        }
    },
    error: function(oError) {
        // Error handling here
    }
});
Or you could build a Filter, pass it to the read operation and check if there is any data returned:
var aFilters = [new sap.ui.model.Filter("EmpId", sap.ui.model.FilterOperator.EQ, "value")];
this.getOwnerComponent().getModel().read("/EntitySet", {
    filters: aFilters,
    success: function(oData, response) {
        if (oData.results.length === 0) {
            console.log("User is not available");
        }
    },
    error: function(oError) {
        // Error handling here
    }
});
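To tie this to the liveChange event from the question, a hedged sketch of a handler could look like the following; the control, the EmployeeSet entity set and the EmpId property are assumptions:
// In the view: <Input id="empIdInput" liveChange=".onEmpIdChange"/>
onEmpIdChange: function (oEvent) {
    var oInput = oEvent.getSource();
    var sValue = oEvent.getParameter("value");
    var aFilters = [new sap.ui.model.Filter("EmpId", sap.ui.model.FilterOperator.EQ, sValue)];

    this.getOwnerComponent().getModel().read("/EmployeeSet", {
        filters: aFilters,
        success: function (oData) {
            if (oData.results.length > 0) {
                oInput.setValueState("Error");
                oInput.setValueStateText("This record already exists");
            } else {
                oInput.setValueState("None");
            }
        },
        error: function (oError) {
            // Error handling here
        }
    });
}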
However, this isn't the best way to check whether an entry already exists in your database. You should handle this in your business logic, with error messages that get passed to the frontend.
Hope this helps :-)
I am currently building an API which uses the JSON patch specification to do partial updates to MongoDB using the Mongoose ORM.
I am using the node module mongoose-json-patch to apply patches to my documents like so:
var patchUpdate = function(req, res) {
  var patches = req.body;
  var id = req.params.id;
  User.findById(id, function(err, user) {
    if (err) { return res.send(err); }
    user.patch(patches, function(err) {
      if (err) { return res.send(err); }
      user.save(function(err) {
        if (err) { res.send(err); }
        else { res.send("Update(s) successful" + user); }
      });
    });
  });
};
My main issues occur when I am trying to remove or replace array elements with the JSON patch syntax:
var patches = [{"op":"replace", "path": "/interests/0", "value":"Working"}]
var user = {
  name: "Chad",
  interests: ["Walking", "Eating", "Driving"]
}
This should replace the first item in the array ("Walking") with the new value ("Working"), however I can't figure out how to validate what is actually being replaced. If another request removed /interests/0 prior to the patch being applied, "Eating" would be replaced by "Working" instead of "Walking", which would no longer exist in the array.
I would like to be sure that if the client thinks he is editing "Walking", then he will either successfully edit it, or at least get an error.
After running into the same issue myself, I'll share my solution. The spec (described here) defines six operations, one of which is test. The source describes the test operation as:
Tests that the specified value is set in the document. If the test fails, then the patch as a whole should not apply.
To ensure that you're changing the values that you're expecting, you should validate the state of the data. You do this by preceding your replace or remove operation with a test operation, where the value is equal to the expected data state. If the test fails, the following operations will not be executed.
With the test operation your patch data will look like this:
var patches = [
  {"op": "test", "path": "/interests/0", "value": currentValue}, // where currentValue is the expected value
  {"op": "replace", "path": "/interests/0", "value": "Working"}
]
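Per RFC 6902, if a test operation fails the whole patch must not be applied, so in the route handler above the failed test should surface as an error from the patch call. A hedged sketch of handling it (the exact error shape depends on the patch library):
user.patch(patches, function(err) {
  if (err) {
    // A failed "test" op means the client's view of the data was stale;
    // 409 Conflict is a reasonable response so the client can re-fetch and retry.
    return res.status(409).send(err);
  }
  user.save(function(err) {
    if (err) { return res.send(err); }
    res.send("Update(s) successful" + user);
  });
});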
I'm using the request library to make calls from one sails app to another one which exposes the default blueprint endpoints. It works fine when I query by non-id fields, but I need to run some queries by passing id arrays. The problem is that the moment you provide an id, only the first id is considered, effectively not allowing this kind of query.
Is there a way to get around this? I could switch over to another attribute if all else fails but I need to know if there is a proper way around this.
Here's how I'm querying:
var idArr = []; // array of ids
var queryParams = { id: idArr };
var options = {
  // headers, method and url here
  json: queryParams
};
request(options, function(err, response, body) {
  if (err) return next(err);
  return next(null, body);
});
Thanks in advance.
Sails blueprint APIs allow you to use the same Waterline query language that you would otherwise use in code.
You can directly pass the array of ids in the GET call to receive the objects, as follows:
GET /city?where={"id":[1, 2]}
Refer here for more.
Have fun!
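If it helps, here is a hedged sketch of sending that same where clause from the calling app with the request library; the URL and model name are placeholders, and the callback mirrors the style from the question:
var request = require('request');

var idArr = [1, 2, 3]; // array of ids
request({
  method: 'GET',
  url: 'http://other-sails-app.example.com/city',
  // `qs` serializes this into the query string, i.e. ?where={"id":[1,2,3]}
  qs: { where: JSON.stringify({ id: idArr }) },
  json: true // parse the JSON response body
}, function (err, response, body) {
  if (err) return next(err);
  return next(null, body);
});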
Alright, I switched to a hacky solution to get moving.
For all models that needed querying by id arrays, I added a secondary attribute to the model. Let's call it code. Then, in afterCreate(), I updated code and set it equal to the id. This incurs an additional database call, but it's fine since it's called just once - when the object is created.
Here's the code.
module.exports = {
  attributes: {
    code: {
      type: 'string' // the secondary attribute
    },
    // other attributes
  },
  afterCreate: function (newObj, next) {
    Model.update({ id: newObj.id }, { code: newObj.id }, next);
  }
}
Note that newObj isn't a model instance, contrary to what I was led to believe, so we cannot simply update its code and call newObj.save().
After this, in the queries having id arrays, substituting id with code makes them work as expected!
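With that in place, the calling code from the question only needs to swap the attribute name:
var queryParams = { code: idArr }; // instead of { id: idArr }; code holds the same values as id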
I am trying to create/update nodes via the REST API with Cypher's MERGE statement. Each node has attributes totaling ca. 1 KB. I create/update one node per request. (I know there are other ways to create lots of nodes in a batch, but this is not the question here.)
I use Neo4j Community 2.1.6 on a Windows Server 2008 R2 Enterprise machine (24 CPUs, 64 GB), and the database directory resides on a SAN drive. I get a rate of 4 - 6 nodes per second; in other words, a single create or update takes around 200 ms. This seems rather slow to me.
The query looks like this:
MERGE (a:TYP1 { name: {name}, version: {version} })
SET
a.ATTR1={param1},
a.ATTR2={param2},
a.ATTR3={param3},
a.ATTR4={param4},
a.ATTR5={param5}
return id(a)
There is an index on name, version and two of the attributes.
Why does it take so long? And what can I try to improve the situation?
I could imagine that one problem is that every request must create a new connection? Is there a way to keep the http connection open for multiple requests?
For a query, I'm pretty sure you can only use one index per query per label, so depending on your data the index usage might not be efficient.
As far as a persistent connection goes, that is possible, though I think it would depend on the library you're using to connect to the REST API. In the Ruby neo4j gem we use the Faraday gem, which has a NetHttpPersistent adapter.
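If your client is Node.js (like the example further down), a hedged sketch of reusing the HTTP connection with the request library is shown below; forever: true switches request to a keep-alive agent:
var request = require('request');

// Reuse one keep-alive connection for all calls to the transactional endpoint.
var neo4j = request.defaults({ forever: true, json: true });

neo4j.post('http://localhost:7474/db/data/transaction/commit', {
  body: { statements: [ /* your MERGE statement + parameters here */ ] }
}, function (err, res, body) {
  // handle errors / results here
});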
The index is only used when you MERGE on ONE attribute.
If you need to merge on both, create a compound property, index it (or better, use a constraint) and merge on that compound property.
Use ON CREATE SET, otherwise you (over-)write the attributes every time, even if you didn't actually create the node.
Adapted statement:
MERGE (a:TYP1 { name_version: {name_version} })
ON CREATE SET
  a.version = {version},
  a.name = {name},
  a.ATTR1 = {param1},
  a.ATTR2 = {param2},
  a.ATTR3 = {param3},
  a.ATTR4 = {param4},
  a.ATTR5 = {param5}
RETURN id(a)
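For completeness, a hedged sketch of the setup (Neo4j 2.x syntax): create the uniqueness constraint once,
CREATE CONSTRAINT ON (a:TYP1) ASSERT a.name_version IS UNIQUE
and build the compound value on the client, e.g. name + '|' + version, before sending it as the name_version parameter.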
This is an example of how you can execute a batch of Cypher queries from Node.js in a single round trip to Neo4j.
To run it:
get Node.js installed (if you don't have it already)
get a token from https://developers.facebook.com/tools/explorer giving you access to user_groups
run it as > node {yourFileName}.js {yourToken}
Prerequisites:
var request = require("request");
var graph = require('fbgraph');
graph.setAccessToken(process.argv[2]);

function now() {
  var instant = new Date();
  return instant.getHours()
    + ':' + instant.getMinutes()
    + ':' + instant.getSeconds()
    + '.' + instant.getMilliseconds();
}
Get Facebook data:
graph.get('me?fields=groups,friends', function(err, res) {
  if (err) {
    console.log(err);
    throw now() + ' Could not get groups from Facebook';
  }
Create cypher statements
var batchCypher = [];
res.groups.data.forEach(function(group) {
  var singleCypher = {
    "statement" : "CREATE (n:group{group}) RETURN n, id(n)",
    "parameters" : { "group" : group }
  }
  batchCypher.push(singleCypher);
Run them one by one
var fromNow = now();
request.post({
  uri: "http://localhost:7474/db/data/transaction/commit",
  json: { statements: [singleCypher] }
}, function(err, res) {
  if (err) {
    console.log('Could not commit ' + group.name);
    throw err;
  }
  console.log('Used ' + fromNow + ' - ' + now() + ' to commit ' + group.name);
  res.body.results.forEach(function(cypherRes) {
    console.log(cypherRes.data[0].row);
  });
})
});
Run them in batch
var fromNow = now();
request.post({
  uri: "http://localhost:7474/db/data/transaction/commit",
  json: { statements: batchCypher }
}, function(err, res) {
  if (err) {
    console.log('Could not commit the batch');
    throw err;
  }
  console.log('Used ' + fromNow + ' - ' + now() + ' to commit the batch');
})
});
The log shows that a transaction for 5 groups is significantly slower than a transaction for 1 group, but significantly faster than 5 transactions for 1 group each.
Used 20:38:16.19 - 20:38:16.77 to commit Voiture occasion Belgique
Used 20:38:16.29 - 20:38:16.82 to commit Marches & Randonnées
Used 20:38:16.31 - 20:38:16.86 to commit Vlazarus
Used 20:38:16.34 - 20:38:16.87 to commit Wijk voor de fiets
Used 20:38:16.33 - 20:38:16.91 to commit Niet de bestemming maar de route maakt de tocht goed.
Used 20:38:16.35 - 20:38:16.150 to commit the batch
I just read your comment, Andreas, so it is not applicable for you, but you might use it to find out whether the time is spent in the communication or in the updates.