How can I automatically apply model filters to GET requests in Sails

I want all the HTTP GET requests to the API generated by Sails to be restricted. How can I apply a filter to all incoming API GET requests?
More specifically, most of my models have an attribute called publicityLevel. This tells whether a model is public or not. So I want all my models to automatically apply a filter (like publicityLevel: 'public') for all incoming GET requests.
Even more advanced, I'd like to write some code which decides whether the user can see a specific model or not. So if a user is an admin, don't apply this filter. If the user isn't an admin, apply this filter.

I had a similar problem to solve with blueprints, and I solved it.
If we are talking about BLUEPRINTS:
You can get modelName from req.options.model when you are using blueprints.
I was using it to check if the user belongs to the same group as the element.
Unfortunately you can't use this[modelName] directly, because the option gives you the model name starting with a lowercase letter, so first you have to uppercase the first letter, e.g. var modelName = req.options.model.charAt(0).toUpperCase() + req.options.model.slice(1);
and then you are free to use this[modelName].whateverYouNeed.
I used it for a generic policy to let a user edit only his own group's elements:
var modelName = req.options.model.charAt(0).toUpperCase() + req.options.model.slice(1)
var elementID = null
if (req.params.id) { // To handle DELETE, PUT
  elementID = req.params.id
}
if (req.body.id) { // To handle POST
  elementID = req.body.id
}
this[modelName].findOne({
  id: elementID
}).exec(function(err, contextElement) {
  if (err) {
    return res.serverError(err)
  }
  if (contextElement.group === req.user.group.id) {
    sails.log('accessing own: ' + modelName)
    return next()
  }
  else {
    return res.forbidden('Tried to access not owned object')
  }
})
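For the publicityLevel case from the original question, a similar policy can inject a default filter into the blueprint criteria instead of checking ownership. This is only a minimal sketch: it assumes a Sails version whose blueprint find actions merge req.options.where into the criteria, and a hypothetical req.user.isAdmin flag set by your auth layer.
// api/policies/restrictToPublic.js (hypothetical policy name)
module.exports = function(req, res, next) {
  // Admins see everything, so skip the filter for them.
  if (req.user && req.user.isAdmin) {
    return next();
  }
  // For everyone else, merge a default criterion into the blueprint query.
  req.options.where = req.options.where || {};
  req.options.where.publicityLevel = 'public';
  return next();
};
Map the policy to the blueprint find/findOne actions of the relevant models in config/policies.js, and non-admin GET requests will only ever see public records.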

In identity server 3 how to relate user to client

I am using Identity Server 3. I have a couple of applications (i.e. Clients) configured and a few users configured. How do I establish the relationship between a User and a Client, and how can I view all the applications that a selected User has access to?
Update 1
I am sorry if the question was confusing. On the IdSvr3 home page, there is a link to revoke application permissions. I am guessing that in order to revoke a permission you have to first establish the relationship between the user and the application, and I wanted to know how to establish that permission when I add a new user.
There's no direct way to limit one or multiple users to a certain client. This is where you should think about implementing your own custom validation. Fortunately, IdentityServer provides an extensibility point for this kind of requirement.
ICustomRequestValidator
You should implement this interface to further validate users, to see if they belong to certain clients and filter them out. You can look at the user details via ValidatedAuthorizeRequest.Subject. This custom validator runs after optional parameters such as nonce, prompt, acr_values (Authentication Context Class Reference), login_hint, etc. have been validated. The endpoint is AuthorizeEndpointController, and the class that drives this validation is AuthorizeRequestValidator (see its RunValidationAsync). You should take a look at the controller and that class.
Implementation tip
By the time the custom request validation begins, a Client reference will be present in ValidatedAuthorizeRequest. So all you need to do is match the client id, or some other identifier you think is appropriate, to verify the client. You could, for example, add a Claim key-value pair to the client that you want to restrict to a few users.
Maybe something like this.
new InMemoryUser
{
    Subject = "870805",
    Username = "damon",
    Password = "damon",
    Claims = new Claim[]
    {
        new Claim(Constants.ClaimTypes.Name, "Damon Jeong"),
        new Claim(Constants.ClaimTypes.Email, "dmjeong@email.com"),
        new Claim(Constants.ClaimTypes.EmailVerified, "true", ClaimValueTypes.Boolean)
    }
}
Assuming you have the above user, add its subject id to the claims of a client like below.
new Client
{
    ClientName = "WPF WebView Client Sample",
    ClientId = "wpf.webview.client",
    Flow = Flows.Implicit,
    .
    .
    .
    // Add a claim for limiting this client to certain users.
    // Since a claim only accepts type and value as strings,
    // you can add a list of subject ids as comma separated values,
    // e.g. new Claim("BelongsToThisUser", "870805, 870806, 870807")
    Claims = new List<Claim>
    {
        new Claim("BelongsToThisUser", "870805")
    }
},
And then just implement the ICustomRequestValidator and try to match the Claim value with the given user in its ValidateAuthorizeRequestAsync.
public class UserRequestLimitor : ICustomRequestValidator
{
    public Task<AuthorizeRequestValidationResult> ValidateAuthorizeRequestAsync(ValidatedAuthorizeRequest request)
    {
        // Check if this client has the "BelongsToThisUser" claim.
        var clientClaim = request.Client.Claims.Where(x => x.Type == "BelongsToThisUser").FirstOrDefault();
        if (clientClaim != null)
        {
            var subClaim = request.Subject.Claims.Where(x => x.Type == "sub").FirstOrDefault() ?? new Claim(string.Empty, string.Empty);
            if (clientClaim.Value == subClaim.Value)
            {
                return Task.FromResult<AuthorizeRequestValidationResult>(new AuthorizeRequestValidationResult
                {
                    IsError = false
                });
            }
            else
            {
                return Task.FromResult<AuthorizeRequestValidationResult>(new AuthorizeRequestValidationResult
                {
                    ErrorDescription = "This client doesn't have an authorization to request a token for this user.",
                    IsError = true
                });
            }
        }
        else
        {
            // This client has no access controls over users.
            return Task.FromResult<AuthorizeRequestValidationResult>(new AuthorizeRequestValidationResult
            {
                IsError = false
            });
        }
    }

    public Task<TokenRequestValidationResult> ValidateTokenRequestAsync(ValidatedTokenRequest request)
    {
        // No extra checks on the token endpoint in this example; return a non-error result.
        return Task.FromResult<TokenRequestValidationResult>(new TokenRequestValidationResult
        {
            IsError = false
        });
    }
}
Time to DI
You need to register your own dependency when you configure your IdentityServer. The authorization server uses IdentityServerServiceFactory for registering dependencies.
var factory = new IdentityServerServiceFactory();
factory.Register(new Registration<ICustomRequestValidator>(resolver => new UserRequestLimitor()));
Then Autofac, the IoC container used by IdentityServer, will do the rest of the DI work for you.

Azure Mobile Services Node.js update column field count during read query

I would like to update a column in a specific row in Azure Mobile Services using server side code (node.js).
The idea is that column A (which stores a number) will increase its count by 1 (i++) every time a user runs a read query from my mobile apps.
Please, how can I accomplish that from the read script in Azure Mobile Services?
Thanks in advance,
Check out the examples in the online reference. In the read script for the table you're tracking, you will need to do something like this. It's not clear whether you're tracking reads in the same table the user is reading from, or in a separate counts table, but the flow is the same.
Note that if you really want to track this you should log read requests to another table and tally them after the fact, or use an external analytics system (Google Analytics, Flurry, MixPanel, Azure Mobile Engagement, etc.). This way of updating a single count field in a record will not be accurate if multiple phones read from the table at the same time -- they will both read the same value x from the tracking table, increment it, and update the record with the same value x+1. (A sketch of the log-and-tally variant appears at the end of this answer.)
function read(query, user, request) {
    var myTable = tables.getTable('counting');
    myTable.where({
        tableName: 'verses'
    }).read({
        success: updateCount
    });

    function updateCount(results) {
        if (results.length > 0) {
            // Tracking record was found. Update it and continue normal execution.
            var trackingRecord = results[0];
            trackingRecord.count = trackingRecord.count + 1;
            myTable.update(trackingRecord, { success: function () {
                request.execute();
            } });
        } else {
            console.log('error updating count');
            request.respond(500, 'unable to update read count');
        }
    }
}
Hope this helps.
Edit: fixed function signature and table names above, adding another example below
If you want to track which verses were read (if your app can request one at a time) you need to do the "counting" request and update after the "verses" request, because the script doesn't tell you up front which verse records the user requested.
function read(query, user, request) {
    request.execute({ success: function (verseResults) {
        request.respond();
        if (verseResults.length === 1) {
            var countTable = tables.getTable('counting');
            countTable.where({
                verseId: verseResults[0].id
            }).read({
                success: updateCount
            });

            function updateCount(results) {
                if (results.length > 0) {
                    // Tracking record was found; update it.
                    var trackingRecord = results[0];
                    trackingRecord.count = trackingRecord.count + 1;
                    countTable.update(trackingRecord);
                } else {
                    console.log('error updating count');
                }
            }
        }
    } });
}
Another note: make sure your counting table has an index on the column you're selecting by (tableName in the first example, verseId in the second).
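For reference, here is a rough sketch of the log-and-tally variant mentioned at the top of this answer. It assumes you create a separate readLog table (a hypothetical name); each read inserts its own row, and you count the rows later instead of racing on a single counter.
function read(query, user, request) {
    var logTable = tables.getTable('readLog');
    // Record the read as a new row; no read-modify-write, so no lost updates.
    logTable.insert({
        tableName: 'verses',
        userId: user ? user.userId : null,
        readAt: new Date()
    }, {
        success: function () {
            request.execute();
        },
        error: function () {
            // Don't block the read just because logging failed.
            request.execute();
        }
    });
}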

Updating MongoDB in Meteor Router Filter Methods

I am currently trying to log user page views in a Meteor app by storing the userId, Meteor.Router.page(), and a timestamp whenever a user clicks through to another page.
// userlog.js
Meteor.methods({
    createLog: function (page) {
        var timeStamp = Meteor.user().lastActionTimestamp;
        // Set variable to store validation if user is logging in
        var hasLoggedIn = false;
        // Checks if lastActionTimestamp of user is more than an hour ago
        if (moment(new Date().getTime()).diff(moment(timeStamp), 'hours') >= 1) {
            hasLoggedIn = true;
        }
        console.log("this ran");
        var log = {
            submitted: new Date().getTime(),
            userId: Meteor.userId(),
            page: page,
            login: hasLoggedIn
        };
        var logId = Userlogs.insert(log);
        Meteor.users.update(Meteor.userId(), {$set: {lastActionTimestamp: log.submitted}});
        return logId;
    }
});

// router.js -- this method runs as a filter on every page
'checkLoginStatus': function (page) {
    if (Meteor.userId()) {
        // Logs the page that the user has switched to
        Meteor.call('createLog', page);
        return page;
    } else if (Meteor.loggingIn()) {
        return 'loading';
    } else {
        return 'loginPage';
    }
}
However this does not work, and it ends up with a recursive creation of userlogs. I believe that this is due to the fact that I did a Collection.find in a router filter method. Does anyone have a workaround for this issue?
When you're updating Meteor.users and setting lastActionTimestamp, Meteor.user will be updated and send the invalidation signal to all reactive contexts which depend on it. If Meteor.user is used in a filter, then that filter and all consecutive ones, including checkLoginStatus will rerun, causing a loop.
Best practices that I've found:
Avoid using reactive data sources as much as possible within filters.
Use Meteor.userId() where possible instead of Meteor.user()._id because the former will not trigger an invalidation when an attribute of the user object changes.
Order your filters so that they run with the most frequently updated reactive data source first. For example, if you have a trackPage filter that requires a user, let it run after another filter called requireUser, so that you are certain you have a user before you track. Otherwise, if you tracked first and checked the user second, then when Meteor.loggingIn changes from false to true you'd track the page again. (A sketch of this ordering follows this list.)
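A minimal sketch of the last two points, using the Meteor.Router filters API from the question; requireUser and trackPage are just placeholder names, and it assumes filters run in the order they are applied:
Meteor.Router.filters({
  // Runs first: only decides whether there is a logged-in user at all.
  'requireUser': function(page) {
    if (Meteor.userId()) return page;
    if (Meteor.loggingIn()) return 'loading';
    return 'loginPage';
  },
  // Runs second: only reads Meteor.userId(), so updates to other attributes
  // of the user document (like lastActionTimestamp) do not rerun it.
  'trackPage': function(page) {
    Meteor.call('createLog', page);
    return page;
  }
});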
This is the main reason we switched to meteor-mini-pages instead of Meteor-Router: it handles reactive data sources much more easily. A filter can redirect, it can stop() the router from running, etc.
Lastly, cmather and others are working on a new router which is a merger of mini-pages and Meteor.Router. It will be called Iron Router and I recommend using it once it's out!

What's the best way to handle a REST API's 'create' response in Backbone.js

I'm using backbone.js to interact with a REST API that, when posting to it to create a new resource, responds with a status of 201, a 'Location' header pointing to the resource's URI, but an empty body.
When I create a new model at the moment, it's successful, but the local representation of the model only contains the properties I explicitly set, not any of the properties that would be set on the server (created_date, etc.).
From what I understand, Backbone would update its representation of the model with data in the body, if there were any. But, since there isn't, it doesn't.
So, clearly, I need to use the location in the Location header to update the model, but what's the best way to do this?
My current mindset is that I would have to parse the url from the header, split out the id, set the id for the model, then tell the model to fetch().
This seems really messy. Is there a cleaner way to do it?
I have some influence over the API. Is the best solution to try to get the API author to return the new model as the body of the response (keeping the 201 and the location header as well)?
Thanks!
Sounds like you will have to do a little customization. Perhaps override the parse method and url method of your model class inherited from Backbone.Model.
The inherited functions are:
url: function() {
    var base = getUrl(this.collection);
    if (this.isNew()) return base;
    return base + (base.charAt(base.length - 1) == '/' ? '' : '/') + this.id;
},

parse: function(resp) {
    return resp;
},
and you could try something like:
parse: function(resp, xhr) {
    this._url = xhr.getResponseHeader('location');
    return resp;
},

url: function() {
    return this._url;
}
Yes, backbone.js really wants the result of a save (be it PUT or POST) to be a parseable body which can be used to update the model. If, as you say, you have influence over the API, you should see if you can arrange for the content body to contain the resource attributes.
As you point out, it makes little sense to make a second over-the-wire call to fully materialize the model.
It may be that a status code of 200 is more appropriate. Purists may believe that a 201 status code implies only a location is returned and not the entity. Clearly, that doesn't make sense in this case.
With Backbone 0.9.9, I couldn't get the accepted answer to work. The signature of the parse function seems to have changed since that older version, and the xhr object is no longer available in the function signature.
This is an example of what I did, to make it work with Backbone v0.9.9 and jQuery 1.8.3 (using a Deferred Object/Promise), relying on the jqXHR object returned by Backbone.Model.save() :
window.CompanyView = Backbone.View.extend({
    // ... omitted other functions...

    // Invoked on a form submit
    createCompany: function (event) {
        event.preventDefault();
        // Store a reference to the model for use in the promise
        var model = this.model;
        // Backbone.Model.save returns a jqXHR object
        var xhr = model.save();
        xhr.done(function (resp, status, xhr) {
            if (!model.get("id") && status == "success" && xhr.status == 201) {
                var location = xhr.getResponseHeader("location");
                if (location) {
                    // The REST API sends back a Location header of format http://foo/rest/companys/id
                    // Split and obtain the last fragment
                    var fragments = location.split("/");
                    var id = fragments[fragments.length - 1];
                    // Set the id attribute of the Backbone model. This also updates the id property
                    model.set("id", id);
                    app.navigate('companys/' + model.id, {trigger: true});
                }
            }
        });
    }
});
I did not use the success callback that could be specified in the options hash provided to the Backbone.Model.save function, since that callback is invoked before the XHR response is received. That is, it is pointless to store a reference to the jqXHR object and use it in the success callback, since the jqXHR would not contain any response headers (yet) when the callback is invoked.
Another way to solve this would be to write a custom Backbone.sync implementation, but I didn't prefer this approach.
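For completeness, here is a rough sketch of what that Backbone.sync approach could look like, under the same assumptions as above (Backbone 0.9.x with jQuery, an empty 201 response, and the new id in the Location header):
var originalSync = Backbone.sync;

Backbone.sync = function (method, model, options) {
    if (method === "create") {
        var success = options.success;
        options.success = function (resp, status, xhr) {
            // Empty body but a Location header: synthesize a body with the new id
            // so Backbone can set it on the model as usual.
            if (!resp && xhr && xhr.getResponseHeader("Location")) {
                var fragments = xhr.getResponseHeader("Location").split("/");
                resp = { id: fragments[fragments.length - 1] };
            }
            if (success) success(resp, status, xhr);
        };
    }
    return originalSync.call(this, method, model, options);
};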

How to construct a REST API that takes an array of id's for the resources

I am building a REST API for my project. The API for getting a given user's INFO is:
api.com/users/[USER-ID]
I would like to also allow the client to pass in a list of user IDs. How can I construct the API so that it is RESTful and takes in a list of user ID's?
If you are passing all your parameters on the URL, then comma-separated values are probably the best choice. You would then have a URL template like the following:
api.com/users?id=id1,id2,id3,id4,id5
api.com/users?ids[]=id1&ids[]=id2&ids[]=id3&ids[]=id4&ids[]=id5
IMO, the above calls do not look RESTful, but they are a quick and efficient workaround. However, the length of the URL is limited by the web server, e.g. Tomcat.
RESTful attempt:
POST http://example.com/api/batchtask
[
    {
        method : "GET",
        headers : [..],
        url : "/users/id1"
    },
    {
        method : "GET",
        headers : [..],
        url : "/users/id2"
    }
]
The server will reply with the URI of the newly created batchtask resource.
201 Created
Location: "http://example.com/api/batchtask/1254"
Now the client can fetch the batch response or task progress by polling:
GET http://example.com/api/batchtask/1254
This is how others attempted to solve this issue:
Google Drive
Facebook
Microsoft
Subbu Allamaraju
I found another way of doing the same thing by using @PathParam. Here is the code sample.
@GET
@Path("data/xml/{Ids}")
@Produces("application/xml")
public Object getData(@PathParam("Ids") String Ids)
{
    System.out.println("Ids = " + Ids);
    // Split the comma separated string into an array of ids.
    String[] idArray = Ids.split(",");
}
Call the service using the following URL:
http://localhost:8080/MyServices/resources/cm/data/xml/12,13,56,76
where
http://localhost:8080/[War File Name]/[Servlet Mapping]/[Class Path]/data/xml/12,13,56,76
As much as I prefer this approach:
api.com/users?id=id1,id2,id3,id4,id5
The correct way is
api.com/users?ids[]=id1&ids[]=id2&ids[]=id3&ids[]=id4&ids[]=id5
or
api.com/users?ids=id1&ids=id2&ids=id3&ids=id4&ids=id5
This is how Rack does it. This is how PHP does it. This is how Node does it as well...
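As an illustration only, here is a hypothetical Express handler for either query-string form (Express's default query parser turns repeated ids params, with or without the [] suffix, into an array):
var express = require('express');
var app = express();

app.get('/users', function (req, res) {
    // ?ids=1&ids=2 and ?ids[]=1&ids[]=2 both arrive as an array;
    // a single ?ids=1 arrives as a plain string, so normalize it.
    var ids = req.query.ids || [];
    if (!Array.isArray(ids)) ids = [ids];
    // ...look the users up by id here...
    res.json({ requestedIds: ids });
});

app.listen(3000);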
There seem to be a few ways to achieve this. I'd like to offer how I solve it:
GET /users/<id>[,id,...]
It does have a limitation on the number of ids that can be specified, because of URI-length limits - which I find a good thing, as it avoids abuse of the endpoint.
I prefer to use path parameters for IDs and keep query-string params dedicated to filters. It maintains RESTful-ness by ensuring that the document responding at the URI can still be considered a resource and could still be cached (although there are some hoops to jump through to cache it effectively).
I'm interested in comments in my hunt for the ideal solution to this form :)
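A hypothetical Express route for this path-parameter form, reusing the app from the sketch above and splitting the comma-separated segment into ids:
app.get('/users/:ids', function (req, res) {
    // GET /users/id1,id2,id3 -> ['id1', 'id2', 'id3']
    var ids = req.params.ids.split(',');
    // ...look the users up by id here...
    res.json({ requestedIds: ids });
});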
You can build a Rest API or a restful project using ASP.NET MVC and return data as a JSON.
An example controller function would be:
public JsonpResult GetUsers(string userIds)
{
    var ids = JsonConvert.DeserializeObject<List<int>>(userIds);
    var users = _userRepository.GetAllUsersByIds(ids);
    var collection = users.Select(user => new { id = user.Id, fullname = user.FirstName + " " + user.LastName });
    var result = new { users = collection };
    return this.Jsonp(result);
}

public IQueryable<User> GetAllUsersByIds(List<int> ids)
{
    return _db.Users.Where(c => ids.Contains(c.Id));
}
Then you just call the GetUsers function via a regular AJAX call, supplying the array of ids (in this case I am using JSON.stringify to send the array as a string and deserialize it back in the controller, but you could just send the array of ints and receive it as an array of ints in the controller). I've built an entire RESTful API using ASP.NET MVC that returns data as cross-domain JSON and that can be used from any app. That is, of course, if you can use ASP.NET MVC.
function GetUsers()
{
    var link = '<%= ResolveUrl("~")%>users?callback=?';
    var userIds = [];
    $('#multiselect :selected').each(function (i, selected) {
        userIds[i] = $(selected).val();
    });
    $.ajax({
        url: link,
        traditional: true,
        data: { 'userIds': JSON.stringify(userIds) },
        dataType: "jsonp",
        jsonpCallback: "refreshUsers"
    });
}