How to restrict Collection.find() to certain select patterns in Meteor - mongodb

I am experimenting with a simple chat app and Meteor 0.8.0.
For a list of messages, where each message references a user through user_id, I want to display the username together with the message.
Is it possible to restrict the select patterns for a find()-call, so that e.g. Meteor.users.find({_id: msg.userId}) is allowed but not Meteor.users.find({})?
Unfortunately this is not covered by Collection.allow/.deny, which I think would be the natural place for it. If it were possible, I could simply use Meteor.publish("usersWithName", function() { return Meteor.users.find({}, {fields: {username: 1}}); }); without having to worry that an attacker could fetch the complete user list on the client.
Currently, I am using the smart-publish package to publish only the users referenced by messages, but I would prefer a simpler solution.

No, there is no way to restrict find queries from being run client-side, since the server is never contacted: the client just runs the query against its local copy of the collection. It works the same way as an insert, update, or delete, which first happens client-side and is then validated against the server (i.e. someone can remove a document on their client, but the server will then reject it).
The best way to handle this is to publish only the documents you specifically need. As you mentioned, if you only publish the documents that the client should have, then you are secure. Even if there were a way to force a restriction on client-side queries, it still would not really make sense to send down more documents than the client needs.
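As a rough sketch of that idea, here is what a restricted publication might look like. It assumes a Messages collection whose documents carry a userId field; both names come from the question's description rather than real code, and the publication does not re-run when new messages reference new users (which is exactly the gap packages like smart-publish fill):

Meteor.publish("usersForMessages", function () {
  // Collect the ids of the users referenced by messages...
  var userIds = Messages.find({}, {fields: {userId: 1}}).map(function (msg) {
    return msg.userId;
  });
  // ...and publish only those users, restricted to the username field.
  return Meteor.users.find({_id: {$in: userIds}}, {fields: {username: 1}});
});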

Related

Safest approach: onChange vs https cloud functions

I need to allow a user to change only certain parts of a document, and I came up with two different solutions:
A: Lock the document with Firestore rules and modify the parts I am interested in through an HTTPS function (checking that the request comes from the document owner).
B: Allow only the owner of the document to make changes (with Firestore rules) and trigger an onChange Cloud Function to check that they changed only the things they are allowed to; if not, reject the changes.
I would like to know whether one approach is safer, or whether both are equally valid. How easy is it to trick an HTTPS function?
In many cases both approaches are valid and will depend on preference.
People coming from a more traditional client/server environment will generally prefer a Cloud Function. This also allows you to do certain things which are not possible in a rule, for example you can perform any action with full server credentials.
Rules are perhaps more idiomatic for Firebase, and may be cheaper and faster. That said, it is very important to craft any rules very carefully. The security rules documentation is worth reading closely for this case.
The suggestion here is to use rules to prevent unwanted changes, rather than allowing the owner to make any change and then undoing it in a trigger. If you want to implement logic which can't be expressed in rules, you might consider letting the owner write the changes to some kind of staging area, either separate fields in the same document or a new pending document, and then performing the checks and moving the data to its proper location in a trigger.
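As a hedged sketch of that staging-area pattern (not Firebase's prescribed way of doing it): the collection names pendingEdits and documents, the ALLOWED_FIELDS list, and the document shape are all assumptions made up for illustration, and rules would still be needed to restrict who may write to the staging collection.

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Fields the owner is allowed to change (assumed for this example).
const ALLOWED_FIELDS = ['title', 'description'];

exports.applyPendingEdit = functions.firestore
  .document('pendingEdits/{editId}')
  .onCreate(async (snap, context) => {
    const edit = snap.data();   // expected shape: { docId, changes: {...} }
    const update = {};
    for (const field of ALLOWED_FIELDS) {
      if (edit.changes && field in edit.changes) {
        update[field] = edit.changes[field];
      }
    }
    if (Object.keys(update).length > 0) {
      // Copy only the whitelisted fields into the real document.
      await admin.firestore().doc(`documents/${edit.docId}`).update(update);
    }
    // Remove the staging document whether or not anything was applied.
    return snap.ref.delete();
  });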

How to delete items from a subset with REST API

I'm wondering what the best way is to delete items from a subset in a RESTful way. I have users and series; each user has his own list of series (watching, completed, etc.). For example, if we want to get a user's list we can do it with: GET /users/:id_user/series
If we want to remove a series from that user's list (but we don't want to delete the series itself), how should it be done?
I thought about the possibility of using DELETE /users/:id_user/series/:id_serie, but I'm not sure if it's the correct way for this case (maybe PATCH?).
I have another case: series and reviews. We can get the reviews like this: GET /series/:serie_id/reviews. In the previous case we didn't want to delete the series itself when removing it from a user's list, but in this case we do want to delete the review, because its existence depends on the series. So I guess DELETE /series/:serie_id/reviews/:review_id is correct here.
Is this difference important when choosing the REST operation to delete the object/item from the subset?
How would you do it on the web?
You'd follow a link to a form, with input controls. You might have something like a dropdown if you wanted to delete one series at a time, or lots of check boxes if you wanted to support a bulk delete. The user would provide input, hit the submit button, and the browser would create an application/x-www-form-urlencoded document and send it to the server.
What method would be used? Normally POST, because we are intending an edit to some resource on the server.
What resource would we be editing? Well, in truth, it could be anything -- the server would include that information in the form metadata, so the client can just do what it is told.
So where should the server tell it to submit the form? Again, it could be anywhere... but a useful approach is to think about what resource in the client's cache is being updated. Because if we send the request to that resource, we get intelligent cache invalidation "for free".
So on the web, we would expect to see:
POST /users/:id_user/series
Does it have to be POST? On the HTML web, maybe it does, because the ubiquitous client of the web is a browser, not an editor.
It is okay to use POST.
But a perfectly valid alternative would be to edit the local copy of /users/:id_user/series, and then send back to the server either a complete copy of the new version (PUT) or a patch document describing the edits (PATCH). Notice that with both of these choices, the target URI is still /users/:id_user/series, so we still get the cache invalidation magic.
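As a hedged illustration of the PATCH variant (the paths, ids, and the use of a JSON Patch body are assumptions for the example, not something your API has to adopt):

// Remove one entry from the user's series list by patching the list resource.
// Because the target is /users/42/series itself, caches holding that
// representation are invalidated by this unsafe request.
fetch('/users/42/series', {
  method: 'PATCH',
  headers: {'Content-Type': 'application/json-patch+json'},
  body: JSON.stringify([
    {op: 'remove', path: '/series/7'}   // hypothetical pointer into the list representation
  ])
}).then(function (response) {
  if (!response.ok) {
    throw new Error('Edit rejected: ' + response.status);
  }
});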
Creating a new resource in your model just to have something to DELETE is probably the wrong idea.
There are cases where an edit, or a delete, will necessarily impact more than one resource.
There are some specific circumstances when you can get the right magic cache invalidation with two resources (for instance, delete one document, and send back an updated copy of another).
But we don't, today, have a general purpose cache invalidation mechanism. (The closest thing I've been able to find is this, which seems to have stalled out in 2012.)

Meteor - Why not just publish all the collection data?

This may be quite an easy question to answer as it may just be my lack of understanding, but if you are having to run the query twice - once on the server and once on the client - why not just publish all the collection data, and then just run one query on the client?
Obviously I don't mean doing this for the users collection, but if you have a blog Posts collection, wouldn't this be beneficial?
Publish all the post data, then subscribe to it and run whatever query is necessary on the client to get the data you need.
Publishing everything is fine in a 'development' environment, which is why Meteor adds the autopublish package by default, but it has pitfalls in a 'production' environment. I find these two points to be of importance:
Security: The idea is to supply only as much data to the client as required. You can never trust the client, and you don't know what the client may use the data for. For your use case of simple blog posts this may not be a serious risk, but it may be a critical risk for an e-commerce application. The last thing you want is a hacker using the data and leveraging a bug in your code to do nasty stuff.
Data overhead: For subscriptions, waitOn is generally used, so the templates are not rendered until all the data has been made available to the client. If you have a very large amount of data, it will take considerable time to render. So it is advised to keep the data at an 'only what you need' level to optimize this time too.
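A minimal sketch of what a restricted publication for the blog example could look like; the Posts collection, the field names, and the limit are all illustrative assumptions:

Meteor.publish("recentPosts", function () {
  // Send down only the fields the post list actually renders,
  // and cap the number of documents instead of shipping the whole collection.
  return Posts.find(
    {published: true},
    {fields: {title: 1, summary: 1, createdAt: 1}, sort: {createdAt: -1}, limit: 20}
  );
});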

Presence Server working details

I am fairly new to the presence server thing. I have got the idea of how a presence server works: concepts like presentity, watcher, PUBLISH, SUBSCRIBE, NOTIFY, and SIP transactions.
I have to work on a project prototype where we have the Presence Server database exposed as SaaS using REST.
One thing I am not able to find out is whether the presence data, or the information about the publisher and subscriber, is stored in database tables or in XML files. Everywhere I read, they talk about the XCAP server, which holds the policy documents, and these policy documents are applied to the publisher's and subscriber's documents, which are also XML. I am wondering what is in the database then?
Q. So, is it like the information is stored in tables and then converted to XML?
Q. Can we have all the information in tables and let go of the XCAP server?
I am desperately looking for the answer.
Thanks
The following image can be used as a reference for what is achieved by the XCAP server. It provides HTTP access for clients to the rules and profiles corresponding to the user and the preferences that are available in the DB. So it is a direct interface to the DB, and it is needed if you are going to provide access over REST.
Image courtesy - http://openxcap.org/
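For a feel of what that HTTP interface looks like from a client, here is a hedged example of retrieving a user's pres-rules document; the host, XCAP root, user, and even whether the deployment exposes pres-rules at this path are all assumptions, and a real server would also require authentication:

// Hypothetical XCAP fetch of a presence authorization policy document.
fetch('https://xcap.example.com/xcap-root/pres-rules/users/sip:alice@example.com/index', {
  headers: {'Accept': 'application/auth-policy+xml'}
}).then(function (response) {
  return response.text();   // the policy document itself is XML
}).then(function (xml) {
  console.log(xml);
});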

xmpp server and roster issue

I am working on a Jabber chat application using an XMPP server.
I want to make two users friends, so I have to add roster entries using MySQL queries.
I have made entries in two tables: (1) ofRoster (2) ofRosterGroups.
I made entries in both tables, but it's not working.
Is there anything I am missing?
I can do this with the admin panel, but I don't want to do that.
I think you are using Openfire (those SQL tables look like the Openfire setup). If so, the table you have to edit is "ofGroupUser". To add a user to a group you need to do a SQL insert into that table, where the group name is the group you want to add the user to, the username is the user you are adding to the group, and administrator is a flag for that user's authority (just use 0). An example insert would look like this:
INSERT INTO ofGroupUser VALUES ('group name', 'username', 0);
However, as mentioned in the post above, this is not a good method for doing this, as it will not immediately affect the server. You must restart the server for these changes to take effect, because Openfire (or whatever server you are using) probably only reads the database on startup. Once it caches everything, it will write to the database according to requests (like adding users or groups through the admin console), but it will not read from it, so your additions will not be seen until a server restart occurs.
Basically, doing manual SQL inserts will produce the desired results and, if you are just testing some functionality, will work just fine as long as you restart the server. If you are using Openfire and need to do group administration outside the web UI, I would look into using a different server. As far as I know, Openfire isn't really great with administration outside of its web UI. Here is a list of many open source XMPP servers. I'd recommend ejabberd (as mentioned in the post above); it has a very nice control tool called ejabberdctl with an available expansion module called mod_ctlextra (here is the man page for it, which lists commands) that will allow you to do what I assume you are wanting. Then you don't have to worry about SQL and restarting; just use their tool, which is how it should be.
Also, on a side note, ejabberd is extremely efficient due to the nature of the language used to write it: Erlang. Great stuff.
Hope that helps!
Presumably you are using the ODBC modules with ejabberd. The SQL schema, though, defines two tables, rostergroups and rosterusers, not the ones you mention in the question. In any case, you should not update the tables directly; ejabberd keeps internal state and does not get notified of your changes.
The way to go is to actually have the users send mutual subscriptions and accept them, as per the RFC. Roster Item Exchange might also be useful.
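To illustrate what that subscription exchange looks like from the client side, here is a hedged sketch using Strophe.js; the JIDs are placeholders, the connection setup is omitted, and any other XMPP client library exposes the same presence stanzas:

// Assumes an already-connected Strophe connection in `connection`.
var contactJid = 'userB@example.com';   // placeholder JID

// User A's client: request a subscription to B's presence.
connection.send($pres({to: contactJid, type: 'subscribe'}));

// User B's client: accept incoming subscription requests and subscribe back,
// which yields the mutual ("both") subscription the roster tables record.
connection.addHandler(function (presence) {
  if (presence.getAttribute('type') === 'subscribe') {
    var from = presence.getAttribute('from');
    connection.send($pres({to: from, type: 'subscribed'}));  // accept
    connection.send($pres({to: from, type: 'subscribe'}));   // subscribe back
  }
  return true;  // keep the handler installed
}, null, 'presence');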