REST API MongoDB Authentication

I am thinking of using MongoDB as my main database. However, my app is
fully in JavaScript and I wanted to use the REST API, client side.
I still can't understand which security mechanisms I can use in order to
make a JS call to the database without revealing all the data to all the
users.
Please advise on this matter.
Regards,
Donald

First of all, you can enable database auth which will make the REST interface require authentication if connected to from a remote machine.
That said, it's a very bad idea to expose your database the way you suggest. Build a persistence abstraction layer in a server technology you're comfortable with (Node.js, for example) and put all security constraints and authentication there (a minimal sketch follows the list of advantages below). The advantages are numerous:
You can keep your API stable even if the MongoDB one changes. In most cases you can even replace it with another persistence solution if the need arises.
You can limit the load a single client can put on your database. If you expose the database directly, there's very little you can do to stop people from running expensive queries or even issuing potentially corrupting writes.
You can often do smart app-side caching and optimization that is not possible if every client directly accesses the database (this depends a bit on the app in question though).
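As a minimal sketch of that abstraction layer, assuming Node.js with Express and the official MongoDB driver (the route, collection name and auth check are illustrative, not something prescribed above):

    // Minimal sketch of the abstraction-layer idea in Node.js + Express with the
    // official MongoDB driver; the route, collection name and auth check are
    // illustrative assumptions.
    const express = require('express');
    const { MongoClient } = require('mongodb');

    const app = express();
    app.use(express.json());

    // Hypothetical auth check: decide who the caller is before any data is touched.
    function requireAuth(req, res, next) {
      if (!req.headers.authorization) return res.status(401).end();
      req.user = 'demo-user';   // in reality: validate the token/session here
      next();
    }

    MongoClient.connect('mongodb://localhost:27017').then(client => {
      const items = client.db('myapp').collection('items');

      // Expose only the queries you choose, with server-side filtering and limits,
      // so clients never talk to MongoDB directly.
      app.get('/api/items', requireAuth, async (req, res) => {
        const docs = await items.find({ owner: req.user }).limit(50).toArray();
        res.json(docs);
      });

      app.listen(3000);
    });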

Check out Sleepy.Mongoose; it's a REST interface for MongoDB. I haven't tried it, but it appears to support standard MongoDB authentication.

MongoLab offers MongoDB database hosting with a REST API that can be accessed client side; they even throw in some jQuery-based examples in their support documentation. That said, Remon is right that you sacrifice any security by doing so, because you're making your API key public.
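For illustration only (in the style of those jQuery examples; the URL shape may have changed since), this is why the key is effectively public:

    // Illustrative client-side call in the style of MongoLab's jQuery examples.
    // The apiKey is visible to anyone who opens the page source or the browser's
    // network tab, and whoever copies it can run their own queries.
    $.getJSON(
      'https://api.mongolab.com/api/1/databases/mydb/collections/users?apiKey=YOUR-API-KEY',
      function (docs) {
        console.log(docs);
      }
    );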

RESTHeart is a Web API for MongoDB.
It provides application level authorization and authentication.
Check the security documentation section.
Also some example applications are available on github:
blog example (using AngularJS via the $http service)
notes example (using AngularJS via the Restangular service)
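For a rough idea of what an authenticated client-side call looks like once users and permissions are configured (the database and collection names here are made up; see the security documentation for the real setup):

    // Illustrative fetch against a RESTHeart instance with authentication enabled;
    // 'mydb' and 'mycollection' are made-up names, and the credentials come from
    // whatever identity manager RESTHeart has been configured with.
    fetch('http://localhost:8080/mydb/mycollection', {
      headers: { Authorization: 'Basic ' + btoa('user:password') }
    })
      .then(function (res) { return res.json(); })
      .then(function (docs) { console.log(docs); });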

Related

JWT authentication for jBASE RESTful API

We are in the process of designing a front-end application with Angular which will call a jBASE server through RESTful APIs. The APIs are created with the jBASE component called jAgent.
Does jAgent support creating and verifying JWTs?
If not, what is the best way to handle authentication/authorization for the Angular application?
If we need to use JWTs, do we have to use an authentication middleware application (.NET Core or Node.js) for that?
Great question! At the moment there is no handler within jAgent, and our recommendation is to implement this, along with advanced web server/API gateway functionality, by way of other applications like HAProxy or Kong.
An expansion of jAgent functionality to include things like this is something we're still considering, but keep in mind that the power of jBASE lies in its native interactions with the host OS. Since there is no virtual OS layer, it can be easier to plug in off-the-shelf tools to fill in additional functionality, which gives you the flexibility to bring your own tooling.
In summary:
Not at the moment
Using an off-the-shelf package to act as your API gateway
Subject to the package you choose
That relegates jAgent to managing the API layer as it exists on the PICK/jBASE side, while the off-the-shelf package manages your API security layer.
One other note for you: I noticed that you included a link to the old jBASE docs hosted on HelpJuice. It's worth mentioning that we've migrated those docs to docs.zumasys.com. You'll find the docs there to be more up to date and also completely open sourced; part of the migration included their move to a GitHub repo, where we're happy to take community contributions.
For reference, the article you mentioned is available at https://docs.zumasys.com/jbase/connectivity/jagent/introduction-to-jagent-rest-services/.
Update:
One of our engineers has a program that will use openssl to generate the tokens for you, which you can find at https://github.com/patrickp/wjwt.
You will need openssl installed on the machine and in the path.
The WJWT.TEST program shows the usage. The important piece is the SECRET.KEY, which is the internal key you use to sign the payloads.
When a user first authenticates, you create the token with SIGN. Claims are any items/fields you wish to save/store. Do NOT put sensitive data in here, as it is viewable by anybody. The concept is that we sign the payload with our key and give the token back to the client. On future calls the client sends the token; we pull it out and call the VERIFY function, which basically re-signs the payload and validates that the signatures match. This confirms the payload was not manipulated.
Activities such as expiration you would build into your code.
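For readers more at home in JavaScript than jBASE BASIC, the sign/verify flow described above amounts to roughly the following. This is a conceptual sketch only; the secret, claims and helper names are illustrative, and the actual WJWT program does this in jBASE BASIC via openssl.

    // Conceptual sketch of the sign / re-sign-and-compare flow in Node.js
    // (HMAC-SHA256; the 'base64url' encoding needs a recent Node.js).
    // SECRET_KEY and the claims are illustrative.
    const crypto = require('crypto');
    const SECRET_KEY = 'replace-with-your-internal-key';

    const b64url = obj => Buffer.from(JSON.stringify(obj)).toString('base64url');

    function sign(claims) {
      const header = b64url({ alg: 'HS256', typ: 'JWT' });
      const payload = b64url(claims);                 // readable by anybody: no secrets here
      const sig = crypto.createHmac('sha256', SECRET_KEY)
                        .update(header + '.' + payload)
                        .digest('base64url');
      return header + '.' + payload + '.' + sig;
    }

    function verify(token) {
      const [header, payload, sig] = token.split('.');
      const expected = crypto.createHmac('sha256', SECRET_KEY)
                             .update(header + '.' + payload)
                             .digest('base64url');
      return sig === expected;                        // re-sign and compare, as described above
    }

    const token = sign({ user: 'donald', exp: Date.now() + 3600000 });
    console.log(verify(token));                       // true unless the payload was tampered with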
Long term we plan to take this library and refactor the code into our MVDB Toolkit library with more functionality. That library is something we provide to jBASE customers at no additional charge.

Best practices for securing a public Java Spring-Boot web app

I've got a public Java web app created with Spring Boot. I'm ready to release it to production and wanted to get some tips on making sure that it is as secure as possible.
It is basically a search engine that uses Apache Lucene in the backend and a lot of JavaScript in the front end. I am planning on deploying it to Amazon Web Services, while using my local machine as a backup/test environment.
Users can search and browse data. The frontend calls REST endpoints using JavaScript's XMLHttpRequest to query the backend for content and then displays it to the user.
The app is completely public and there is no user authentication as of yet.
The app also persists user requests to a database for tracking purposes.
Here's what I've done so far to secure it:
Make sure that the REST endpoints fully verify that the parameters given to them in the requests are valid.
What I plan on doing:
Using HTTPS.
Verifying that any PUT requests or URL requests have a reasonable size.
What I am considering adding:
Limiting the number of requests a user can make in a given time period. (I'm not sure if Spring Boot already has a facility to do this, or whether I should implement it myself.)
Use some kind of API key scheme to make sure that my endpoints are only accessed by my front end. (I'm not sure if this is effective, and I don't yet know how to do it.)
Is there anything else that I should consider doing? Do the things I listed above make sense?
Would greatly appreciate any tips on this.

ArangoDB Foxx as a REST back-end

I am working on an app that would greatly benefit from ArangoDB's multi-model capabilities. Considering the app's needs for the back-end, I have concluded that most, if not all, of it could be served through a REST API, so as to aid cleaner design for future development and integration with others. The API would then be consumed by several web and mobile front-end frameworks to handle the rest of the logic. The project will be developed in JavaScript for the whole stack, using the Node.js ecosystem.
The question itself:
Should and could one use ArangoDB + Foxx to create the complete back-end stack for serving a REST API, thus avoiding another layer/component in the stack (e.g. Express/hapi/LoopBack)?
Major back-end requirements:
Authentication with roles
Sessions
Encryption
Complex querying (root of my initial thought, as to avoid multiple hops between DB and back-end)
Entry parsing, validation and sanitization
Scheduled tasks
Mainly looking for:
Known design advantages
Known design limitations
"Hidden" bottlenecks
Other possible future regrets
Side question (that might answer some of the above): Could Foxx utilise some of the node middleware available via npm?
Thanks in advance for your time!
You can use ArangoDB Foxx as the sole backend of your application, however it is important to keep the limitations of Foxx (compared to a general purpose JS environment like Node.js) in mind when doing this.
You mention encryption. While ArangoDB does support some cryptography (e.g. HMAC signing and PBKDF2 key derivation for passwords), the support is not as exhaustive and extensible as in Node.js. Also, computationally expensive cryptography will affect the performance of the database (because, unlike Node.js, Foxx is strictly synchronous and thus all operations should be considered blocking).
ArangoDB does not support role-based authentication out of the box but it is perfectly reasonable to implement it within ArangoDB using Foxx (just like you would implement it in Node.js, except you don't need to leave the database).
For sessions there are generally two possible approaches: you can either use a collection with session documents (using ArangoDB as your session backend) or you can keep your services stateless by using signed tokens (Foxx comes with JWT support out of the box).
Complex/stored queries and input validation (using the joi schema library originally written for hapi) are actually some of the main use cases of Foxx so those shouldn't be any problem whatsoever.
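As a rough illustration of the validation side (the field names are made up; inside a Foxx route the schema would simply be attached to the endpoint definition):

    // Illustrative joi schema for validating and sanitizing an incoming entry;
    // the field names are made up.
    const joi = require('joi');

    const entrySchema = joi.object({
      title: joi.string().trim().min(1).max(200).required(),
      tags: joi.array().items(joi.string().lowercase()).default([]),
      score: joi.number().integer().min(0).max(100)
    });

    const { error, value } = entrySchema.validate({ title: '  Hello ', score: 42 });
    // value is the sanitized entry (trimmed title, default tags); error is undefined here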
Foxx comes with its own mechanism for queueing tasks, which can also be scheduled ahead or recur periodically. However depending on your requirements an external job or message queue may be a better fit. The good thing is you can get started with the built-in job queue right away and still move on to a dedicated solution if the need arises during development.
As for middleware and NPM packages: Foxx is not fully compatible with Node.js code. While we provide a lot of compatibility code and try to keep the core modules compatible where possible, a big difference is that Node.js is generally used to perform asynchronous operations while in ArangoDB all operations are synchronous.
If you have Node.js modules that don't use crypto, file or network I/O and don't use asynchronous APIs (e.g. setTimeout, promises) they may be compatible with Foxx. A lot of utility libraries like lodash work with no problems at all. Even if you find that a module doesn't work it may be possible to write an adapter for it like we have done with mocha (integrated into Foxx) and GraphQL (via the graphql-sync package on NPM).
In my experience it is a good approach to put your Foxx service behind a thin layer of Node.js (e.g. a simple express application that mostly just proxies to your Foxx API) and/or to delegate some parts of your backend to standalone Node.js microservices (e.g. integration with non-HTTP services like e-mail or LDAP) which can be integrated in Foxx via HTTP.
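A minimal sketch of that thin layer, assuming Express and the http-proxy-middleware package (the mount path, port and Foxx service URL are illustrative assumptions):

    // Thin Express layer in front of a Foxx service; '/api', the port and the
    // Foxx mount point are illustrative assumptions.
    const express = require('express');
    const { createProxyMiddleware } = require('http-proxy-middleware');

    const app = express();

    // Anything the Node.js layer should own (e-mail, LDAP, other non-HTTP
    // integrations) is handled here; everything else is forwarded to Foxx.
    app.use('/api', createProxyMiddleware({
      target: 'http://localhost:8529/_db/mydb/myfoxx',
      changeOrigin: true,
      pathRewrite: { '^/api': '' }   // strip the public prefix before forwarding
    }));

    app.listen(3000);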
One more thing: while a lot of existing express middleware likely isn't compatible with Foxx because of Node-specific dependencies and async logic, ArangoDB 3 will bring a new version of Foxx with support for middleware using a functionally express-compatible API.
I'm just starting to port my sails application to a FOXX application so I can answer some of your questions.
Role-based authorization in ArangoDB is probably at too high a level for what you want. In our case, we use an external service to authorize various web- and service-based applications at a very fine-grained level (much lower than a vertex or an edge). My feeling is that authorization at that level will require you to write it yourself in JavaScript. If it's just CRUD on a per-collection basis, then it shouldn't require much effort.
For authorization and sessions, I would look at the FOXX example found at: FOXX authorization-session example
It's not clear what you're asking about encryption. If you're talking about SSL connections, then that is natively supported (see arangodb end-points). As for internal encryption, there is a JavaScript crypto module (ArangoDb crypto).
Entry validation, etc. is supported by the JavaScript joi package.
Complex querying... Absolutely, and it's getting even better in ArangoDB version 3.x. Traversals can be chained (go down using one edge collection, then up using another).
You're right on the ball when thinking about efficiency. This is the main reason we're going from sails to FOXX. In our case, we filter query results based on permissions from our external service. This means that we can't use ArangoDB native skip and limit support if these attributes are specified by the client. In sails, we have to bring back results in chunks and collect until we hit the appropriate skip and limit values. By moving to FOXX, we save a lot of network and other resources. We tested this by having sails forward the request to our prototype FOXX implementation. This scaled much better than the sails post-processing setup.
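For illustration, the post-filtering that the external setup forces looks roughly like this; fetchChunk and hasPermission are stand-ins for the actual query and permission service, and doing the same inside a Foxx service avoids shipping the discarded documents over the network.

    // Rough sketch of chunked post-filtering: fetch blocks of results, drop the
    // documents the caller may not see, and only then honour skip/limit.
    // fetchChunk and hasPermission are illustrative stand-ins.
    async function filteredPage(user, skip, limit) {
      const page = [];
      const CHUNK = 100;
      let offset = 0;
      let skipped = 0;

      while (page.length < limit) {
        const chunk = await fetchChunk(offset, CHUNK);     // e.g. a query with its own skip/limit
        if (chunk.length === 0) break;                     // no more data
        for (const doc of chunk) {
          if (!(await hasPermission(user, doc))) continue; // external permission check
          if (skipped < skip) { skipped++; continue; }     // skip counts permitted docs only
          page.push(doc);
          if (page.length === limit) break;
        }
        offset += CHUNK;
      }
      return page;
    }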
You can use NPM modules with restrictions. See Javascript Modules

Is Meteor an option, if i need an additional REST API?

I'm going to write a web app which should be CRUD-accessible from both the web and native mobile device apps. For the latter I'm definitely committed to a REST API. Is it possible to realize that with Meteor.com? Would it be an option to use Meteor for just the web and a second REST interface to talk to the MongoDB directly? Since the Meteor client listens for changes in the MongoDB, this should not cause conflicts, should it?
As of 2015, look at Gadi's answer for the Meteorpedia entry on REST APIs, and at krose's answer comparing REST API packages. Discussion for folding REST APIs into core is on Hackpad. This question is a duplicate of How to expose a RESTful service with Meteor, which has much better answers. -- Dan Dascalescu
Old answer (2012) below.
For adding RESTful methods on top of your data, look into the Collection API written for Meteor:
https://github.com/crazytoad/meteor-collectionapi
As for authentication for accessing the database, take a look at this project:
https://github.com/meteor/meteor/wiki/Getting-started-with-Auth
Both are definitely early in development, but you can create a RESTful API and integrate it with a native mobile client pretty easily.
There are a lot of duplicates of this question. I did a full write-up on this in Meteorpedia which I believe covers all the issues:
http://www.meteorpedia.com/read/REST_API
The post reviews all 6 options for creating REST interfaces, from highest level (e.g. smart packages that handle everything for you) to lowest level (e.g. writing your own connectHandler).
Additionally the post covers when using a REST interface is the right or wrong thing to do in Meteor, references Meteor REST testing tools, and explains common pitfalls like CORS security issues.
If you are planning to develop a production application, then Meteor is not an option right now. It's under constant change, and there are still many common features it has to support before it's ready to use, which will take quite some time.
As for your question, somebody has already asked and answered a question about support for file uploading in Meteor (it also contains HTTP-handling-related information):
How would one handle a file upload with Meteor?

Web UI to a restful interface, good idea?

I am working on an experimental website (which is accessible through a web browser) that will act as a front-end to a RESTful interface (a sub-system). The website will serve as an interface between a user and the RESTful interface, as it will make HTTP requests to the RESTful interface for almost all database operations. Authentication will probably be done using OpenID, and authorization for the database operations will be done via OAuth.
Just out of curiosity, is this a feasible solution, or should I develop two systems that access the database in parallel (i.e. the website has its own data access logic, and the RESTful interface has another)? And what are the pros/cons if I insist on doing it this way (it is just an experimental project for me to learn how things like OpenID and OAuth work in real life anyway), besides there being more database queries and HTTP requests generated for each transaction?
Your concept sounds quite feasible. I'd say that you'll get some fairly good wins out of this approach. For starters you'll get a large degree of code reuse since you'll be able to put other front ends on top of the RESTful service. Additionally, you'll be able to unit test this architecture with relative ease. Finally, you'll be able to give 3rd party developers access to the same API that you use (subject possibly to some restrictions) which will be a huge win when it comes to attracting customers and developers to your platform.
On the down side, depending on how you structure your back end, you could run into the standard problem of granularity. Too much granularity and you'll end up making lots of connections for very small amounts of data. Too little and you'll get more data than you need in some cases. As for security, you should be able to lock down the back end so that requests can only be made under certain conditions: requests contain an authorization token, API key, etc.
Sounds good, but I'd recommend that you do this only if you plan to open up the RESTful API for other UIs to use, or simply to learn something cool. Support HTML, XML, and JSON for the interface.
Otherwise, use a great MVC framework instead (ASP.NET MVC, Rails, CakePHP). You'll end up with the same basic result, but you'll be more 'strongly' typed to the database.
With a modern JavaScript library, your approach is quite straightforward.
ExtJS has always had Ajax support, but it is now able to do this via a REST interface.
So your ExtJS user interface components receive a URL. They populate themselves via a GET to the URL, and store updates via a POST to the URL.
This has worked really well on a project I'm currently working on. By applying RESTful principles there's an almost clinical separation between the front and back ends, meaning it would be a trivial undertaking to replace either. Plus, the API barely needs documenting, since it's an implementation of an existing mature standard.
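In later ExtJS versions (4 and up) the idea looks roughly like this; the model name, fields and URL are illustrative:

    // Rough ExtJS 4+ sketch: a model bound to a REST URL. Loading the store issues
    // a GET; sync() turns added/edited/removed records into POST/PUT/DELETE calls.
    Ext.define('App.model.User', {
      extend: 'Ext.data.Model',
      fields: ['id', 'name', 'email'],
      proxy: { type: 'rest', url: '/api/users' }
    });

    var store = Ext.create('Ext.data.Store', {
      model: 'App.model.User',
      autoLoad: true                      // GET /api/users
    });

    store.add({ name: 'Ian', email: 'ian@example.com' });
    store.sync();                         // POST /api/users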
Good luck,
Ian
Wow, a question from 2009! And it's funny to read the answers. Many people seem to disagree with the web services approach and a JS front end, which has nowadays become kind of a standard, known as Single Page Applications.
I think the general approach you outline is quite feasible -- the main pro is flexibility, the main con is that it won't protect clueless users against their own ((expletive deleted)) abuses. As most users are likely to be clueless, this isn't feasible for mass consumption... but, it's fine for really leet users!-)
So to clarify, you want to have your web UI call into your web service, which in turn calls into the database?
This is exactly the path I took for a recent project and I think it was a mistake because you end up creating a lot of extra work. Here's why:
When you are coding your web service, you will create a library to wrap database calls, which is typical. No problem there.
But then when you code your web UI, you will end up creating another library to wrap calls into the REST interface... because otherwise it will get cumbersome making all the raw HTTP calls.
So you have essentially created two data access libraries, one to wrap the DB and the other to wrap the web service calls. This basically doubles the amount of work you do, because for every operation on a resource, you end up implementing it in both libraries. This gets tiring real fast.
The simpler alternative is to create a single library that wraps access to the database, as before, then use that library from BOTH the web UI and web service.
This is assuming that your web UI and web service reside on the same network and both have direct access to the backend database server (which was the case for me). In this setup, having both go directly to the database is also a lot more efficient than having the UI go through the web service.
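A sketch of the shared-library idea (module and function names are made up; the driver could be anything):

    // One data-access module used by BOTH the web UI and the web service;
    // names are illustrative and the MongoDB driver could be swapped for any other.
    // data-access.js
    const { MongoClient } = require('mongodb');
    let db;

    async function connect(url) {
      db = (await MongoClient.connect(url)).db();
    }

    function findArticles(filter) {
      return db.collection('articles').find(filter).limit(100).toArray();
    }

    module.exports = { connect, findArticles };

    // web service: app.get('/articles', async (req, res) => res.json(await findArticles({})));
    // web UI (same network): call findArticles() directly instead of wrapping raw HTTP calls.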