The answers to "Does Sails.js or Meteor.js work with ArangoDB or OrientDB?" do not contain information specifically about the Meteor/ArangoDB combination.
This is what I need to know: how close is ArangoDB to being a drop-in replacement for MongoDB in Meteor?
The reasons I would prefer to use Arango:
friendlier license (Apache vs Mongo's AGPL)
graph db features built-in (I'm going to need that)
ACID transactions
Out of the box, Meteor only supports MongoDB directly, so there is no true "drop-in" replacement for MongoDB if you want to stay on the standard Meteor stack.
Because the Meteor server is built on top of Node.js, you can simply use the JavaScript driver for ArangoDB to talk to ArangoDB from your server-side Meteor code. Alternatively, you can use the ArangoDB HTTP API directly.
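For example, a server-side Meteor method could query ArangoDB through the driver. This is a minimal sketch, assuming the arangojs npm package is installed and an ArangoDB server is reachable locally; the database name, credentials, the articles collection and the method name are placeholders, and the exact constructor options vary between arangojs versions, so check the docs of the version you install.

```js
import { Meteor } from "meteor/meteor";
import { Database, aql } from "arangojs";

// Connection details are placeholders for this sketch.
const db = new Database({
  url: "http://localhost:8529",
  databaseName: "myapp",
  auth: { username: "root", password: "" },
});

Meteor.methods({
  // Recent Meteor releases accept async methods; on older releases you would
  // wrap the promise with Promise.await (fibers).
  async "articles.byAuthor"(author) {
    const cursor = await db.query(aql`
      FOR a IN articles
        FILTER a.author == ${author}
        SORT a.createdAt DESC
        RETURN a
    `);
    return cursor.all();
  },
});
```

Keep in mind that data fetched this way is not reactive the way Meteor's built-in Mongo collections are; the client has to call such methods explicitly.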
For a less database-specific solution you can look into Apollo, as BennyB pointed out, but keep in mind that Apollo communicates using GraphQL. While GraphQL offers a lot of flexibility to frontend development, it also creates certain limitations for backend development, especially when it comes to optimizing queries for performance. A naive implementation of a GraphQL schema treats the database purely as key-value storage, which will not play to the strengths you're interested in (in particular, transactions won't be available this way).
Related
If I want to develop a Create, Read, Update, Delete (CRUD) system, what should I consider when choosing an API?
What are the pros and cons of NodeJS API and Java API?
Why can NodeJS's single thread make servers more scalable than multiple threads?
What databases can be accessed using the Java API's Java Database Connectivity (JDBC)?
What are the similarities between NodeJS API and Java API?
I think the best choice of API for your CRUD development depends on the type of application. If you want a fast, lightweight app, Node.js is a good choice for your API. If that matters less and you want more advanced, mature features, you can pick Java for your API.
Compared to the Java API, Node.js's pros are that it is asynchronous, very fast, and highly scalable. Its cons are that it is not well suited to CPU-intensive tasks and is harder to work with when using relational databases.
A single-threaded server can handle many clients with ease because the event loop eliminates the need for one thread per connection, and using fewer threads reduces memory and resource usage.
Oracle and MySQL, among others; any database that provides a JDBC driver can be accessed this way.
As for similarities: the Java API can do what NodeJS does, and NodeJS can do what the Java API does; the two largely overlap in capability.
I am trying to implement a project on the cloud using as few resources as possible (CPU, RAM) while being able to handle a medium to large number of requests against the database.
For the database I have chosen MongoDB, but for the backend I am torn between Go and Quarkus.
Go has many advantages, but the one thing that concerns me is the interaction with MongoDB. The official MongoDB driver for Go doesn't offer a reactive interface, and even though Go code can easily be written to run concurrently, I am afraid MongoDB access will be my bottleneck.
Quarkus looks very promising: it is supported by Red Hat, it has been built to address many issues of the cloud era, it sits on top of async servers, and it supports reactive communication with MongoDB.
What is your opinion on the above? What would you suggest?
Thanks,
I'm running these tests myself, comparing reactive Quarkus and Go. So far Go is much better.
I'm looking for a simple way to push changes from a free relational database (PostgreSQL, MySQL, SQLite, etc.) to clients' browsers via WebSocket or Web Push.
I want to avoid the whole server-side JavaScript ecosystem (Node.js, npm and the like) and NoSQL databases.
Everything must be hosted on my company's servers; I can't use third-party services.
I found these interesting solutions:
http://initd.org/psycopg/articles/2010/12/01/postgresql-notifications-psycopg2-eventlet/ [with Python]
https://gist.github.com/drocco007/6e44ac1a581546c16e67 [the same one slightly improved]
https://coussej.github.io/2015/09/15/Listening-to-generic-JSON-notifications-from-PostgreSQL-in-Go/ [with Go]
Do you know other ways to get this done?
Is PostgreSQL the more suitable free RDBMS to do this?
Can it be accomplished with a SQLite database?
Can Apache or nginx features be used to achieve this?
Update 01/23/17: I wrote an application called postgresql2websocket in order to send PostgreSQL notifications over WebSockets, using Python 3 with asyncio + aiohttp + asyncpg: https://github.com/frafra/postgresql2websocket. You could combine it with PostgREST in order to have both standard REST APIs and realtime updates over WebSockets.
As far as I know, there is no HTTP server extension for exposing SQL databases over WebSockets without anything in the middle.
You can use Python on the server side, like this: Real Time Web Apps with (just) Python and Postgres. I think it could be improved thanks to aiopg. If you don't need Websockets, you can just use ngx_postgres.
If you like Django, Django Channels will probably be included in Django 1.10 (a Redis/in-memory/... channel layer alongside your SQL backend).
You could use SQLite, but bear in mind that you would have to implement a separate server-side publish/subscribe mechanism (like Django Channels does), because SQLite doesn't have one.
If you're just interested in pub/sub over Websockets, you could use Webdis (Redis-based solution): it would be probably lighter than a full SQL database.
I am working on an app that would greatly benefit from ArangoDB's multi-model capabilities. Considering the app's back-end needs, I have concluded that most, if not all, of it could be served through a REST API, to aid cleaner design for future development and integration with others. The API would then be consumed by several web and mobile front-end frameworks to handle the rest of the logic. The project will be developed in JavaScript for the whole stack, using the Node.js ecosystem.
The question itself:
Should and could one use ArangoDB + Foxx to build the complete back-end stack serving a REST API, thus avoiding another layer/component in the stack (e.g. Express/hapi/LoopBack)?
Major back-end requirements:
Authentication with roles
Sessions
Encryption
Complex querying (the root of my initial thought, to avoid multiple hops between the DB and the back-end)
Entry parsing, validation and sanitization
Scheduled tasks
Mainly looking for:
Known design advantages
Known design limitations
"Hidden" bottlenecks
Other possible future regrets
Side question (that might answer some of the above): could Foxx utilise some of the Node middleware available via npm?
Thanks in advance for your time!
You can use ArangoDB Foxx as the sole backend of your application, however it is important to keep the limitations of Foxx (compared to a general purpose JS environment like Node.js) in mind when doing this.
You mention encryption. While ArangoDB does support some cryptography (e.g. HMAC signing and PBKDF2 key derivation for passwords), the support is not as exhaustive and extensible as in Node.js. Also, computationally expensive cryptography will affect the performance of the database (because unlike Node.js, Foxx is strictly synchronous, so all operations should be considered blocking).
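To give an idea of what is available, here is a minimal sketch of those built-in primitives (the module is @arangodb/crypto in ArangoDB 3.x and org/arangodb/crypto in 2.x; the iteration count, key length and inputs are illustrative only):

```js
"use strict";
const crypto = require("@arangodb/crypto");

// Derive a password hash with PBKDF2. Keep the iteration count modest:
// the call blocks the V8 context it runs in, so it competes with queries.
const salt = crypto.genRandomAlphaNumbers(32);
const hash = crypto.pbkdf2(salt, "correct horse battery staple", 10000, 64);

// Sign a payload with HMAC-SHA256 and verify it in constant time.
const signature = crypto.hmac("secret-key", "some payload", "sha256");
const ok = crypto.constantEquals(
  signature,
  crypto.hmac("secret-key", "some payload", "sha256")
);
```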
ArangoDB does not support role-based authentication out of the box but it is perfectly reasonable to implement it within ArangoDB using Foxx (just like you would implement it in Node.js, except you don't need to leave the database).
For sessions there are generally two possible approaches: you can either use a collection with session documents (using ArangoDB as your session backend) or you can keep your services stateless by using signed tokens (Foxx comes with JWT support out of the box).
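As a sketch of the stateless-token variant, using the JWT helpers from ArangoDB's crypto module (the secret, claim names and expiry policy are placeholders; a collection-backed approach would instead save and look up a session document):

```js
"use strict";
const crypto = require("@arangodb/crypto");

const SECRET = "change-me"; // placeholder; load from service configuration

// Issue an HS256-signed token carrying the user key and an expiry timestamp.
function issueToken(user) {
  return crypto.jwtEncode(SECRET, {
    sub: user._key,
    exp: Date.now() + 60 * 60 * 1000, // valid for one hour
  }, "HS256");
}

// jwtDecode verifies the signature and throws if the token was tampered with.
function readToken(token) {
  const payload = crypto.jwtDecode(SECRET, token);
  if (payload.exp < Date.now()) {
    throw new Error("Session expired");
  }
  return payload;
}
```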
Complex/stored queries and input validation (using the joi schema library originally written for hapi) are actually some of the main use cases of Foxx so those shouldn't be any problem whatsoever.
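For illustration, a route with joi validation wrapping an AQL query could look like the following sketch (written against the router API that arrived with ArangoDB 3; Foxx 2.x used the Controller API instead, and the collection and field names here are invented):

```js
"use strict";
const createRouter = require("@arangodb/foxx/router");
const joi = require("joi");
const { db, aql } = require("@arangodb");

const router = createRouter();
module.context.use(router);

// POST /search with a validated JSON body; invalid input is rejected
// with a 400 before the handler runs.
router.post("/search", function (req, res) {
  const { author, limit } = req.body;
  const articles = db._query(aql`
    FOR a IN articles
      FILTER a.author == ${author}
      SORT a.createdAt DESC
      LIMIT ${limit}
      RETURN a
  `).toArray();
  res.send(articles);
})
.body(joi.object({
  author: joi.string().required(),
  limit: joi.number().integer().min(1).max(100).default(20)
}), "Search parameters");
```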
Foxx comes with its own mechanism for queueing tasks, which can also be scheduled ahead or recur periodically. However depending on your requirements an external job or message queue may be a better fit. The good thing is you can get started with the built-in job queue right away and still move on to a dedicated solution if the need arises during development.
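A sketch of the built-in queue (the module path shown is the ArangoDB 3 one; the script name must match an entry under "scripts" in the service's manifest.json, and the names and intervals here are made up):

```js
"use strict";
const queues = require("@arangodb/foxx/queues");

// Create (or fetch) a queue processed by at most one worker at a time.
const queue = queues.create("maintenance", 1);

// Push a job that runs this service's "purge-expired-sessions" script
// once an hour, indefinitely.
queue.push(
  { mount: module.context.mount, name: "purge-expired-sessions" },
  { olderThanMs: 60 * 60 * 1000 },                  // data passed to the script
  { repeatTimes: Infinity, repeatDelay: 60 * 60 * 1000 }
);
```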
As for middleware and NPM packages: Foxx is not fully compatible with Node.js code. While we provide a lot of compatibility code and try to keep the core modules compatible where possible, a big difference is that Node.js is generally used to perform asynchronous operations while in ArangoDB all operations are synchronous.
If you have Node.js modules that don't use crypto, file or network I/O and don't use asynchronous APIs (e.g. setTimeout, promises) they may be compatible with Foxx. A lot of utility libraries like lodash work with no problems at all. Even if you find that a module doesn't work it may be possible to write an adapter for it like we have done with mocha (integrated into Foxx) and GraphQL (via the graphql-sync package on NPM).
In my experience it is a good approach to put your Foxx service behind a thin layer of Node.js (e.g. a simple express application that mostly just proxies to your Foxx API) and/or to delegate some parts of your backend to standalone Node.js microservices (e.g. integration with non-HTTP services like e-mail or LDAP) which can be integrated in Foxx via HTTP.
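To make the "thin Node.js layer" idea concrete, a stripped-down sketch might look like this (the Foxx mount path, ports and the extra /contact endpoint are invented; it uses the global fetch available in Node 18+, so substitute an HTTP client of your choice on older versions):

```js
const express = require("express");

const app = express();
// Base URL of the Foxx service (database and mount point are placeholders).
const FOXX_BASE = "http://localhost:8529/_db/myapp/api";

// Forward everything under /api to the Foxx service.
app.use("/api", express.json(), async (req, res) => {
  try {
    const upstream = await fetch(FOXX_BASE + req.url, {
      method: req.method,
      headers: { "content-type": "application/json" },
      body: ["GET", "HEAD"].includes(req.method)
        ? undefined
        : JSON.stringify(req.body),
    });
    res.status(upstream.status).send(await upstream.text());
  } catch (err) {
    res.status(502).send({ error: "Foxx service unreachable" });
  }
});

// Node-only concerns (e-mail, LDAP, ...) live next to the proxy.
app.post("/contact", express.json(), (req, res) => {
  // Sending mail with a Node mail library would go here.
  res.sendStatus(202);
});

app.listen(3000);
```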
One more thing: while a lot of existing express middleware likely isn't compatible with Foxx because of Node-specific dependencies and async logic, ArangoDB 3 will bring a new version of Foxx with support for middleware using a functionally express-compatible API.
I'm just starting to port my Sails application to a Foxx application, so I can answer some of your questions.
The role-based authorization built into ArangoDB is probably coarser-grained than what you want. In our case, we use an external service to authorize various web and service-based applications at a very fine-grained level (much lower than a vertex or an edge). My feeling is that authorization at that level will require you to write it yourself in JavaScript. If it's just CRUD on a per-collection basis, then it shouldn't require much effort.
For authorization and sessions, I would look at the Foxx example found at: Foxx authorization-session example
It's not clear what you're asking about encryption. If you're talking about SSL connections, then that is natively supported (see ArangoDB endpoints). As for internal encryption, there is a JavaScript crypto module: ArangoDB crypto
Entry validation, etc. is supported by the JavaScript joi package.
Complex querying... Absolutely, and it's getting even better in ArangoDB 3.x. Traversals can be chained (go down using one edge collection, then up using another).
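As a sketch of that down-then-up pattern, run from Foxx or arangosh (the collection names users, memberOf, grants and the edge directions are made up for this example):

```js
"use strict";
const { db, aql } = require("@arangodb");

// Go "down" from a user to its groups over the memberOf edge collection,
// then "up" from each group to the permissions granted to it over grants.
const permissions = db._query(aql`
  FOR group IN 1..1 OUTBOUND ${"users/alice"} memberOf
    FOR perm IN 1..1 INBOUND group grants
      RETURN perm
`).toArray();
```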
You're thinking along the right lines when it comes to efficiency. This is the main reason we're moving from Sails to Foxx. In our case, we filter query results based on permissions from our external service. This means we can't use ArangoDB's native skip and limit support when those attributes are specified by the client; in Sails, we have to bring back results in chunks and collect them until we hit the appropriate skip and limit values. By moving to Foxx, we save a lot of network and other resources. We tested this by having Sails forward requests to our prototype Foxx implementation, and it scaled much better than the Sails post-processing setup.
You can use NPM modules with restrictions. See Javascript Modules
I'm looking for a common data access framework that will provide portability across various NoSQL databases like SimpleDB, Azure Tables, Cassandra, CouchDB, MongoDB, etc. I'm building an app and would like my customers to be able to use whichever NoSQL store they want.
In a more relational scenario I'd use LINQ over NHibernate or Entity Framework, but I haven't found an equivalent framework for NoSQL databases. All I've found are database-specific APIs, even though there seems to be significant commonality. Does one exist? Preferably one with LINQ.
No, these things are too different and too specific (at least right now). If you want something really simple, like just a wrapper around an object that is only accessed by ID, then you may have some hope. In fact, if you look at NoRM, it may be possible to adapt it to various providers.
However, outside of a small core set of features, these "NoSQL" databases are quite different in many regards. I mean, how do you implement the various map/reduce functions agnostically? How do you implement atomic operations when they support different atomic operations?
Either way, we're way too early in the NoSQL life-cycle to have an agnostic framework for all of this. Azure basically dropped their NoSQL offering in favor of "hosted SQL server". MongoDB is maybe 20 months old, CouchDB is still on version 0.11.x, SimpleDB is less than 24 months old, Cassandra is on version 0.6.2 and has maybe been in regular use for a couple of years.
We're just not there yet.
A common query language (called UnQL) is being developed: http://www.unqlspec.org/display/UnQL/Home
There are LINQ providers for MongoDB, but I don't think there is a generic .NET LINQ provider for 'all' NoSQL DBs.
Some people have contemplated about a generic nosql query language: http://nosql.mypopescu.com/post/731261002/a-common-nosql-query-language
If you only have basic persistence requirements, I maintain a common caching API with providers for Memcached, Redis, in-memory and file-system caching.
It only supports Redis, but I also have a C# Redis client with a very familiar C# API. It natively supports persisting POCO types and exposes all of Redis's advanced server-side data structures as native .NET IList and ICollection data structures, so they can easily be used with existing C# APIs like LINQ, etc.