Is there an easy way to sync MongoDB data with Elasticsearch if we are using a LoopBack 4 application? (In a Node.js Express application we can easily do this using the mongoosastic plugin.)
Currently in LoopBack a model is bound to a single data source, which is either MongoDB or Elasticsearch. My ultimate goal is for POST, PUT, and DELETE methods to write to MongoDB (and replicate to Elasticsearch) and for GET methods to use Elasticsearch as the data source.
I would highly recommend decoupling the synchronization part from the core routes of your application. You could leverage MongoDB change streams to listen for changes and update your Elasticsearch index. This could be a simple app running on a totally different server which does the synchronization while you carry on serving your requests. This will make the overall process and architecture more durable and fault tolerant. You can read more about this at the following link:
MongoDB to Elasticsearch
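A minimal sketch of such a standalone sync process, assuming the official `mongodb` and `@elastic/elasticsearch` Node.js clients; the database, collection, and index names are placeholders:

```typescript
import { MongoClient } from 'mongodb';
import { Client } from '@elastic/elasticsearch';

async function syncToElasticsearch() {
  const mongo = await MongoClient.connect('mongodb://localhost:27017');
  const es = new Client({ node: 'http://localhost:9200' });
  const products = mongo.db('mydb').collection('products');

  // 'updateLookup' makes update events carry the full document,
  // not just the changed fields.
  const stream = products.watch([], { fullDocument: 'updateLookup' });

  for await (const change of stream) {
    switch (change.operationType) {
      case 'insert':
      case 'update':
      case 'replace': {
        // Mirror the write into the search index, keyed by Mongo's _id.
        const { _id, ...doc } = change.fullDocument!;
        await es.index({ index: 'products', id: _id.toString(), document: doc });
        break;
      }
      case 'delete':
        await es.delete({ index: 'products', id: change.documentKey._id.toString() });
        break;
    }
  }
}

syncToElasticsearch().catch(console.error);
```

Note that change streams require the MongoDB deployment to be a replica set or sharded cluster; a standalone mongod won't emit them.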
Currently thinking of using MongoDB and mongo-express as a cache and interface on edge servers, solving data issues and making the overall architecture a distributed system.
The two containers (from the official MongoDB Docker images) are deployed via Docker Compose and have been tested in our dev environment. MongoDB is the NoSQL database, and mongo-express is the interface for accessing the local cache.
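For context, a minimal sketch of such a Compose file, adapted from the standard mongo + mongo-express example (credentials, ports, and volume names are placeholders, not the actual config):

```yaml
version: "3.8"
services:
  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example   # placeholder; use a secret in production
    volumes:
      - mongo-data:/data/db                 # persist cache data across restarts
  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - "8081:8081"                         # web UI for inspecting the local cache
    environment:
      ME_CONFIG_MONGODB_URL: mongodb://root:example@mongo:27017/
volumes:
  mongo-data:
```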
Question:
Is it safe to deploy to production, considering it is a third-party container?
Are there any known or foreseeable issues from your expertise?
I am planning to implement a MongoDB storage backend for JanusGraph. The reason for picking MongoDB is mainly that all of our existing infrastructure and services use MongoDB, so it would add less net-new maintenance. Where do I get started? Is there a list of APIs that JanusGraph provides that need to be implemented by a custom backend? I couldn't find any documentation.
As of now JanusGraph supports only a limited number of storage backends (Cassandra, HBase, BigTable, BerkeleyDB). You can find more info here: https://docs.janusgraph.org/storage-backend/. If you do want to write your own adapter, the place to start (as of recent versions) is the key-column-value store interfaces in janusgraph-core, chiefly KeyColumnValueStoreManager and KeyColumnValueStore, which every backend implements.
Is it possible, and if so how, to use an Azure MongoDB as the backend for my Meteor application?
I have added the connection string from my database to the MONGO_URL variable with no success. I have found some previous threads on Stack Overflow and here about incompatibility related to oplog errors, but they seem to be using DocumentDB instead of Azure's MongoDB (which I think is newer than a few years ago).
In your example, you're actually using DocumentDB with MongoDB compatibility. You're not using native MongoDB (nor is this native MongoDB as-a-service).
DocumentDB (even with MongoDB compat) does not provide an oplog. And since Meteor has a dependency on reading the oplog, you wouldn't be able to point Meteor at DocumentDB.
In your case, you'd need to either run native MongoDB on your own (e.g. in VMs, as a replica set so the oplog exists, pointing Meteor's MONGO_URL and MONGO_OPLOG_URL at it) or take advantage of a 3rd-party MongoDB hosting solution which provides MongoDB support within the same region as your app. (OK, yes, you can run your app in a different region, but you'd see latency along with data egress charges.)
I have created a sharded environment and I am using two mongos. Is there a way I can load balance between the two mongos? Presently I found the Mongo client uses only one of the two. Or do I have to write my own load balancer?
The recommendation would be to run a mongos per application server rather than implementing your own load balancer.
A query may not return the whole result in one batch, in which case the mongos will store some information associated with the cursor. If subsequent requests to iterate using the cursor are not redirected to the same mongos, then you will get errors. A load balancer would need to understand the MongoDB binary wire protocol to guarantee that scenario is handled properly.
See:
http://craiggwilson.com/2013/10/21/load-balanced-mongos/
We had the same question. It looks like the Java Mongo client can do the failover connection by itself.
Please see the answers to this question.
MongoDB load balancing and failover of query routers
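The Node.js driver behaves the same way: list both mongos instances in the connection string's seed list and the driver handles selection and failover itself, so no external load balancer is needed. A minimal sketch, with placeholder host and database names:

```typescript
import { MongoClient } from 'mongodb';

// Both mongos instances go into the seed list.
const uri = 'mongodb://mongos1.example.com:27017,mongos2.example.com:27017/mydb';

async function main() {
  const client = new MongoClient(uri, {
    // Any mongos within 15 ms of the fastest is eligible for selection,
    // which spreads load across both while keeping latency low.
    localThresholdMS: 15,
  });
  await client.connect();
  const count = await client.db('mydb').collection('users').countDocuments();
  console.log(`users: ${count}`);
  await client.close();
}

main().catch(console.error);
```

Because a cursor's subsequent getMore requests are pinned to the mongos that opened it, this also sidesteps the broken-cursor scenario described above.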
I've got a group of servers that currently use both memcached and repcached side by side (listening on different ports). The memcached service is used to store local data that doesn't need to be shared. The repcached instance is used to allow pairs of servers to collaborate.
When I found Couchbase I was really excited because it looks like it would allow me to:
Make some data persistent
Share with more than two nodes
Leave most of my code as-is since it uses the memcached API
So I installed Couchbase but I've run into a problem: it doesn't look like there's a way to set up two clusters on the same server. I'd like one cluster that doesn't share with any other server and a second cluster that does share with other servers.
Yes, I could set up several dedicated servers for Couchbase to create different clusters, but I've got plenty of CPU and RAM to spare on the servers that are currently running memcached + repcached, so I'd prefer to just replace those services with Couchbase.
Is it possible to run two instances of Couchbase on the same host? I realize I'd have to change some ports around. I just haven't seen anyone talking about doing anything like this so I'm thinking the answer is "no"... but I had to ask because it looks like Couchbase would be perfect for my needs.
If this won't work then I'd be interested in any alternative suggestions. For example, one idea I had was using Memcached + MemcacheDB to emulate a persistent non-shared Couchbase cluster. However, I don't like the fact that MemcacheDB doesn't support expiring records and I'd rather not have to write a routine to delete millions of records each month (and then wonder if performance will degrade over time).
Any thoughts would be appreciated. :-)
The best solution here is probably to run a single instance of Couchbase and create one memcached bucket and one Couchbase bucket. The memcached bucket won't have persistence and will function exactly like memcached. The other bucket will have persistence and still supports the memcached API. You can create as many buckets as you want in a single Couchbase server.
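If you'd rather script the bucket setup than click through the admin console, buckets can also be created via the cluster REST API on port 8091. A hedged sketch (bucket names, quota, and credentials are placeholders, and on older Couchbase releases the quota parameter may be spelled differently):

```typescript
// Create one non-persistent memcached bucket for local cache data and
// one persistent Couchbase bucket for shared data, via the REST API.
const base = 'http://localhost:8091';
const auth = 'Basic ' + Buffer.from('Administrator:password').toString('base64');

async function createBucket(name: string, bucketType: 'memcached' | 'couchbase') {
  const res = await fetch(`${base}/pools/default/buckets`, {
    method: 'POST',
    headers: { Authorization: auth },
    body: new URLSearchParams({ name, bucketType, ramQuota: '256' }),
  });
  if (!res.ok) throw new Error(`failed to create bucket ${name}: ${res.status}`);
}

async function main() {
  await createBucket('local-cache', 'memcached'); // behaves like plain memcached
  await createBucket('shared-data', 'couchbase'); // persistent, replicated
}

main().catch(console.error);
```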
Your other option is to virtualize and run a Couchbase server in each VM.