I've read Google Cloud Platform's article on deploying MongoDB using a sharding scheme, and it is clear that the application will never read from a secondary MongoDB server:
Because the production application never reads data from a secondary server, the application never needs to handle the complexity of stale reads and eventual consistency.
My questions are:
Are secondary servers only useful for fault tolerance, i.e. as a backup in case of a failure in the primary server? Or are there performance benefits to having secondaries within the same shard region?
If so, considering the following:
Compute Engine disks have built-in redundancy to protect data against failures and to ensure data availability through maintenance events
Why are secondary servers needed at all on a fault-tolerant platform like Google Cloud?
Thank you!
To answer the two questions:
Other Benefits of Replica Sets
Replica sets also allow you to perform rolling upgrades of MongoDB, so they are useful when applying updates without downtime.
It is also possible to let some applications (e.g. a reporting application) read from a secondary, which takes some load off the primary. Some details and use cases are available on the MongoDB site - https://docs.mongodb.com/v3.2/core/read-preference/
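For instance, with the MongoDB Java driver a reporting job could opt into secondary reads per collection. A minimal sketch, assuming a three-node replica set; the hosts, database, and collection names are made up:

```java
import com.mongodb.ReadPreference;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class ReportingJob {
    public static void main(String[] args) {
        // Connect to the replica set (illustrative URI).
        try (MongoClient client = MongoClients.create(
                "mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=rs0")) {
            // Route this collection's reads to a secondary when one is available,
            // falling back to the primary otherwise. Results may be slightly stale.
            MongoCollection<Document> orders = client
                    .getDatabase("reporting")
                    .getCollection("orders")
                    .withReadPreference(ReadPreference.secondaryPreferred());
            System.out.println("order count: " + orders.countDocuments());
        }
    }
}
```

The trade-off is exactly the one the quoted article avoids: the reporting application has to tolerate eventual consistency.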
Requirement for Secondary Servers
The Google article states:
Barring a catastrophic outage, the MongoDB primary server should always be in this region
By having multiple members in a replica set you are protecting yourself from this kind of catastrophic outage. If you require very high availability then you want this level of protection.
MongoDB's own database as a service (Atlas) deploys replica set members to different Amazon Web Services Availability Zones to protect against this kind of catastrophic outage.
I'm looking at the options in ActiveMQ Artemis for data recovery if we lose an entire data centre. We have two data centres, one on the east coast and one on the west coast.
From the documentation and forums I've found four options:
Disk based methods:
Block-based replication of the data directory between the sites, running Artemis on one site (using Ceph or DRBD with protocol A). In the event of a disaster (or a failover test), stop Artemis on the dead site and start it on the live site.
The same thing, but with both Artemis servers active, using an ha-policy to designate the master and the slave with a shared store.
Network replication:
Like number 2, but with data replication enabled in Artemis, so Artemis handles the replication.
Mirror broker connections.
Our IT team uses / is familiar with MySQL replication, NFS, and rsync for our other services. We are currently handling JMS with a JBoss 4 server replicated over MySQL.
My reaction from reading the documentation is that high-availability data replication is the way to go, but are there trade-offs I'm not seeing? The only option that mentions DR and cross-site use is the mirror broker connection, but on the surface it looks like a more difficult-to-manage version of the same thing?
Our constraints are that we need high performance on the live cluster (on the order of tens of thousands of messages per second, all small).
We can afford to lose messages (as few as possible preferably) in an emergency fail over. We should not lose messages in a controlled fail over.
We do not want clients in site A connecting to Artemis in site B - we will enable clients on site B in the event of a fail over.
The first thing to note is that the high availability functionality (both shared-store and replication - options #2 & #3) configured via ha-policy is designed for use in a local data center with high speed, low latency network connections. It is not designed for disaster recovery.
The specific problem with network-based data replication in your case is that replication is synchronous, which means there's a very good chance it will negatively impact performance, since every durable message will have to be sent across the country from one data center to the other. Also, if the replicating broker fails then clients will automatically fail over to the backup in the other data center.
Using a solution based on block storage (e.g. Ceph or DRBD) is viable, but it's really an independent concern outside the control of ActiveMQ Artemis.
The mirror broker connection was designed with the disaster recovery use-case in mind. It is asynchronous so it won't have nearly the performance impact of replication, and if the mirroring broker fails clients will not automatically fail-over to the mirror.
I’m looking to build a distributed Access Control system for a microservice platform. I’m considering using Mongodb as my database technology. My system design objectives are as follows:
Policy Enforcement should be distributed - If any given Policy Enforcement Point (PEP) experiences downtime, only the application that the PEP serves should be affected.
Policy Decisions should be distributed - We don't want the whole platform to experience downtime because a central Policy Decision Point (PDP) is experiencing downtime. We only want it to affect the application that it serves.
Policy Administration should be centralized - Creating a centralized policy administration interface provides the ability for any system (including a UI) to understand what rights an individual has, and by establishing a common interface it allows us to more easily audit changes to access across a whole platform.
Policy Information (context) is distributed - We don't get to choose this if we are building a distributed microservice platform. We can centralize the retrieval of additional context by aggregating data that is needed to make access control decisions into a single place, but the data sources are still distributed.
I'm considering building a system like the one shown below. The idea is that Access Policies are administered by a central Policy Admin API. This API manages Policies that are persisted to a mongodb cluster backed by a 3-member replica set. I would like other APIs in the platform to have a dedicated policy-query-api (Policy Decision Point) deployed alongside them to make Access Control decisions pertinent to that API. The idea is that if any one of the policy-query-apis goes down, only the API that it serves is affected.
I want changes to Policies to be governed by the Policy Admin API, and I would like the changes to be replicated across each mongo instance that is used by each of the policy-query-apis. I don't want the mongo replicas for each policy-query-api to affect writes to the primary.
I also don't need immediate data consistency (up to 5 seconds of replication lag is acceptable), but I would like the data replication to be handled at the database layer if possible. The technology is already built to handle this and I don't want to reinvent the wheel at the application layer.
I've looked at the documentation on Replica Set Members and I've pretty thoroughly reviewed the documentation on Replica Sets in Mongo. It seems like having a Hidden Member or Delayed Member would be a good fit for my use case. Do you agree? Also, I'm concerned about the 50-member replica set limit. Since each one of these replicas would serve an API in my platform, if the platform grows beyond 50 microservices (which is quite likely), how would I manage replication like this?
Just so that I understand, you are asking about:
one standalone node per application (your picture suggests a standalone, but you are asking about the 50-member replica set limit), with data mirrored to the standalone from the master replica set
the application only querying its local standalone
MongoDB provides read preference nearest for the use case of reading data from local nodes. Importantly the nearest read preference still provides availability if your local node is unavailable - the next closest (roughly) node will be used in this case. Your proposed architecture would take the application down every time its local database node needs to be restarted for version upgrades.
You may also look into tag sets.
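As a rough sketch with the Java driver (the hosts, tag names, and database are illustrative, not taken from your design), a policy-query-api could combine the nearest read preference with a tag set list so it prefers members tagged for its own region but can still fall back to any other member:

```java
import com.mongodb.ReadPreference;
import com.mongodb.Tag;
import com.mongodb.TagSet;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;

import java.util.List;

public class PolicyQueryClient {
    public static void main(String[] args) {
        // Prefer the lowest-latency member tagged for this service's region;
        // the trailing empty tag set matches any member, so reads stay available
        // even if the local node is down for an upgrade.
        ReadPreference nearestLocal = ReadPreference.nearest(List.of(
                new TagSet(List.of(new Tag("region", "us-east"))),
                new TagSet()));

        try (MongoClient client = MongoClients.create(
                "mongodb://rs-a:27017,rs-b:27017,rs-c:27017/?replicaSet=policies")) {
            MongoDatabase db = client.getDatabase("access-control")
                    .withReadPreference(nearestLocal);
            System.out.println(db.getCollection("policies").countDocuments());
        }
    }
}
```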
Additionally, MongoDB allows specifying priorities on nodes for election purposes. If you put all of your MongoDB nodes into the same replica set, you can use priorities to ensure that one of the three designated "main" servers becomes primary whenever any of them is available.
Google has this cool tool kubemci - a command-line tool to configure L7 load balancers using multiple Kubernetes clusters - with which you can basically have an HA multi-region Kubernetes setup. Which is kind of cool.
But let's say we have a basic architecture like this:
The front end is implemented as an SPA and uses a JSON API to talk to the backend.
The backend is a set of microservices which use PostgreSQL as the DB storage engine.
So I can create two Kubernetes Clusters on GKE, put both backend and frontend on them (e.g. let's say in London and Belgium) and all looks fine.
Until we think about the database. PostgreSQL is single-master only, so it must be placed in one of the regions only. And if the backend in the London region starts to talk to PostgreSQL in the Belgium region, the performance will really be poor considering the 6 ms+ latency between those regions.
So the whole HA setup kind of doesn't make any sense? Or am I missing something? One option to slightly mitigate the issue would be to have a read-only replica in the "slave" region and direct read-only queries there (is that even possible with PostgreSQL?).
This is a classic architecture scenario that has no easy solution. Making data available in multiple regions is a challenging problem that major companies spend a lot of time and money to solve.
PostgreSQL does not natively support multi-master writes. Your idea of a replica located in the other region, with logic in your app to read and write to the correct database, would work. This will give you fast local reads, but slower writes in one region. It's also more complicated code in your app and more work to handle failover of the master. Bandwidth and costs can also be problems with heavy updates.
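A minimal sketch of that application-side routing with plain JDBC (the URLs and wiring are placeholders): writes always go to the single primary, reads go to the in-region replica and may therefore lag slightly.

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

/**
 * Routes connections by intent: writes to the single primary region,
 * reads to the replica in the caller's own region.
 */
public class RegionRoutingDataSource {
    private final DataSource primary;      // e.g. jdbc:postgresql://pg-london:5432/app
    private final DataSource localReplica; // e.g. jdbc:postgresql://pg-belgium:5432/app

    public RegionRoutingDataSource(DataSource primary, DataSource localReplica) {
        this.primary = primary;
        this.localReplica = localReplica;
    }

    public Connection forWrite() throws SQLException {
        return primary.getConnection();
    }

    public Connection forRead() throws SQLException {
        Connection c = localReplica.getConnection();
        c.setReadOnly(true); // guard against accidental writes on the replica
        return c;
    }
}
```

Failover of the primary still has to be handled outside this class, which is part of the extra work mentioned above.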
Use 3rd-party solutions for multi-master Postgres (like Postgres-BDR by 2nd Quadrant) to offload the work to the database layer. This can get expensive and your application still has to manage data conflicts from two regions overwriting the same data at the same time.
Choose another database that supports multi-regional replication with multi-master writes. Cassandra (or ScyllaDB) is a good choice, or hosted options like Google Spanner, Azure CosmosDB, AWS DynamoDB Global Tables, and others. An interesting option is CockroachDB which supports the PostgreSQL protocol but is a scalable relational database and supports multiple regions.
If none of these options work, you'll have to create your own replication system. Some companies do this with an event-sourced / CQRS architecture where every write is a message sent to a central log, then applied in every location. This is more work but provides the most flexibility. At this point you're also basically building your own database replication system.
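As a toy illustration of that pattern (purely hypothetical; in practice the central log would be something like Kafka, not an in-memory list), every region appends writes as events to a single ordered log and converges by replaying it against its local store:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class CentralLogSketch {
    // A write is captured as an immutable event instead of a direct UPDATE.
    record WriteEvent(String key, String value) {}

    // Stand-in for the central, ordered log shared by all regions.
    static final List<WriteEvent> log = new CopyOnWriteArrayList<>();

    // Each region applies the log, in order, to its own local store.
    static Map<String, String> replayInto(Map<String, String> localStore) {
        for (WriteEvent e : log) {
            localStore.put(e.key(), e.value());
        }
        return localStore;
    }

    public static void main(String[] args) {
        // A write performed in any region is appended to the shared log...
        log.add(new WriteEvent("user:42:email", "a@example.com"));
        log.add(new WriteEvent("user:42:email", "b@example.com"));

        // ...and every region converges by replaying the same ordered events.
        Map<String, String> london = replayInto(new ConcurrentHashMap<>());
        Map<String, String> belgium = replayInto(new ConcurrentHashMap<>());
        System.out.println(london.equals(belgium)); // true
    }
}
```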
If you have multi-cluster ingress set up on two clusters in different regions, then the multi-cluster ingress will only send traffic to the region closest to the user.
If the closest region is down, this is when traffic will be routed to the cluster in the other region.
So using the example you have provided, if traffic is being sent to the backend and the user is closer to London, then that user's traffic will always be sent to London as long as the region is up and running.
As for latency, you will have to accept it in this case, as you cannot create a read replica within another region.
The benefit of this functionality (multi-cluster ingress) is that if one region goes down, then you have another region to route the traffic to.
We are evaluating different alternatives for multi-tenancy in our platform. We think that one database per customer is the way to go as data structure and requirements are completely different from one customer to another, and we want to keep them as isolated as possible.
However we are facing the question of how to manage the connection to multiple databases. We don't want to have one app instance per customer. Instead we want to have a pool of app instances handling requests for all our customers and use the correct database depending on the customer.
Our concern is whether keeping connections open to many (maybe thousands of) databases will cause a performance issue. We are mostly worried about memory usage, so we are wondering what the overhead is on the client side when opening a connection to the MongoDB server.
We are also thinking about moving the database access to a separate service, which would be responsible for handling the database connections for all customers. In that case, is there an existing tool that allows that kind of "multiplexing" of MongoDB databases?
Some additional notes:
We discarded sharding. It won't fit our needs. We need different databases.
Databases will be on different servers with reserved resources. This means each database runs its own mongod process and we need different connections.
We use Java driver.
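A minimal sketch of that setup with the Java driver (all names are illustrative): one MongoClient, and therefore one connection pool, per backing server, cached and reused, with the customer's database selected per request.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TenantDatabaseRegistry {
    // One MongoClient (and therefore one connection pool) per backing server.
    private final Map<String, MongoClient> clientsByServer = new ConcurrentHashMap<>();

    // customerId -> connection string of the server hosting that customer's database.
    private final Map<String, String> serverByCustomer;

    public TenantDatabaseRegistry(Map<String, String> serverByCustomer) {
        this.serverByCustomer = serverByCustomer;
    }

    public MongoDatabase databaseFor(String customerId) {
        String uri = serverByCustomer.get(customerId);
        if (uri == null) {
            throw new IllegalArgumentException("Unknown customer: " + customerId);
        }
        // Reuse the pooled client for that server; getting a database handle is cheap.
        MongoClient client = clientsByServer.computeIfAbsent(uri, MongoClients::create);
        return client.getDatabase("customer_" + customerId);
    }
}
```

Database handles themselves are lightweight; the memory cost sits with the MongoClient instances and their pools, so it scales with the number of distinct servers rather than the number of databases.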
I'm working on a project where a Postgresql database needs to be shared across several physical locations. Each location has limited connectivity, and may only have access to the outside world once or twice a day. So the database has to be available locally at each location, but must also synchronize with the master database when possible.
I am not yet familiar with replication or clustering. Are these good solutions? Or is there a better way of doing it? I would appreciate some advice on this. :)
NOTE: clashing of primary keys from different locations would not be an issue, this has been taken care of.
If the remote locations require read-only access to the data, you can set up asynchronous replication fairly easily using log shipping, which is a built-in feature of PostgreSQL. In this configuration, the master server drops WAL (write-ahead log) files to a shared location where the remote servers can periodically connect and read the logs to bring themselves up to date.
If all servers are performing writes independently, what you're looking for is asynchronous multi-master replication. The Postgres docs mention Bucardo and rubyrep as options for accomplishing this. According to the docs, both are limited to two-master replication (or master to multiple slaves), but Bucardo supposedly has true multi-master replication planned for version 5.0, and rubyrep mentions a method for keeping multiple servers synchronized.
(I have servers using PostgreSQL's log shipping and streaming replication features, but I have no direct experience with Bucardo or rubyrep.)