High availability feature within Db2 on Cloud

As per the documentation, the high availability feature in Db2 on Cloud offers an additional redundant node within the same data center (availability zone) only. Why can't HA be provided at least across different AZs within the same region?

As Gilbert said, this is due to latency. The nodes are placed in the same datacenter because the HA replication is synchronous. They are kept on different power and networking pods to provide a level of isolation while still keeping them physically close.
For further physical isolation, there is the Disaster Recovery feature, where a node is added in a different datacenter altogether. This replication is asynchronous and the failovers are triggered manually by the user.
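A rough back-of-the-envelope sketch (not DB2 internals; the round-trip figures are illustrative assumptions) of why synchronous replication is so sensitive to distance: every commit pays the full network round trip to the standby, while asynchronous replication does not.

```python
# Toy latency model for synchronous vs. asynchronous replication.
# RTT values are illustrative assumptions, not measurements.

def sync_commit_ms(local_commit_ms, rtt_ms):
    """Synchronous HA: commit returns only after the standby acknowledges."""
    return local_commit_ms + rtt_ms

def async_commit_ms(local_commit_ms):
    """Asynchronous DR: commit returns before the standby applies the change."""
    return local_commit_ms

for label, rtt in [("same datacenter", 0.5), ("cross-AZ", 5.0),
                   ("cross-region", 50.0)]:
    print(f"{label}: sync={sync_commit_ms(1.0, rtt):.1f} ms, "
          f"async={async_commit_ms(1.0):.1f} ms")
```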

Related

Sharding with replication

I have a multi-tenant database with 3 tables (store, products, purchases) across 5 server nodes. Suppose I have 3 stores in my store table and I am going to shard by storeId.
I need all data for all shards (1, 2, 3) available on nodes 1 and 2, but node 3 would contain only the shard for store #1, node 4 only the shard for store #2, and node 5 only the shard for store #3. It is like sharding with 3 replicas.
Is this possible at all? What database engines can be used for this purpose (preferably SQL databases)? Do you have any experience with this?
I have a feeling you have not adequately explained why you are trying this strange topology.
Anyway, I will point out several things relating to MySQL/MariaDB.
A Galera cluster already embodies multiple nodes (minimum of 3), but does not directly support "sharding". You can have multiple Galera clusters, one per "shard".
As with my comment about Galera, other forms of MySQL/MariaDB can have replication between nodes of each shard.
If you are thinking of having a server with all data, but replicating only parts to read-only replicas, there are settings such as replicate_do_db / replicate_ignore_db. I emphasize "read-only" because changes to these pseudo-shards cannot easily be sent back to the Primary server. (However, see "multi-source replication".)
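A minimal sketch of such a filtered replica using PyMySQL; the hostname, credentials, and database name are hypothetical, and the statements use MySQL 5.7 spelling (newer versions use STOP/START REPLICA):

```python
# Sketch: make node 3 a read-only pseudo-shard that replicates only
# store #1's database. Hostname, credentials, and database name are
# placeholders for illustration.
import pymysql

replica = pymysql.connect(host="node3.example.com", user="admin",
                          password="secret")
with replica.cursor() as cur:
    cur.execute("STOP SLAVE SQL_THREAD")
    cur.execute("CHANGE REPLICATION FILTER REPLICATE_DO_DB = (tenant_store1)")
    cur.execute("START SLAVE SQL_THREAD")
    # Keep the pseudo-shard read-only: writes here would not flow back
    # to the Primary.
    cur.execute("SET GLOBAL read_only = ON")
replica.close()
```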
Sharding is used primarily when there is simply too much traffic to handle on a single server. Are you saying that the 3 tenants cannot coexist because of excessive writes? (Excessive reads can be handled by replication.)
A tentative solution:
Have all data on all servers. Use the same Galera cluster for all nodes.
Advantage: When "most" or all of the network is working, all data is quickly replicated bidirectionally.
Potential disadvantage: If half or more of the nodes go down, you have to manually step in to get the cluster going again.
Likely solution for the 'disadvantage': "Weight" the nodes differently. Give a high weight to the 3 nodes in HQ; give a much smaller (but non-zero) weight to each branch node. That way, most of the branches could go offline without losing the system as a whole.
But... I fear that an offline branch node will automatically become readonly.
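A hedged sketch of the weighted-quorum arithmetic behind this idea (node names and weights are assumptions for illustration, loosely corresponding to Galera's pc.weight): a partition stays Primary only while it holds more than half of the total weight.

```python
# Toy weighted-quorum check: 3 heavy HQ nodes, 2 light branch nodes.
weights = {
    "hq1": 3, "hq2": 3, "hq3": 3,   # HQ nodes, weighted high
    "branch1": 1, "branch2": 1,     # branch nodes, small but non-zero
}

def has_quorum(alive):
    """A partition stays Primary while it holds > 50% of total weight."""
    total = sum(weights.values())
    return sum(weights[n] for n in alive) * 2 > total

print(has_quorum({"hq1", "hq2", "hq3"}))         # True:  9 of 11
print(has_quorum({"branch1", "branch2"}))        # False: 2 of 11
print(has_quorum({"hq1", "branch1", "branch2"})) # False: 5 of 11
```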
Another plan:
Switch to NDB. The network is allowed to be fragile. Consistency is maintained by "eventual consistency" instead of the "[virtually] synchronous replication" of Galera+InnoDB.
NDB allows you to immediately write on any node. Then the write is sent to the other nodes. If there is a conflict, one of the values is declared the "winner". You choose the algorithm for determining the winner. An easy-to-understand one is "whichever write was 'first'".
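A toy sketch of that rule (this is not NDB code; the keys and timestamps are made up): given two concurrent writes to the same key, keep the one with the earlier timestamp.

```python
# Toy "first write wins" conflict resolution between two nodes.
from dataclasses import dataclass

@dataclass
class Write:
    node: str
    key: str
    value: str
    ts: float  # wall-clock or logical timestamp

def resolve(a, b):
    """First write wins; the later write is discarded."""
    return a if a.ts <= b.ts else b

w1 = Write("node4", "store:2:price", "9.99", ts=100.0)
w2 = Write("node5", "store:2:price", "10.49", ts=100.3)
print(resolve(w1, w2).value)  # "9.99" -- node4 wrote first
```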

RabbitMQ Best Practices for High Availability on Cloud

I'm planning to deploy RabbitMQ on a Kubernetes Engine cluster. I see there are two kinds of location types: regions and zones.
Could someone help me understand what benefits I can expect from each location type? I believe a multi-zone setup could help enhance network throughput, while a multi-region setup can ensure uninterrupted service even in case of regional failure events. Is this understanding correct? I'm looking for relevant justifications to choose a location type. Please help.
I'm planning to deploy RabbitMQ on a Kubernetes Engine cluster. I see there are two kinds of location types: regions and zones. Could someone help me understand what benefits I can expect from each location type?
A zone (availability zone) is typically a datacenter.
A region is multiple zones located in the same geographical area. When deploying a "cluster" to a region, you typically have a VPC (Virtual Private Cloud) network spanning 3 datacenters, and you spread your components across those zones/datacenters. The idea is that you should be fault tolerant to the failure of a whole datacenter while still having relatively low latency within your system.
A multi-region setup can ensure uninterrupted service even in case of regional failure events. Is this understanding correct? I'm looking for relevant justifications to choose a location type.
Using multiple regions, e.g. in different parts of the world, is typically done to be near the customer, e.g. to provide lower latency. CDN services are distributed to multiple geographical locations for the same reason. When deploying a service to multiple regions, communication between regions is typically done with asynchronous protocols, e.g. message queues, since latency may be too high for synchronous communication.
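A minimal sketch of that asynchronous pattern using the pika client (the hostname, queue name, and payload are hypothetical): a service in one region publishes durable messages instead of making synchronous cross-region calls.

```python
# Sketch: publish a persistent message to a durable queue with pika.
# "rabbitmq.us-east1.example.com" and the payload are placeholders.
import pika

conn = pika.BlockingConnection(
    pika.ConnectionParameters(host="rabbitmq.us-east1.example.com"))
channel = conn.channel()
channel.queue_declare(queue="orders", durable=True)  # survive broker restarts
channel.basic_publish(
    exchange="",
    routing_key="orders",
    body=b'{"order_id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),  # persistent message
)
conn.close()
```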

Is Google Cloud SQL high availability really improving reliability?

I want to create a Google Cloud SQL instance but I am not sure whether to choose high availability or not.
From what I understand, the failover switch can take a few minutes; it is not instantaneous, and the cost is roughly 2x the cost of a regular instance.
The failover is triggered only in case of a zone outage, not in case of DB issues. Since the monthly uptime SLA is at least 99.95%, that allows an outage of at most about 21 minutes per month. A failover can take up to 5 minutes, and we can suppose the 21 minutes of downtime do not happen in a single event, so is there a real need to subscribe to high availability?
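For reference, the downtime-budget arithmetic in the question works out as follows (a 30-day month is assumed):

```python
# 99.95% monthly uptime leaves a budget of roughly 21-22 minutes of
# downtime per month (assuming a 30-day month).
sla = 0.9995
minutes_per_month = 30 * 24 * 60            # 43,200 minutes
budget = (1 - sla) * minutes_per_month      # 0.05% of the month
print(f"{budget:.1f} minutes/month")        # 21.6 minutes/month
```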
A full zone outage is probably quite rare, so if you don't care about it, an HA instance might indeed not be needed.
One advantage of HA is that failover can be faster than restart. We've experienced cases when the primary instance gets "stuck" and a restart would take up to 30 minutes (GCP ticket). In such cases it's faster to failover to an HA instance.
(Before October 2019, HA failover instances could also be used for read queries, and thus avoid the need for an additional read replica. With the change from binlog-based replication to disk-based replication this is not the case anymore.)
HA Failover is not just for a full zone outage. It kicks in whenever the primary instance stops responding for more than a minute.
The fact that it is quicker than a restart, more reliable than a restart, and automatic means it keeps your outages much shorter when mysql crashes.
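If you do opt for HA, one way to enable it programmatically is through the Cloud SQL Admin API. The sketch below uses the Google API Python client with placeholder project and instance names; treat it as an assumption-laden outline rather than a verified recipe.

```python
# Hedged sketch: switch an existing Cloud SQL instance to high availability
# by setting availabilityType to REGIONAL via the Admin API.
# "my-project" and "my-instance" are placeholders; credentials come from
# Application Default Credentials.
from googleapiclient import discovery

service = discovery.build("sqladmin", "v1beta4")
service.instances().patch(
    project="my-project",
    instance="my-instance",
    body={"settings": {"availabilityType": "REGIONAL"}},
).execute()
```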
Also, don't you need HA for the SLA to apply? Without HA you're not multi-zone, and therefore you can't meet the definition of "Downtime":
"Downtime" means (ii) with respect to Cloud SQL Second Generation for
MySQL, Cloud SQL for SQL Server, and Cloud SQL for PostgreSQL: all
connection requests to a Multi-zone Instance fail.
https://cloud.google.com/sql/sla

Single Kubernetes/OpenShift cluster/instance across datacenters?

With the understanding that Ubernetes is designed to fully solve this problem, is it currently possible (not necessarily recommended) to span a single K8s/OpenShift cluster across multiple internal corporate datacenters?
Additionally, assume that latency between data centers is relatively low and that infrastructure across the corporate data centers is relatively consistent.
Example: given 3 corporate DCs, deploy 1..* masters at each datacenter (as a single cluster) and have 1..* nodes at each DC, with pods/RCs/services/... being spun up across all 3 DCs.
Has someone implemented something like this as a stopgap solution before Ubernetes lands? If so, how has it worked, and what considerations should be taken into account when running like this?
is it currently possible (not necessarily recommended) to span a single K8s/OpenShift cluster across multiple internal corporate datacenters?
Yes, it is currently possible. Nodes are given the address of an apiserver and client credentials and then register themselves into the cluster. Nodes don't know (or care) if the apiserver is local or remote, and the apiserver allows any node to register as long as it has valid credentials, regardless of where the node exists on the network.
Additionally, assume that latency between data centers is relatively low and that infrastructure across the corporate data centers is relatively consistent.
This is important, as many of the settings in Kubernetes assume (either implicitly or explicitly) a high bandwidth, low-latency network between the apiserver and nodes.
Example: given 3 corporate DCs, deploy 1..* masters at each datacenter (as a single cluster) and have 1..* nodes at each DC, with pods/RCs/services/... being spun up across all 3 DCs.
The downside of this approach is that if you have one global cluster you have one global point of failure. Even if you have replicated, HA master components, data corruption can still take your entire cluster offline. And a bad config propagated to all pods in a replication controller can take your entire service offline. A bad node image push can take all of your nodes offline. And so on. This is one of the reasons that we encourage folks to use a cluster per failure domain rather than a single global cluster.

Hosting and scaling MongoDB

I'm looking for a hosting service to host my MongoDB database, such as MongoLab, MongoHQ, Heroku, AWS EBS, etc.
What I need is to find one of these services (or another) that provides auto-scaling of my storage size.
Is there a way (service) to auto-scale mongodb? How?
There are many hosting providers for MongoDB that provide scaling solutions.
Storage size is only one dimension to scaling; there are other resources to monitor like memory, CPU, and I/O throughput.
There are several different approaches you can use to scale your MongoDB storage size:
1) Deploy a MongoDB replica set and scale vertically
The minimum footprint you would need is two data bearing nodes (a primary and a secondary) plus a third node which is either another data node (secondary) or a voting-only node without data (arbiter).
As your storage needs change, you can adjust the storage on your secondaries by temporarily stopping the mongod service and making the O/S changes to adjust storage provisioning for your dbpath. After you adjust each secondary, you would restart mongod and allow that secondary to catch up on replication before moving to the next secondary. Lastly, you would step down the primary and upgrade it in the same way.
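A rough sketch of that last step with PyMongo (the hostname and replica set name are hypothetical): check that the members are healthy, then ask the primary to step down so it can be resized like the secondaries were.

```python
# Sketch: confirm replica set health, then step down the primary.
# "node1.example.com" and "rs0" are made-up names for illustration.
from pymongo import MongoClient
from pymongo.errors import AutoReconnect

client = MongoClient("mongodb://node1.example.com:27017/?replicaSet=rs0")
status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    print(member["name"], member["stateStr"])  # expect PRIMARY / SECONDARY

try:
    # Ask the current primary to step down for 60 seconds; an up-to-date
    # secondary is elected while this node is taken offline for resizing.
    client.admin.command("replSetStepDown", 60)
except AutoReconnect:
    pass  # expected: a stepping-down primary drops client connections
```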
2) Deploy a MongoDB sharded cluster and scale horizontally.
A sharded cluster allows you to partition databases across a number of shards, which are typically backed by replica sets (although technically a shard can be a standalone mongod). In this case you can add (or remove) shards based on your requirements. Changing the number of shards can have a notable performance impact due to chunk migration across shards, so this isn't something you'd do too reactively (i.e. far more likely on a daily or weekly basis rather than hourly).
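A hedged sketch of the horizontal path with PyMongo, run against a mongos router (the hostnames, shard, database, and collection names are made up): add a replica-set-backed shard, then shard a tenant collection on a suitable key.

```python
# Sketch: grow a sharded cluster by one shard and shard a collection.
# All names below are hypothetical placeholders.
from pymongo import MongoClient

mongos = MongoClient("mongodb://mongos.example.com:27017")
mongos.admin.command("addShard", "rs3/node7.example.com:27017")
mongos.admin.command("enableSharding", "tenants")
mongos.admin.command("shardCollection", "tenants.purchases",
                     key={"storeId": 1})
```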
3) Let a hosted database-as-a-service provider take care of the details for you :)
Replica sets and sharding are the basic building blocks, but hosted providers can often take care of the details for you. Armed with this terminology you should be able to approach hosting providers and ask them about available plan options and costing.