MongoDB Non-Docker and Docker Nodes

I have a 5-node MongoDB cluster installed non-Dockerized. I want to start adding nodes to this cluster, but I want the new nodes to run Dockerized MongoDB (i.e. the end result is to migrate the Dockerized nodes into the replica set and decommission the non-Dockerized nodes).
When I do this, the added nodes get stuck in the STARTUP state, so my understanding is that the configuration is not able to sync up.
Is there something I need to do to prepare the cluster for the new nodes, or are there logs I can delve into to find out why they are not moving to STARTUP2?

The data directory was not large enough, so the configuration was unable to sync. As soon as I grew the data directory, all was well.
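For reference, here is a minimal sketch (the host name is a placeholder) of how a new member can be added and its state inspected from the mongo shell on the primary; the lastHeartbeatMessage field often explains why a member is stuck in STARTUP:

// Add the new Dockerized node to the existing replica set
rs.add("docker-node-6:27017");

// Print each member's state and any heartbeat message explaining
// why it has not progressed from STARTUP to STARTUP2
rs.status().members.forEach(function (m) {
  print(m.name + " -> " + m.stateStr + " " + (m.lastHeartbeatMessage || ""));
});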

ProxySQL vs MaxScale on Kubernetes

I'm looking to set up a write proxy for our MariaDB database on Kubernetes. The problem we are currently having is that we only have one write master in our 3-master Galera cluster setup. So even though our pods replicate properly, if our first node goes down, the other two masters end up failing because they cannot be written to.
I saw that using either ProxySQL or MaxScale for write proxying is a possible option, but I'm not sure if I'm reading their uses correctly. Do I have the right idea in looking to deploy either of these two applications/services on Kubernetes to fix my problem? Would I be able to write to any of the masters in the cluster?
MaxScale will handle selecting which server to write to as long as you use the readwritesplit router and the galeramon monitor.
Here's an example configuration for MaxScale that does load balancing of reads but sends writes to one node:
[maxscale]
threads=auto

# Server definitions for the three Galera nodes
[node1]
type=server
address=node1-address
port=3306

[node2]
type=server
address=node2-address
port=3306

[node3]
type=server
address=node3-address
port=3306

# The galeramon monitor tracks cluster membership and chooses the write node
[Galera-Cluster]
type=monitor
module=galeramon
servers=node1,node2,node3
user=my-user
password=my-password

# readwritesplit sends writes to the chosen node and load-balances reads
[RW-Split-Router]
type=service
router=readwritesplit
cluster=Galera-Cluster
user=my-user
password=my-password

# The listener that client applications connect to
[RW-Split-Listener]
type=listener
service=RW-Split-Router
protocol=mariadbclient
port=4006
The reason writes are only done on one node at a time is that writing to multiple Galera nodes won't improve write performance, and it causes conflicts when transactions are committed (which applications rarely seem to handle).

Redis node MOVED exception

Redis in our production environment runs in cluster mode with 6 nodes: 3 master nodes and 3 slave nodes. When nodes are switched over due to network or other issues, the Redis client cannot automatically refresh these nodes and reports the exception
MOVED 5649 192.168.1.1:6379
The vertx-redis-client version I use is 4.2.1
My configuration looks like this:
RedisOptions options = new RedisOptions();
// Cluster mode, sending all commands to masters and never reading from replicas
options.setType(RedisClientType.CLUSTER)
       .setRole(RedisRole.MASTER)
       .setUseReplicas(RedisReplicas.NEVER);
Every time I encounter this problem I have to restart my application service to recover. Is there any way to make vertx-redis-client refresh the cluster nodes automatically?
Thank you ~

InnoDB Cluster upgradeMetadata on a broken cluster

We have a cluster of 3 nodes; 2 of them are offline (missing) and I cannot get them to rejoin the cluster automatically, so only the master is ONLINE.
Usually, you can use the InnoDB Cluster AdminAPI:
var cluster = dba.getCluster();
but I cannot use the cluster instance because the metadata is not up to date. I also cannot upgrade the metadata, because the missing members must be online to use dba.upgradeMetadata() (a catch-22).
I tried to dissolve the cluster by using:
var cluster = dba.rebootClusterFromCompleteOutage();
cluster.dissolve({force:true});
but this requires the metadata to be updated as well.
The question is: how do I dissolve the cluster completely, or upgrade the metadata, so that I can use the cluster.* methods?
This "chicken-egg" issue was fixed in MySQL Shell 8.0.20. dba.rebootClusterFromCompleteOutage() is now allowed in such situation:
BUG#30661129 – DBA.UPGRADEMETADATA() AND DBA.REBOOTCLUSTERFROMCOMPLETEOUTAGE() BLOCK EACH OTHER
More info at: https://mysqlserverteam.com/mysql-shell-adminapi-whats-new-in-8-0-20/
If each node in your cluster has been upgraded to the latest version of MySQL, the cluster isn't fully operational, and you need to update your metadata for mysqlsh, you'll need to use an older version of mysqlsh (for example, from https://downloads.mysql.com/archives/shell/) to get the cluster back up and running. Once it is up and running, you can run dba.upgradeMetadata() on the R/W node. Make sure you also update all of your routers, or they will lose their connection.
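A minimal sketch of that recovery sequence in MySQL Shell's JavaScript mode (assuming MySQL Shell 8.0.20 or newer; the host name and account are placeholders):

// Connect to the only surviving (R/W) member
shell.connect('clusteradmin@node1:3306');

// Rebuild the cluster from the single remaining member
var cluster = dba.rebootClusterFromCompleteOutage();

// With a usable cluster again, upgrade the metadata schema
dba.upgradeMetadata();

// Verify the state, then rejoin or remove the missing members as needed
cluster.status();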

Can MongoDB report failover status?

I have a 3-node replica set in MongoDB. Is there any way to configure it to automatically run a shell script when a failover happens or one of the nodes goes down?
I believe this is not the role of the database itself, but maybe there are MongoDB plugins/tools made for this purpose, like Sentinel for Redis or MHA Manager for MySQL, which report which node failed, which node became the new master, and so on.
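For illustration only, a minimal sketch of one possible approach (not a built-in MongoDB feature): a script run through the mongo shell, e.g. from cron, that prints the current primary so an external wrapper can compare the output between runs and trigger a notification when it changes:

// Print the current primary of the replica set (or NO PRIMARY during an election)
var primary = rs.status().members
  .filter(function (m) { return m.stateStr === "PRIMARY"; })
  .map(function (m) { return m.name; });
print(primary.length ? primary[0] : "NO PRIMARY");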

MongoDB data replication in Kubernetes

I've been configuring pods in Kubernetes to each hold a mongodb and a golang image, with a service to load-balance. The major issue I am facing is data replication between the databases. Replication controllers/ReplicaSets do not seem to do what the name implies; they create blank-slate copies rather than replicas of existing/currently running pods. I cannot seem to find any examples or clear answers on how Kubernetes addresses this, or whether it even does.
For example, data insertions sent by the Go program will automatically be load-balanced by the service to one of X replicated instances of mongodb. This poses problems, since the instances will all maintain separate documents without any relation to one another once Kubernetes begins to balance connections among the pods. Is there a way to address this in Kubernetes, or does it require a complete rewrite of the Go code to expect data replication among numerous available databases?
Sorry, I'm relatively new to Kubernetes and couldn't seem to find much information regarding this.
You're right: a Kubernetes ReplicaSet does not replicate the data of another container, it just spins up containers with the same configuration within the same logical unit.
A ReplicaSet (or Deployment, which is the resource you should be using now) will have multiple pods, and it's up to you, the operator, to configure the MongoDB replication itself.
I would recommend reading this example of how to set up a replica set with multiple mongodb containers:
https://medium.com/google-cloud/mongodb-replica-sets-with-kubernetes-d96606bd9474#.e8y706grr
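As a rough illustration only (the pod host names assume a hypothetical StatefulSet and headless service both named mongo in the default namespace), the MongoDB replica set is initiated once from the mongo shell on one pod, and the Go program then connects using the replica set name instead of a load-balancing service:

// Run once from the mongo shell on one of the pods
rs.initiate({
  _id: "rs0",  // replica set name
  members: [
    { _id: 0, host: "mongo-0.mongo.default.svc.cluster.local:27017" },
    { _id: 1, host: "mongo-1.mongo.default.svc.cluster.local:27017" },
    { _id: 2, host: "mongo-2.mongo.default.svc.cluster.local:27017" }
  ]
});

// Check that one member becomes PRIMARY and the others SECONDARY
rs.status();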