InnoDB Cluster upgradeMetadata on a broken cluster - mysql-innodb-cluster

We have a cluster of 3 nodes; 2 of them are offline (MISSING) and I cannot get them to rejoin the cluster automatically. Only the master is ONLINE.
Usually, you would use the AdminAPI:
var cluster = dba.getCluster();
but I cannot use the cluster instance because the metadata is not up to date. At the same time, I cannot upgrade the metadata because dba.upgradeMetadata() requires the missing members to be online (a catch-22).
I tried to dissolve the cluster by using:
var cluster = dba.rebootClusterFromCompleteOutage();
cluster.dissolve({force:true});
but this requires the metadata to be up to date as well.
The question is: how do I dissolve the cluster completely, or upgrade the metadata, so that I can use the cluster.* methods?

This "chicken-egg" issue was fixed in MySQL Shell 8.0.20. dba.rebootClusterFromCompleteOutage() is now allowed in such situation:
BUG#30661129 – DBA.UPGRADEMETADATA() AND DBA.REBOOTCLUSTERFROMCOMPLETEOUTAGE() BLOCK EACH OTHER
More info at: https://mysqlserverteam.com/mysql-shell-adminapi-whats-new-in-8-0-20/
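With 8.0.20 or later, the recovery flow is roughly the following. This is only a sketch run in the MySQL Shell JS console; the account and host names are placeholders, and the shell may prompt you to rejoin the missing members during the reboot:
\connect clusterAdmin@node1:3306
var cluster = dba.rebootClusterFromCompleteOutage(); // now allowed even though the metadata is outdated
dba.upgradeMetadata();                               // upgrade the metadata schema once the cluster is back
cluster.rejoinInstance('clusterAdmin@node2:3306');   // bring back any member that is still MISSING
cluster.status();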

If every node in your cluster has been upgraded to the latest MySQL version, the cluster isn't fully operational, and you need to upgrade your metadata for mysqlsh, you'll need to use an older version of mysqlsh (for example from https://downloads.mysql.com/archives/shell/) to get the cluster back up and running. Once it is up and running, you can run dba.upgradeMetadata() on the R/W node. Make sure you also upgrade all of your routers, or they will lose their connection.
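For example (a sketch only; the paths, shell versions, and account are illustrative): boot the cluster with the older shell first, then run the metadata upgrade with the newer one against the R/W node.
/opt/mysql-shell-8.0.19/bin/mysqlsh --js -e "dba.rebootClusterFromCompleteOutage()" clusterAdmin@rw-node:3306
mysqlsh --js -e "dba.upgradeMetadata()" clusterAdmin@rw-node:3306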

Related

Why does my Tarantool Cartridge retrieve data from router instance sometimes?

I wonder why my Tarantool Cartridge cluster is not working as it should.
I have a Cartridge cluster running on Kubernetes. The Cartridge image is generated with the Cartridge CLI (cartridge pack), and no changes were made to the generated files.
The Kubernetes cluster is deployed via Helm with the following values:
https://gist.github.com/AlexanderBich/eebcf67786c36580b99373508f734f10
Issue:
When I make requests from the pure PHP Tarantool client, for example a SELECT SQL request, it sometimes retrieves the data from the storage instances, but sometimes it unexpectedly responds with data from the router instance instead.
The same goes for INSERT: after I created the same schema on both the storage and router instances and made 4 requests, 2 rows ended up in storage and 2 in the router.
That's weird; from reading the documentation I'm sure it's not the intended behaviour. I'm struggling to find the source of this behaviour and hope for your help.
SQL in Tarantool doesn't work in cluster mode, e.g. with tarantool-cartridge.
P.S. That was the response to my question from the Tarantool community in the Tarantool Telegram chat.

Adding a new Kubernetes node pool to an existing cluster with REGULAR as the release channel

I'm trying to add a new node pool to an existing GKE cluster. It fails with the error below.
Node pool version cannot be set to 1.14.6-gke.1 when releaseChannel REGULAR is set.
Any advice on how I can get around this?
EDIT: I finally managed to create a new pool, but only after my master was auto-upgraded. It looks like this is a limitation for auto-upgraded clusters: a newly created node pool seems to default to the version of the master, and if the master is on a deprecated version and pending auto-upgrade, all one can do is wait.
That version was removed from GKE yesterday: https://cloud.google.com/kubernetes-engine/docs/release-notes#version_updates
The following versions are no longer available for new clusters or upgrades.
1.13.7-gke.24
1.13.9-gke.3
1.13.9-gke.11
1.13.10-gke.0
1.13.10-gke.7
1.14.6-gke.1
1.14.6-gke.2
1.14.6-gke.13
It seems you have enrolled the cluster in the REGULAR release channel, and you cannot currently disable[1] the release channel to do manual upgrades. You need to wait for the auto-upgrade as described in the release notes[2].
To stop using release channels and go back to specifying an exact version, you must recreate the cluster without the --release-channel flag.
[1]-https://cloud.google.com/kubernetes-engine/docs/concepts/release-channels#changing_and_disabling_release_channels
[2]-https://cloud.google.com/kubernetes-engine/docs/release-notes-regular#october_30_2019
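If you do choose to recreate, a sketch of the CLI call (the cluster name, zone, and version are placeholders) is simply a create command without the --release-channel flag:
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --cluster-version <currently-available-version>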
When you're using a release channel, the web Cloud Console does not let you select a version when creating a node pool, but the API/CLI does.
I'm in the same situation as you: the release channel version my master is on was revoked, but I was able to add a new node pool with a previous version set in Terraform.
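The equivalent gcloud call would look roughly like this (the pool name, cluster, zone, and version are placeholders; the version must still be one the control plane accepts):
gcloud container node-pools create pool-1 \
  --cluster my-cluster \
  --zone us-central1-a \
  --node-version <previous-available-version>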

MongoDB Non-Docker and Docker Nodes

I have a 5-node MongoDB cluster installed non-Dockerized. I want to start adding nodes to this cluster, but I want the new nodes to run Dockerized MongoDB (i.e. the end result is to migrate the Dockerized nodes into the replica set and decommission the non-Dockerized nodes).
When I do this, my added nodes get stuck in the STARTUP state, so my understanding is that the configuration is not able to sync.
Is there something I need to do to prepare the cluster for the new nodes, or are there logs I can delve into to find out why a node is not moving to STARTUP2?
The data directory was not large enough, so the configuration was unable to sync. As soon as I grew the data directory, all was well.
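For reference, a minimal sketch of adding a Dockerized member and watching its state from the mongo shell on the primary (the host name and port are placeholders); a healthy member should move from STARTUP through STARTUP2 to SECONDARY once the initial sync has room to run:
rs.add("docker-node-6.example.com:27017")
rs.status().members.forEach(function (m) {
  print(m.name + "  " + m.stateStr); // watch the new member's state change
})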

OrientDB: how to create a database in distributed mode

I am new to OrientDB. I use OrientDB version 2.1.11.
I configured and deployed five nodes on the same machine in distributed mode. I use the console to create a database; the command is (port 2425 is the second node):
create database remote:192.168.12.37:2425/fuwu_test root 1234 plocal graph
Every node created the database "fuwu_test", but the cluster did not set up a synchronization relationship between them.
In Studio I see that each class has only one cluster, not five. I created a class Person, and that class is also not synchronized to the other nodes.
Why doesn't it work, and how do I create a new database in a running cluster? Do I need to restart all the nodes?
Thanks a lot.
There is a known issue with this in the v2.1 and v2.2 releases. The workaround is to create the database before going distributed. Anyway, it will be resolved soon, sorry.
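A rough sketch of that workaround (the paths and database name are illustrative): create the database with the console while the server is not yet running in distributed mode, then start the nodes with dserver.sh; the assumption is that the database is then distributed when the nodes join.
$ ./bin/console.sh
orientdb> create database plocal:../databases/fuwu_test
orientdb> exit
$ ./bin/dserver.sh   # start each node in distributed mode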

What is the recommended way to upgrade a kubernetes cluster as new versions are released?

What is the recommended way to upgrade a kubernetes cluster as new versions are released?
I heard here it may be https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-push.sh. If that is the case, how does kube-push.sh relate to https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/gce/upgrade.sh?
I've also heard here that we should instead create a new cluster, copy/move the pods, replication controllers, and services from the first cluster to the new one, and then turn off the first cluster.
I'm running my cluster on AWS, if that is relevant.
The second script you reference (gce/upgrade.sh) only works if your cluster is running on GCE. There isn't (yet) an equivalent script for AWS, but you could look at the script and follow the steps (or write them into a script) to get the same behavior.
The main difference between upgrade.sh and kube-push.sh is that the former does a replacement upgrade (remove a node, create a new node to replace it) whereas the latter does an "in place" upgrade.
Removing and replacing the master node only works if the persistent data (etcd database, server certificates, authorized bearer tokens, etc.) resides on a persistent disk separate from the boot disk of the master (this is how it is configured by default on GCE). Removing and replacing nodes should be fine on AWS (but keep in mind that any pods not managed by a replication controller won't be restarted).
Doing an in-place upgrade doesn't require any special configuration, but that code path isn't as thoroughly tested as the remove and replace option.
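For a worker node, the remove-and-replace sequence looks roughly like this with a reasonably recent kubectl (the node name is a placeholder, and these exact commands postdate the scripts discussed above):
kubectl drain node-1 --ignore-daemonsets   # evict pods; those under a replication controller get rescheduled
kubectl delete node node-1                 # deregister the old node
# terminate the instance and boot a replacement running the new version
kubectl get nodes                          # confirm the replacement registers and is Ready, then repeat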
You shouldn't need to entirely replace your cluster when upgrading to a new version, unless you are using pre-release versions (e.g. alpha or beta releases) which can sometimes have breaking changes between them.