Zero-downtime upgrade for HashiCorp Nomad server or client

What is the recommended methodology to upgrade a HashiCorp Nomad server or client on CentOS Linux 7.5 without downtime?
I'm trying to migrate from v0.10.4 to the just-released v0.11.
Is there a way to perform a lazy-upgrade that will defer/wait for existing tasks to end before swapping binaries to ensure zero downtime?

The official Nomad upgrade guide covers everything you need.
Basically, the process consists of the following steps:
1. Replace the old Nomad binary with the new one.
2. Restart the Nomad process.
I've just tested it on one of my staging servers and it worked like a charm. Docker containers were not restarted during the Nomad update.
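For reference, here is a minimal sketch of those two steps on CentOS 7, assuming Nomad runs as a systemd unit named nomad and the binary lives at /usr/local/bin/nomad (the paths, version, and unit name are assumptions to adjust for your setup). Running allocations survive the restart because the client re-attaches to its existing tasks, which matches the observation above that the Docker containers were not restarted.

    # download and unpack the new release (version and paths are assumptions)
    curl -sLo nomad.zip https://releases.hashicorp.com/nomad/0.11.0/nomad_0.11.0_linux_amd64.zip
    unzip nomad.zip
    # swap the binary and restart the service
    sudo mv nomad /usr/local/bin/nomad
    sudo systemctl restart nomad
    # verify the agent rejoined with the new version
    nomad server members
    nomad node status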

Related

kOps: Should I upgrade node AMI images when upgrading a Kubernetes cluster to a new version?

I am using kOps to perform a manual cluster upgrade (from 1.17 to 1.18) as explained at https://kops.sigs.k8s.io/operations/updates_and_upgrades/#upgrading-kubernetes
I've noticed that kOps does not update the AMI image defined at spec.image in the ig nodes spec, which means that after the cluster upgrade the nodes will keep using the same base OS despite the Kubernetes upgrade. But if you install 1.18 from scratch, kOps uses the latest image available for that version.
Should I update the image and configure it the same as the one kOps would use for an installation from scratch?
In 1.18 the AMI has moved from Debian to Ubuntu; should I take any precautions due to the change of operating system?
If you edit the manifests directly and run "kops update" etc., then you also need to update the images yourself. The alternative is to let kOps do it for you by running "kops upgrade cluster": it will update the remote state and set the correct defaults, etc.
Regarding the image change, I don't see any major issues there. What you can do is note the current AMI so you can do a "sort of rollback" by putting that image back and updating the cluster (or by applying the previous version of the manifest, assuming you have S3 revisions on the state store).
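A minimal sketch of that flow, assuming your cluster name is in $CLUSTER_NAME and kops is pointed at your state store via KOPS_STATE_STORE:

    kops upgrade cluster --name "$CLUSTER_NAME" --yes        # bump versions and defaults (including the image) in the state store
    kops update cluster --name "$CLUSTER_NAME" --yes         # push the new configuration to the cloud provider
    kops rolling-update cluster --name "$CLUSTER_NAME" --yes # replace nodes so they actually boot the new image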
There was a bug up until kOps 1.18.2 where Ubuntu images were considered "custom" and therefore not upgraded by kops upgrade. See this bug
As of 1.18.2, you should see upgrades for Ubuntu as well.
There is no particular need to take any precautions when switching from Debian to Ubuntu unless you are using kOps hooks that are Debian-specific. kOps will take care of this change for you.
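If you prefer to inspect or pin the image yourself (per the spec.image question above), something like the following should work, assuming the instance group is named "nodes":

    kops get ig nodes -o yaml | grep image   # see which AMI the instance group currently uses
    kops edit ig nodes                       # set spec.image manually if you want to control it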

MarkLogic Upgrade and steps

Current version 9.0.7.0
Upgrade version 9.0.11.0
When we looked at how to upgrade, we found the link below:
ML Knowledgebase
This document is from April 2018.
So I would like to know whether we have to follow any additional steps, configuration, or process.
Upgrading from Release 9.0-1 or Later
To upgrade from release 9.0-1 or later to the current MarkLogic 10 release (for example, if you are installing a maintenance release of MarkLogic 10), perform the following basic steps:
1. Stop MarkLogic Server (as described in step 1 of Removing MarkLogic).
2. Uninstall the old MarkLogic 9 release (as described in Removing MarkLogic).
   Note: if you want to uninstall MarkLogic 9.0-4 or later, and if the converters package was previously installed with it, you will have to perform a two-step uninstall: first uninstall MarkLogic Converters and then uninstall MarkLogic Server. For more detail, see MarkLogic Converters Installation Changes Starting at Release 9.0-4 and Removing MarkLogic.
3. Install the new MarkLogic 10 release (as described in Installing MarkLogic).
   Note: if you want to install MarkLogic 9.0-4 or later, and you plan to use the converters package with it, you will have to perform a two-step installation: first install MarkLogic Server and then install MarkLogic Converters. For more detail, see MarkLogic Converters Installation Changes Starting at Release 9.0-4 and Installing MarkLogic.
4. Start MarkLogic Server (as described in Starting MarkLogic Server).
5. Open the Admin Interface in a browser (http://localhost:8001/).
6. When the Admin Interface prompts you to upgrade the databases and the configuration files, click the button to confirm the upgrade.
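For the 9.0-7 to 9.0-11 maintenance upgrade asked about, the same sequence on a Linux host boils down to roughly the following sketch. The RPM file name and the stock package/service name MarkLogic are assumptions, and the converters package (if installed) must be removed first as noted under step 2.

    sudo service MarkLogic stop                # stop the server
    sudo rpm -e MarkLogic                      # uninstall the old release (the data directory, typically /var/opt/MarkLogic, is not removed)
    sudo rpm -i MarkLogic-9.0-11.x86_64.rpm    # install the new release (file name is an assumption)
    sudo service MarkLogic start               # start the server
    # then open http://localhost:8001/ and confirm the database/configuration upgrade when prompted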
If you are upgrading a cluster to a new release, see Upgrading a Cluster to a New Maintenance Release of MarkLogic Server in the Scalability, Availability, and Failover Guide. The Security database and the Schemas database must be on the same host, and that host should be the first host you upgrade when upgrading a cluster.
If you are upgrading two clusters that make use of database replication to replicate the Security database on the master cluster, then you must enter the following to manually upgrade the Security database configuration files on the machine that hosts the replica Security database:
http://host:8001/security-upgrade-go.xqy?force=true
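That URL can be opened in a browser or scripted; here is a hedged example with curl, where the host name and credentials are placeholders and --anyauth lets curl negotiate the digest authentication the admin port normally uses:

    curl --anyauth -u admin:password "http://replica-host:8001/security-upgrade-go.xqy?force=true"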

Install kubernetes on debian stretch server without systemd

I am trying to install Kubernetes on a Debian 9 (stretch) server, which is in the cloud and therefore can't do virtualization. And it doesn't have systemd. Also, I'm aiming for a really minimal configuration, not a big cluster.
I've found Minikube, https://docs.gitlab.com/charts/development/minikube/index.html which is supposed to run without virtualization using docker, but it requires systemd, as mentioned here https://github.com/kubernetes/minikube/issues/2704 (and yes I get the related error message).
I also found k3s, https://github.com/rancher/k3s which can run either on systemd or openrc, but when I install openrc using https://wiki.debian.org/OpenRC I don't have the "net" service it depends on.
Then I found microk8s, https://microk8s.io/ which needs systemd simply because snapd needs systemd.
Is there some other alternative or solution to the problems mentioned above? Or has Poettering already bribed everyone?
Since you are well off the beaten path, you can probably just run things by hand with k3s. It's a single executable AFAIK. See https://github.com/rancher/k3s#manual-download as a simple starting point. You will eventually want some kind of service monitor to restart things if they crash; if not systemd, then perhaps Upstart (which is not packaged for Deb9) or runit (which itself usually runs under supervision).
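A rough sketch of that manual approach, assuming you drop the single binary into /usr/local/bin and (optionally) supervise it with runit's /etc/service layout on Debian; the version tag is a placeholder to replace with a real release from the page linked above:

    # fetch and run the single k3s binary by hand
    curl -Lo /usr/local/bin/k3s "https://github.com/rancher/k3s/releases/download/${K3S_VERSION}/k3s"
    chmod +x /usr/local/bin/k3s
    /usr/local/bin/k3s server        # runs server + agent in the foreground, no systemd needed

    # optional: a runit service directory so it gets restarted on crash
    mkdir -p /etc/sv/k3s
    printf '#!/bin/sh\nexec /usr/local/bin/k3s server\n' > /etc/sv/k3s/run
    chmod +x /etc/sv/k3s/run
    ln -s /etc/sv/k3s /etc/service/k3s   # Debian's runit package scans /etc/service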

Upgrade path for Service Fabric from 5.7.198 to 6.0

Recently we started getting a message on the Azure portal that the SF version on the cluster we use (5.7.198) will become unsupported, which I interpret as meaning we need to upgrade to 6.0.
Has anyone done such an upgrade on a prod system with real customers and data that should be kept safe?
Is there an upgrade path we should follow (i.e. going through intermediate versions)?
Any issues that I should expect?
Thanks!

Kubernetes 1.2 baremetal production

Is it recommended to deploy Kubernetes 1.2 on a bare-metal Ubuntu/RedHat production cluster? If so, what are the recommended SDN tool (flanneld or OvS), Docker version, and etcd version to use?
Here is the getting started guide for Ubuntu. It hasn't been updated since Kubernetes v1.1.8, but it should still be applicable for v1.2.4. That getting started guide uses flannel, but you can also use Calico (Guide). The list of Kubernetes getting started guides might be a good place to start.
The Docker version needs to be 1.2+.
You can find the flannel/etcd versions in the download-release.sh script.
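For completeness, here are quick ways to check both points above; the script path is an assumption based on where the Ubuntu guide keeps its deployment scripts in the release tarball:

    docker version                                        # confirm the daemon meets the minimum version
    grep -i version cluster/ubuntu/download-release.sh    # shows the flannel/etcd versions the script will pull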