Restore an etcd cluster in a Helm Chart - kubernetes

I want to deploy an etcd cluster using a Helm chart, with the possibility of restoring it in case of a cluster-level failure.
https://github.com/helm/charts/tree/master/incubator/etcd
Deployment using this Helm chart works fine - I can use it successfully as a storage backend for my application. The problem is that I can't restore it from a snapshot. Every time I spin up a new cluster, exec into it and try to restore the snapshot (copied into the pod), the data is not restored. In other words, I see that the data folder is re-created inside the pod, but no data is inserted; it looks like etcd is still using the "default.etcd" data folder, which is mounted as the default PVC, instead of "restored.etcd".
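For reference, this is roughly what I am doing; the pod name, snapshot location and data directory are just examples based on the chart's default mount, so treat it as a sketch rather than my exact commands:

# copy the snapshot into the first member pod (names/paths are examples)
kubectl cp ./snapshot.db etcd-0:/tmp/snapshot.db
kubectl exec -it etcd-0 -- sh

# then, inside the pod:
ETCDCTL_API=3 etcdctl snapshot restore /tmp/snapshot.db \
  --name etcd-0 \
  --initial-cluster etcd-0=http://etcd-0.etcd:2380 \
  --initial-advertise-peer-urls http://etcd-0.etcd:2380 \
  --data-dir /var/run/etcd/restored.etcd
# after a restart the member still comes up against /var/run/etcd/default.etcd,
# not the restored.etcd directory created above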
Has anyone had similar issues restoring etcd Helm chart data on a k8s cluster? Any tips?

Related

How to back up the cluster with ETCDCTL from a regular node instead of a master in kubernetes

There is no clear information about how to make a backup and restore it from a regular node like node01, for instance. I mean:
Operating etcd clusters for Kubernetes shows how to use it, and
ETCD - backup and restore management shows some of the necessary steps.
But what about the cert exam, where you are operating most of the time from a regular node like node01 and the config files are not the same? Can someone elaborate?
Thanks
It is impossible to back up the cluster from a regular node using etcd; etcd only runs on master nodes.
But you can back up your Kubernetes cluster with the command etcdctl backup. Here you can find a complete guide on how to use the etcdctl backup command.
Another way is to make a snapshot of your cluster with the command etcdctl snapshot save.
This command will let you create an incremental backup.
In an incremental backup of etcd, a full snapshot is taken first, then a watch is applied and the logs accumulated over a certain period are persisted to the snapshot store. The restore process restores from the full snapshot, starts an embedded etcd and applies the logged events one by one.
You can find more about the incremental backup function here.
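For reference, a minimal sketch of taking and verifying a snapshot from a master node; the endpoint and certificate paths below assume a kubeadm-style layout and will differ on other clusters:

# full v3 snapshot taken on the master node
ETCDCTL_API=3 etcdctl snapshot save /opt/backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# sanity-check the snapshot file
ETCDCTL_API=3 etcdctl snapshot status /opt/backup/etcd-snapshot.db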

Unable to make Elasticsearch use a persistent volume in a Kubernetes cluster

I want to set up Elasticsearch on a Kubernetes cluster using Helm. I can set up Elasticsearch without persistence, using the Helm chart below.
helm install --name elasticsearch incubator/elasticsearch \
--set master.persistence.enabled=false \
--set data.persistence.enabled=false \
--set image.tag=6.4.2 \
--namespace logging
However, I am not able to use it with persistence. Moreover, I am confused because I am using neither cloud-based storage (AWS, GCE) nor NFS; I am using local VM storage.
I added a disk in my VM environment and formatted it as ext4, and now I am trying to use it as a persistent disk for my Elasticsearch deployment.
I have tried lots of approaches without much success. If you need any further details, I am happy to provide them; I just need a solution that works.
I don't believe this chart will support local storage.
Looking at the volumeClaimTemplate, such as the one in master-statefulset.yaml, shows that it's missing key parameters for a local volume setup (such as path, nodeAffinity, volumeBindingMode) described here. If you are using a cloud deployment, just use a cloud volume claim. If you have deployed the cluster on-prem or just onto your own machine, then you should fork the chart and adjust the volume claims to meet the requirements for local storage.
Either way, in future posts you should include relevant logs. With Kubernetes errors it's helpful to see output from all parts of the stack: control plane logs, object events (like the output of describing the volume claim), Helm logs, Elasticsearch pod logs failing to discover a volume, etc.
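As an illustration only (the names, capacity, mount path and node name are placeholders, not something the chart ships with), a local StorageClass plus PersistentVolume looks roughly like this; the chart's volume claims would then have to reference that storageClassName:

kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-data-pv-0
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/es-data      # the ext4 mount on the VM
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node01          # the node that owns the disk
EOF

If the chart exposes it in its values, pointing the claims at this class may be as simple as something like --set data.persistence.storageClass=local-storage, but check the chart's values.yaml for the exact parameter names.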

Persistence of Configmap in kubernetes

I have a Kubernetes pod (let's call it POD-A) and I want it to use a certain config file to perform some actions using the k8s API. The config file will be YAML or JSON, parsed by the application inside the pod.
The config file is hosted by an application server in the cloud, and the latest version can be pulled based on a trigger. It contains configuration details of all the deployments in the k8s cluster and will be used by POD-A to update deployments through the k8s API.
What I am thinking is to save this config file in a ConfigMap, and every time a new config file is pulled, a new ConfigMap is created by the pod using the k8s API.
What I want to do is update the previous ConfigMap with a certain flag (a key and a value) which will basically help the application know which is the current version of the deployment. So let's say I have a running k8s cluster with multiple pods in it, and a ConfigMap that holds the configuration details for those pods (image version, namespace, etc.) plus a flag marking it as the current deployment; the application inside POD-A will know that by loading the ConfigMap. When a new config file is pulled, a new ConfigMap is created, the current-deployment flag is set to false on the previous ConfigMap and to true on the newly created one. That ConfigMap is then used to update all the pods in the cluster.
I know there are a lot of details but I had to explain them to ask the following questions:
1) Can configmaps be used for this purpose?
2) Can I update configmaps or do I have to rewrite them completely? I am thinking of writing a file in the configmap because that would be much simpler.
3) I know configmaps are stored in etcd, but are they persisted on disk or are they kept in memory?
4) Let's say POD-A goes down will it have any effect on the configmaps? Are they in any way associated with the life cycle of a pod?
5) If the k8s cluster itself goes down, what happens to the configmaps? Since they are in etcd, and if they are persisted, will they be available again?
Note: There is also a limit on the size of configmaps, so I have to keep that in mind. Although I am guessing 1 MB is a fair enough size to save a config file, since it would usually be only a few bytes.
1) I think you should not use it in this way.
2) ConfigMaps are Kubernetes resources. You can update them.
3) They are persisted on disk by etcd, if etcd backups to disk are enabled.
4) No. A pod's lifecycle should not affect ConfigMaps, unless the pod itself mutates (deletes) the ConfigMap.
5) If the cluster itself goes down and etcd is running on that same cluster, etcd will not be available until the cluster comes back up. etcd has an option to persist backups to disk; if this is enabled, then when etcd comes back up it will have restored the values that were in the backup, so the ConfigMaps should be available again once the cluster and etcd are up.
There are multiple ways to consume a ConfigMap in a pod, such as environment variables or mounted files.
If you change a ConfigMap, values consumed as environment variables are not updated at all; only ConfigMaps mounted as files are eventually refreshed by the kubelet, and even then the process running in the pod has to detect that the file changed and take some action.
So I think the system will be too complex.
Instead, trigger a new rollout that kills the old pods and brings up new pods which use the updated ConfigMap.
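A minimal sketch of that approach, assuming a ConfigMap named app-config and a Deployment named pod-a (both names are placeholders):

# recreate/replace the ConfigMap from the newly pulled config file
kubectl create configmap app-config --from-file=config.yaml \
  --dry-run=client -o yaml | kubectl apply -f -

# roll the Deployment so new pods pick up the updated ConfigMap
# (kubectl rollout restart needs kubectl/cluster >= 1.15)
kubectl rollout restart deployment/pod-a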

Kubernetes: Configuration snapshotting

Is there any configuration snapshot mechanism on kubernetes?
The goal is to take a snapshot of all deployments/services/config-maps etc and apply them to a kubernetes cluster.
The steps that should be taken:
Take a configuration snapshot
Delete the cluster
Create a new cluster
Apply the configuration snapshot to the new cluster
New cluster works like the old one
These are the 3 that spring to mind, with kubed being, at least according to their readme, the closest to your stated goals:
Ark
kubed
kube-backup
I run Ark in my cluster, but (to my discredit) I have not yet attempted to do a D.R. drill using it; I only checked that it is, in fact, making config backups.
The state of Kubernetes is stored in etcd, so backing up the etcd data and restoring it would restore the cluster. But this would not back up any information stored in persistent volumes; those need to be handled separately.
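A rough sketch of that path, with placeholder paths and assuming a single-member etcd on a master node (the restore flags should mirror how the member was originally started):

# take a snapshot of the running etcd (see the etcdctl links below for the TLS/cert flags)
ETCDCTL_API=3 etcdctl snapshot save /opt/backup/etcd-snapshot.db

# later, on the new/rebuilt master: restore into a fresh data directory
ETCDCTL_API=3 etcdctl snapshot restore /opt/backup/etcd-snapshot.db \
  --data-dir /var/lib/etcd-from-backup

# then point the etcd member (e.g. its static pod manifest) at
# /var/lib/etcd-from-backup and restart it; persistent volume contents
# are NOT included and must be backed up separately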
The etcd backup operator provided by CoreOS is a good option:
https://coreos.com/operators/etcd/docs/latest/user/walkthrough/backup-operator.html
Taking backups with etcdctl:
https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/
https://github.com/coreos/etcd/blob/master/etcdctl/README.md
Heptio Ark can back up config and also volumes:
https://github.com/heptio/ark
If you want a UI-based option, these would be good:
https://github.com/kaptaind/kaptaind
https://github.com/mhausenblas/reshifter

How to update kubernetes cluster

I am working with kube-aws by CoreOS to generate a CloudFormation template and deploy it as part of my stack.
I would like to upgrade my kubernetes cluster to a newer version.
I don't mind creating a new cluster, but what I do mind is recreating all the deployments/services etc...
Is there any way to take the configuration and replace/transfer it to the new cluster? Maybe copy the entire etcd data? Will that help?
Use kubectl get --export=true on all the resources that you want to move into a new cluster and then restore them that way.
kubectl get <pods,services,deployments,whatever> --export=true --all-namespaces=true
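As a rough usage sketch (the resource list and context name are placeholders; note that --export was deprecated and has been removed in newer kubectl versions):

# dump the objects you care about from the old cluster
kubectl get deployments,services,configmaps \
  --export=true --all-namespaces=true -o yaml > cluster-config.yaml

# create any required namespaces in the new cluster, then load the objects
kubectl --context new-cluster apply -f cluster-config.yaml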