Is there a way to roll back an edited Kubernetes manifest file? - kubernetes

A Pod with some values was deployed, then I edited it with kubectl edit pod <pod>, and now I want to get back to the previous state (I no longer have the original values, as someone else deployed it some time ago). Is that possible?
And second question.
If someone deployed to a GKE cluster with Helm, is it possible (even though I have access to the cluster and can see everything with kubectl get all) that I don't see those deployments with helm list but do see the Kubernetes pods? Rephrasing it: is it possible someone deployed to the cluster with Helm and I only see the pods, with no Helm releases shown by helm list?
PS: helm and kubernetes work fine with other clusters and with minikube:
helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-13T11:51:44Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.11-gke.14", GitCommit:"56d89863d1033f9668ddd6e1c1aea81cd846ef88", GitTreeState:"clean", BuildDate:"2019-11-07T19:12:22Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"}```

Pods do not have a rollback feature; that is why you should use a Deployment, which provides rollback functionality. It is also good practice in production to keep your YAML manifests under version control for easy rollback and auditing.
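For illustration, a minimal sketch of how that rollback looks when the Pods are managed through a Deployment (the Deployment name is a placeholder):
kubectl rollout history deployment/<name>               # list recorded revisions
kubectl rollout undo deployment/<name>                  # revert to the previous revision
kubectl rollout undo deployment/<name> --to-revision=2  # revert to a specific revision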

Related

Grafana showing k8s pods down for a minute

While using Grafana for monitoring with Prometheus, we saw that sometimes Grafana showed no pods for a service, but when I checked in the cluster, all pods were running without any issue.
This issue is intermittent. Now I have to find out why Grafana is alerting, but I don't know where to start.
Please ask if any more info is needed, and please point me to where I can start investigating.
Other info
This cluster is AWS EKS, using prometheus:v2.22.1. Deployment of Prometheus and the EKS cluster is done with Terraform.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.20-eks-8c49e2", GitCommit:"8c49e2efc3cfbb7788a58025e679787daed22018", GitTreeState:"clean", BuildDate:"2021-10-17T05:13:46Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.22) and server (1.18) exceeds the supported minor version skew of +/-1

kubernetes: Is it possible to install the nginx ingress controller on a v1.10 cluster

I have a v1.10 Kubernetes cluster.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0+coreos.0", GitCommit:"6bb2e725fc2876cd94b3900fc57a1c98ca87a08b", GitTreeState:"clean", BuildDate:"2018-04-02T16:49:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
I would like to install an nginx ingress controller for this cluster.
I followed the instructions: here
But, I keep having errors such as :
$ kubectl apply -f common/ingress-class.yaml
error: unable to recognize "common/ingress-class.yaml": no matches for kind "IngressClass" in version "networking.k8s.io/v1beta1"
I checked, and there is indeed no IngressClass resource for my kubernetes version.
There are more errors as I continue the installation.
My question is :
Is there a document that describes the installation for old kubernetes versions?
NB: I installed my cluster manually (didn't use minikube, kubespray, ...)
Thanks in advance
error: unable to recognize "common/ingress-class.yaml": no matches for kind "IngressClass" in version "networking.k8s.io/v1beta1"
The kind IngressClass in version networking.k8s.io/v1beta1 was introduced much later than your version, in v1.18.
You can find an appropriate nginx version as I describe below, or, alternatively, you can upgrade your cluster to a newer version and then use an up-to-date nginx ingress.
I think you can use old Ingress Nginx versions or old NGINX Ingress Controller versions.
For example NGINX Ingress Controller 1.3.2:
Installing the Ingress Controller
examples
The source code file available for download on that page contains all the necessary config files for the deployment.
Btw, you can also check the NGINX Ingress Controller Helm Chart and install nginx using Helm. For that, I think you will also need to upgrade your cluster.
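As a quick sanity check before picking an ingress controller version, you can ask the API server which API groups and versions it serves (a small sketch; on a v1.10 cluster networking.k8s.io/v1beta1 is not served, while the old extensions/v1beta1 Ingress is):
kubectl api-versions | grep networking.k8s.io   # v1beta1 will not be listed on v1.10, so IngressClass is unavailable
kubectl api-versions | grep extensions          # extensions/v1beta1 is where Ingress lives on v1.10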

Kubectl Unable to describe on HPA

When I try to describe an HPA, the following error is thrown:
kubectl describe hpa go-auth
Error from server (NotFound): the server could not find the requested resource
My kubectl version is :
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.7-gke.7", GitCommit:"b80664a77d3bce5b4701bc881d972b1a702290bf", GitTreeState:"clean", BuildDate:"2019-04-04T03:12:09Z", GoVersion:"go1.10.8b4", Compiler:"gc", Platform:"linux/amd64"}
Beware of kubectl version skew. Running kubectl v1.14 with kube-apiserver v1.12 is not supported.
As per kubectl docs:
You must use a kubectl version that is within one minor version
difference of your cluster. For example, a v1.2 client should work
with v1.1, v1.2, and v1.3 master. Using the latest version of kubectl
helps avoid unforeseen issues.
Give it another try using kubectl v1.12.x and you will probably get rid of this problem. Also, take a look at issue #568 (especially this comment), which addresses the same problem that you have.
If you are wondering on how to manage multiple kubectl versions, I recommend this read: Using different kubectl versions with multiple Kubernetes clusters.
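For illustration, one way to fetch a client that matches the v1.12.x server, using the standard Kubernetes release download path (the exact patch version and install location here are just examples):
curl -LO https://dl.k8s.io/release/v1.12.7/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl-1.12   # keep it alongside your newer kubectl
kubectl-1.12 version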

The kubernetes "AVAILABLE" column indicates "0", but the former steps(in Kubernetes guide) are OK

I need to deploy some Docker images and manage them with Kubernetes.
I followed the tutorial"Interactive Tutorial - Deploying an App"(https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/).
But after typing the command kubectl get deployments, the AVAILABLE column in the result table shows 0 instead of 1, which is confusing me.
Could anyone kindly guide me as to what is going wrong and what I should do?
The OS is Ubuntu 16.04;
The kubectl version command shows the server and client version information correctly.
The Docker image is already tagged (a mysql:5.7 image).
devserver:~$ kubectl version    
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}  
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
devserver:~$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ap-mysql     1         1         1            0           1
hello-node   1         1         1            0           1
I would like an explanation of this phenomenon and how to resolve it. I also need to deploy my image on minikube.
Katacoda uses hosted VMs, so sometimes it may be slow to respond to terminal input.
To verify whether any deployment is present, you may run kubectl get deployments --all-namespaces. To see what is going on with your deployment, you can run kubectl describe deployment DEPLOYMENT_NAME -n NAMESPACE. To inspect a pod, you can do the same: kubectl describe pod POD_NAME -n NAMESPACE.
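For illustration, a minimal sketch using the deployment name from the question (the default namespace is an assumption); the Events section and the pod's container state usually reveal why no replicas become Available, e.g. ImagePullBackOff or CrashLoopBackOff:
kubectl describe deployment ap-mysql -n default
kubectl get pods -n default                  # find the pods the deployment created
kubectl describe pod <pod-name> -n default   # check the Events section at the bottom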

Getting MountVolume.SetUp failed for volume while installing stable RabbitMQ in Kubernetes

I am getting the error below while installing RabbitMQ through helm install.
MountVolume.SetUp failed for volume "config-volume" : couldn't
propagate object cache: timed out waiting for the condition
Below are the details of kubectl version:
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Pl
atform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Pla
tform:"linux/amd64"}
And below is the command I used to install stable rabbitmq.
helm install --name coa-rabbitmq --set rabbitmq.username=#Username#,rabbitmq.password=#Password#,rabbitmq.erlangCookie=#Cookie#,livenessProbe.periodSeconds=120,readinessProbe.periodSeconds=120 stable/rabbitmq
Any help will be appreciated.
Thanks in advance.
This works fine for me. It looks like an issue related to this: in this case it can't mount the ConfigMap volume that holds the rabbitmq config, the config-volume. It may also be the case that something on your nodes is preventing volumes from being mounted (a process, file descriptors, etc.).
You didn't specify where you are running this, but you can try bouncing your node components: kubelet, Docker, and ultimately the node itself. Keep in mind that all the other containers running on that node will be restarted elsewhere in the cluster.
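For illustration, a minimal sketch of bouncing the node components, assuming a systemd-managed kubelet and Docker (the node name is a placeholder):
# run on the affected node
sudo systemctl restart kubelet
sudo systemctl restart docker
# as a last resort, drain the node and reboot it
kubectl drain <node-name> --ignore-daemonsets --delete-local-data
sudo reboot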
Edit:
There was a mismatch between the kubectl client version, the cluster (server) version, and the kubeadm version.