I'm learning Kubernetes and just ran into an issue; I'd like to check whether anyone else has come across it:
user#ubuntu:~/rc$ kubectl get rs ### don’t see any replica set
user#ubuntu:~/rc$
user#ubuntu:~/rc$
user#ubuntu:~/rc$ kubectl get pod
NAME READY STATUS RESTARTS AGE
bigwebstuff-673k9 1/1 Running 0 7m
bigwebstuff-cs7i3 1/1 Running 0 7m
bigwebstuff-egbqd 1/1 Running 0 7m
user#ubuntu:~/rc$
user#ubuntu:~/rc$
user#ubuntu:~/rc$ kubectl delete pod bigwebstuff-673k9 bigwebstuff-cs7i3 #### delete pods
pod "bigwebstuff-673k9" deleted
pod "bigwebstuff-cs7i3" deleted
user#ubuntu:~/rc$
user#ubuntu:~/rc$ kubectl get pod #### the deleted pods regenerated
NAME READY STATUS RESTARTS AGE
bigwebstuff-910m9 1/1 Running 0 6s
bigwebstuff-egbqd 1/1 Running 0 8m
bigwebstuff-fksf6 1/1 Running 0 6s
You see the deleted pods are regenerated, though I can’t find the replica set, as if a hidden replica set exists somewhere.
The 3 pods are started from the webrc.yaml file as follows:
user#ubuntu:~/rc$ cat webrc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: bigwebstuff
  labels:
    name: bigwebstuff
spec:
  replicas: 3
  selector:
    run: testweb
  template:
    metadata:
      labels:
        run: testweb
    spec:
      containers:
      - name: podweb
        image: nginx
        ports:
        - containerPort: 80
But it didn’t show up after I used the yaml file to create the pods.
Any idea on how to find the hidden replica set? Or why the pods get regenerated?
A "ReplicaSet" is not the same thing as a "ReplicationController" (although they are similar). The kubectl get rs command lists replica sets, whereas the manifest file in your question creates a replication controller. Instead, use the kubectl get rc command to list replication controllers (or alternatively, change your manifest file to create a ReplicaSet instead of a ReplicationController).
On the difference between ReplicaSets and ReplicationControllers, let me quote the documentation:
Replica Set is the next-generation Replication Controller. The only difference between a Replica Set and a Replication Controller right now is the selector support. Replica Set supports the new set-based selector requirements as described in the labels user guide whereas a Replication Controller only supports equality-based selector requirements.
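For example, to list the controller that keeps recreating the pods and then remove it (a minimal sketch, assuming the bigwebstuff name from the manifest above):
kubectl get rc                   # lists replication controllers; bigwebstuff should show up here
kubectl delete rc bigwebstuff    # deletes the controller together with the pods it manages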
Replica sets and replication controllers are not the same thing. Try the following:
kubectl get rc
And then delete accordingly.
So I wish to limit the resources used by the pods running in each of my namespaces, and therefore want to use resource quotas.
I am following this tutorial.
It works well, but I wish something a little different.
When trying to schedule a pod which will go over the limit of my quota, I am getting a 403 error.
What I would like is for the request to be accepted, with the pod waiting in a Pending state until one of the other pods ends and frees some resources.
Any advice?
Instead of using straight pod definitions (kind: Pod), use a Deployment.
Why?
Pods in Kubernetes are designed as relatively ephemeral, disposable entities:
You'll rarely create individual Pods directly in Kubernetes—even singleton Pods. This is because Pods are designed as relatively ephemeral, disposable entities. When a Pod gets created (directly by you, or indirectly by a controller), the new Pod is scheduled to run on a Node in your cluster. The Pod remains on that node until the Pod finishes execution, the Pod object is deleted, the Pod is evicted for lack of resources, or the node fails.
Kubernetes assumes that for managing pods you should use workload resources instead of creating pods directly:
Pods are generally not created directly and are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.
Here are some examples of workload resources that manage one or more Pods:
Deployment
StatefulSet
DaemonSet
By using a Deployment you will get behaviour very similar to the one you want.
Example below:
Let's suppose that I created a pod quota for a custom namespace, set to "2" as in this example, and that I have two pods running in this namespace:
kubectl get pods -n quota-demo
NAME READY STATUS RESTARTS AGE
quota-demo-1 1/1 Running 0 75s
quota-demo-2 1/1 Running 0 6s
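For reference, such a quota can be written roughly like this (a sketch; the pod-demo name matches the quota referenced in the error message further down):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-demo
spec:
  hard:
    pods: "2"    # at most two pods may exist in this namespace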
Third pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: quota-demo-3
spec:
  containers:
  - name: quota-demo-3
    image: nginx
    ports:
    - containerPort: 80
Now I will try to apply this third pod in this namespace:
kubectl apply -f pod.yaml -n quota-demo
Error from server (Forbidden): error when creating "pod.yaml": pods "quota-demo-3" is forbidden: exceeded quota: pod-demo, requested: pods=1, used: pods=2, limited: pods=2
Not working as we want - the pod is rejected outright instead of waiting in a Pending state.
Now I will change the pod definition into a deployment definition:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quota-demo-3-deployment
  labels:
    app: quota-demo-3
spec:
  selector:
    matchLabels:
      app: quota-demo-3
  template:
    metadata:
      labels:
        app: quota-demo-3
    spec:
      containers:
      - name: quota-demo-3
        image: nginx
        ports:
        - containerPort: 80
I will apply this deployment:
kubectl apply -f deployment-v3.yaml -n quota-demo
deployment.apps/quota-demo-3-deployment created
The Deployment is created successfully, but there is no new pod. Let's check this deployment:
kubectl get deploy -n quota-demo
NAME READY UP-TO-DATE AVAILABLE AGE
quota-demo-3-deployment 0/1 0 0 12s
We can see that the pod quota is working: the Deployment is monitoring the resources and waiting for the possibility to create a new pod.
Let's now delete one of the pods and check the Deployment again:
kubectl delete pod quota-demo-2 -n quota-demo
pod "quota-demo-2" deleted
kubectl get deploy -n quota-demo
NAME READY UP-TO-DATE AVAILABLE AGE
quota-demo-3-deployment 1/1 1 1 2m50s
The pod from the Deployment is created automatically after the deletion of the other pod:
kubectl get pods -n quota-demo
NAME READY STATUS RESTARTS AGE
quota-demo-1 1/1 Running 0 5m51s
quota-demo-3-deployment-7fd6ddcb69-nfmdj 1/1 Running 0 29s
It works the same way for memory and CPU quotas on a namespace - when the resources become free, the Deployment will automatically create new pods.
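For completeness, a compute quota for a namespace looks roughly like this (a sketch; the compute-quota name and the values are placeholders):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
spec:
  hard:
    requests.cpu: "1"       # total CPU requested by all pods in the namespace
    requests.memory: 1Gi    # total memory requested by all pods in the namespace
    limits.cpu: "2"
    limits.memory: 2Gi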
We run the following command in k8s
kubectl delete deployment ${our-deployment-name}
And this seems to delete the deployment called our-deployment-name fine. However we also want to delete the replicasets and pods that belong to 'our-deployment-name'.
Reading the documents it is not clear whether the default behaviour should cascade-delete the replicasets and pods. Does anybody know how to delete the deployment and all related replicasets and pods? Or do I have to manually delete all of those resources as well?
When I delete a deployment I have an orphaned replicaset like this...
dev#jenkins:~$ kubectl describe replicaset.apps/wc-892-74697d58d9
Name: wc-892-74697d58d9
Namespace: default
Selector: app=wc-892,pod-template-hash=74697d58d9
Labels: app=wc-892
pod-template-hash=74697d58d9
Annotations: deployment.kubernetes.io/desired-replicas: 1
deployment.kubernetes.io/max-replicas: 2
deployment.kubernetes.io/revision: 1
Controlled By: Deployment/wc-892
Replicas: 1 current / 1 desired
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=wc-892
pod-template-hash=74697d58d9
Containers:
wc-892:
Image: registry.digitalocean.com/galatea/wastecoordinator-wc-892:1
Port: 8080/TCP
Host Port: 0/TCP
Limits:
memory: 800Mi
Environment: <none>
Mounts: <none>
Volumes: <none>
Priority Class Name: dev-lower-priority
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 11m replicaset-controller Created pod: wc-892-74697d58d9-jtj9t
dev#jenkins:~$
As you can see in the replicaset, Controlled By: Deployment/wc-892, which means deleting the deployment wc-892 should delete the replicaset, which would in turn delete the pods with label app=wc-892.
First get the deployments which you want to delete
kubectl get deployments
and delete the deployment which you want
kubectl delete deployment yourdeploymentname
This will delete the replicaset and pods associated with it.
kubectl delete deployment <deployment> will delete all ReplicaSets associated with the deployment AND the active pods associated with those ReplicaSets.
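If you ever want the opposite behaviour, the cascading policy can be set explicitly (a sketch; on older kubectl versions the flag takes true/false instead of these named values):
kubectl delete deployment <deployment>                    # default: the ReplicaSets and Pods are deleted too
kubectl delete deployment <deployment> --cascade=orphan   # delete only the Deployment, leaving ReplicaSets and Pods behind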
The controller-manager or API Server might be having issues handling the delete request, so I'd advise looking at those logs to verify.
Note, it's possible the older replicasets are attached to something else in the namespace. Try listing them and looking at the metadata, using kubectl describe rs <rs> or kubectl get rs -o yaml.
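A quick way to check whether a ReplicaSet still has an owner is to print its ownerReferences (a sketch, using the ReplicaSet name from the describe output above):
kubectl get rs wc-892-74697d58d9 -o jsonpath='{.metadata.ownerReferences}'
# an empty result means the ReplicaSet is orphaned and can be deleted directly with kubectl delete rs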
I just want to find out if I understood the documentation right:
Suppose I have an nginx server configured with a Deployment, version 1.7.9 with 4 replicas.
apiVersion: apps/v1beta1 # for versions before 1.6.0 use extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Now I update the image to version 1.9.1:
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
With kubectl get pods I see the following:
> kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-2100875782-c4fwg 1/1 Running 0 3s
nginx-2100875782-vp23q 1/1 Running 0 3s
nginx-390780338-bl97b 1/1 Terminating 0 17s
nginx-390780338-kq4fl 1/1 Running 0 17s
nginx-390780338-rx7sz 1/1 Running 0 17s
nginx-390780338-wx0sf 1/1 Running 0 17s
2 new instances (c4fwg, vp23q) of 1.9.1 have been started, coexisting for a while with 3 instances of the 1.7.9 version.
What happens to the request made to the service at this moment? Do all request go to the old pods until all the new ones are available? Or are the requests load balanced between the new and the old pods?
In the last case, is there a way to modify this behaviour and ensure that all traffic goes to the old versions until all new pods are started?
The answer to "what happens to the request" is that they will be round-robin-ed across all Pods that match the selector within the Service, so yes, they will all receive traffic. I believe kubernetes considers this to be a feature, not a bug.
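If you want to see exactly which pods are behind the Service at any moment during the rollout, you can inspect its endpoints (a sketch; nginx-service is an assumed Service name, since the Service manifest is not shown in the question):
kubectl get endpoints nginx-service -o wide   # lists the pod IPs currently receiving traffic through the Service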
The question about the traffic going to the old Pods can be answered in two ways: perhaps Deployments are not suitable for your style of rolling out new Pods, since that is the way they operate. The other answer is that you can update the Pod selector inside the Service to more accurately describe "this Service is for Pods 1.7.9", which will pin that Service to the "old" pods, and then after even just one of the 1.9.1 Pods has been started and is Ready, you can update the selector to say "this Service is for Pods 1.9.1".
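A minimal sketch of that selector-pinning idea (assuming you add a version label to the Deployment's pod template, which the manifest above does not have yet, and that the Service is called nginx-service):
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
    version: "1.7.9"   # pin traffic to the old pods; change to "1.9.1" once the new pods are Ready
  ports:
  - port: 80
    targetPort: 80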
If you find all this to be too much manual labor, there are a whole bunch of intermediary traffic managers that have more fine-grained control than just using pod selectors, or you can consider a formal rollout product such as Spinnaker that will automate what I just described (presuming, of course, you can get Spinnaker to work; I wish you luck with it)
The following is the file used to create the Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kloud-php7
  namespace: kloud-hosting
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kloud-php7
    spec:
      containers:
      - name: kloud-php7
        image: 192.168.1.1:5000/kloud-php7
      - name: kloud-nginx
        image: 192.168.1.1:5000/kloud-nginx
        ports:
        - containerPort: 80
The Deployment and the Pod worked fine, but after deleting the Deployment and the generated ReplicaSet, I cannot get rid of the spawned Pods permanently. New Pods are created whenever old ones are deleted.
The kubernetes cluster is created with kargo, containing 4 nodes running CentOS 7.3, kubernetes version 1.5.6
Any idea how to solve this problem?
This is working as intended. The Deployment creates (and recreates) a ReplicaSet and the ReplicaSet creates (and recreates!) Pods. You need to delete the Deployment, not the Pods or the ReplicaSet:
kubectl delete deploy -n kloud-hosting kloud-php7
This is because the replica set always recreates the pods to match the count specified in the deployment file (say 3; kube always makes sure that 3 pods are up and running), so here we need to delete the replica set first to get rid of the pods.
kubectl get rs
and delete the replica set; this will in turn delete the pods.
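A sketch of that delete step (the ReplicaSet name is whatever the previous command printed, shown here as a placeholder):
kubectl delete rs <replicaset-name>   # removing the ReplicaSet also removes the pods it manages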
It could be that the DaemonSets need to be deleted.
For example:
$ kubectl get DaemonSets
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
elasticsearch-operator-sysctl 5 5 5 5 5 <none> 6d
$ kubectl delete daemonsets elasticsearch-operator-sysctl
Now running get pods should not list elasticsearch* pods.
I went through both "daemonset doesn't create any pods" and "DaemonSet doesn't create any pods: v1.1.2" before asking this question. Here is my problem.
Kubernetes cluster is running on CoreOS
NAME=CoreOS
ID=coreos
VERSION=1185.3.0
VERSION_ID=1185.3.0
BUILD_ID=2016-11-01-0605
PRETTY_NAME="CoreOS 1185.3.0 (MoreOS)"
ANSI_COLOR="1;32"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://github.com/coreos/bugs/issues"
I followed the https://coreos.com/kubernetes/docs/latest/getting-started.html guide and created 3 etcd, 2 masters and 42 nodes. All applications are running in the cluster without issue.
I got a requirement to set up logging with fluentd-elasticsearch, so I downloaded the yaml files from https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch and deployed the fluentd daemonset.
kubectl create -f fluentd-es-ds.yaml
I could see it got created, but no pods were created.
kubectl --namespace=kube-system get ds -o wide
NAME DESIRED CURRENT NODE-SELECTOR AGE CONTAINER(S) IMAGE(S) SELECTOR
fluentd-es-v1.22 0 0 alpha.kubernetes.io/fluentd-ds-ready=true 4h fluentd-es gcr.io/google_containers/fluentd-elasticsearch:1.22 k8s-app=fluentd-es,kubernetes.io/cluster-service=true,version=v1.22
kubectl --namespace=kube-system describe ds fluentd-es-v1.22
Name: fluentd-es-v1.22
Image(s): gcr.io/google_containers/fluentd-elasticsearch:1.22
Selector: k8s-app=fluentd-es,kubernetes.io/cluster-service=true,version=v1.22
Node-Selector: alpha.kubernetes.io/fluentd-ds-ready=true
Labels: k8s-app=fluentd-es
kubernetes.io/cluster-service=true
version=v1.22
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Misscheduled: 0
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No events.
I verified the details below according to the comments in the above SO questions.
kubectl api-versions
apps/v1alpha1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1beta1
autoscaling/v1
batch/v1
batch/v2alpha1
certificates.k8s.io/v1alpha1
extensions/v1beta1
policy/v1alpha1
rbac.authorization.k8s.io/v1alpha1
storage.k8s.io/v1beta1
v1
I could see the logs below in one kube-controller-manager after a restart.
I0116 20:48:25.367335 1 controllermanager.go:326] Starting extensions/v1beta1 apis
I0116 20:48:25.367368 1 controllermanager.go:328] Starting horizontal pod controller.
I0116 20:48:25.367795 1 controllermanager.go:343] Starting daemon set controller
I0116 20:48:25.367969 1 horizontal.go:127] Starting HPA Controller
I0116 20:48:25.369795 1 controllermanager.go:350] Starting job controller
I0116 20:48:25.370106 1 daemoncontroller.go:236] Starting Daemon Sets controller manager
I0116 20:48:25.371637 1 controllermanager.go:357] Starting deployment controller
I0116 20:48:25.374243 1 controllermanager.go:364] Starting ReplicaSet controller
The other one has the log below.
I0116 23:16:23.033707 1 leaderelection.go:295] lock is held by {master.host.name} and has not yet expired
Am I missing something? I'd appreciate your help in figuring out the issue.
I found the solution after studying https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml
There is a nodeSelector set to alpha.kubernetes.io/fluentd-ds-ready: "true".
But the nodes don't have a label like that. What I did was add the label as below to one node to check whether it works.
kubectl label nodes {node_name} alpha.kubernetes.io/fluentd-ds-ready="true"
After that, I could see the fluentd pod start to run:
kubectl --namespace=kube-system get pods
NAME READY STATUS RESTARTS AGE
fluentd-es-v1.22-x1rid 1/1 Running 0 6m
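To roll the DaemonSet out to every node rather than a single one, the same label can be applied cluster-wide (a sketch using kubectl label's --all flag):
kubectl label nodes --all alpha.kubernetes.io/fluentd-ds-ready="true"   # label every node so the DaemonSet schedules a fluentd pod on each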
Thanks.