As of Kubernetes 1.2, kube-proxy is now a pod running in the kube-system namespace.
The old init script /etc/init.d/kube-proxy has been removed.
Aside from simply resetting the GCE instance, is there a good way to restart kube-proxy?
I just added an annotation to change the proxy mode, and I need to restart kube-proxy for my change to take effect.
The kube-proxy is run as an addon pod, meaning the Kubelet will automatically restart it if it goes away. This means you can restart the kube-proxy pod by simply deleting it:
$ kubectl delete pod --namespace=kube-system kube-proxy-${NODE_NAME}
Where $NODE_NAME is the node you want to restart the proxy on (this assumes a default configuration; otherwise kubectl get pods --namespace=kube-system should include the list of kube-proxy pods).
If the restarted kube-proxy is missing your annotation change, you may need to update the manifest file, usually found in /etc/kubernetes/manifests on the node.
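For example, a minimal sketch of the delete-and-verify cycle (node name hypothetical, default pod naming assumed):

$ kubectl delete pod --namespace=kube-system kube-proxy-worker-1
# The kubelet re-creates the static pod from its manifest within a few seconds
$ kubectl get pods --namespace=kube-system | grep kube-proxy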
Related
I have a coredns pod running in the kube-system namespace. I need to restart the coredns pod without downtime. I am aware that I can delete the coredns pod using the command below, and a new coredns pod will spin up automatically.
kubectl delete pods -n kube-system -l k8s-app=kube-dns
But that causes downtime, so I am looking for a way to restart this coredns pod without downtime.
Use case: I have set the TTL to 1 hour, i.e., cache 3600. I have an external DNS server, and I forward requests to it using the forward plugin. Whenever I need to pick up recent changes to external DNS entries before the TTL expires, I think I need to restart the coredns pod. Is there any other way to achieve this? If a restart is the only way, how can I do it without downtime? Any help would be appreciated. Thanks in advance!
Normally, the command kubectl get deployment coredns --namespace kube-system --output jsonpath='{.spec.strategy.rollingUpdate.maxUnavailable}' will return 1. That means that for a deployment of 2 pods (the typical coredns setup), pods will be replaced 1 at a time, leaving the other one serving requests. In this case, you can run kubectl rollout restart deployment coredns --namespace kube-system to restart without downtime; there is no need to explicitly delete pods or raise the coredns pod count.
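Putting it together, the check and the restart might look like this (a standard kubeadm-style coredns Deployment is assumed):

$ kubectl get deployment coredns --namespace kube-system \
    --output jsonpath='{.spec.strategy.rollingUpdate.maxUnavailable}'
$ kubectl rollout restart deployment coredns --namespace kube-system
# Wait until the rolling replacement has finished
$ kubectl rollout status deployment coredns --namespace kube-system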
I accidentally deleted the kube-proxy DaemonSet, which should run kube-proxy pods in my cluster, using the command kubectl delete -n kube-system daemonset kube-proxy. What is the best way to restore it?
Kubernetes allows you to reinstall kube-proxy by running the following command, which installs the kube-proxy addon components via the API server.
$ kubeadm init phase addon kube-proxy --kubeconfig ~/.kube/config --apiserver-advertise-address <api-server-address>
This will generate output like:
[addons] Applied essential addon: kube-proxy
Here --apiserver-advertise-address is the IP address the API server will advertise it's listening on; if not set, the default network interface will be used.
Hence kube-proxy will be reinstalled in the cluster by creating a DaemonSet and launching the pods.
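You can then verify that the DaemonSet and its pods are back (the k8s-app=kube-proxy label is assumed from the default kubeadm manifests):

$ kubectl get daemonset kube-proxy --namespace=kube-system
$ kubectl get pods --namespace=kube-system -l k8s-app=kube-proxy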
The kube-proxy DaemonSet was created at cluster creation time, so unless you have a backup to restore it from, you will need to write your own DaemonSet manifest.
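For next time, it is cheap to keep a backup of the live object before risky operations; a one-liner sketch:

# Save the live DaemonSet spec so it can later be restored with kubectl apply -f
$ kubectl get daemonset kube-proxy --namespace=kube-system --output=yaml > kube-proxy-ds.yaml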
k8s version: 1.12.1
I created a pod via the API on a node, and an IP was allocated (through flanneld). When I used the kubectl describe pod command, I could not see the pod IP, and there was no such IP in etcd storage.
Only a few minutes later could the IP be obtained, at which point kubectl get pod showed the STATUS as Running.
Has anyone ever encountered this problem?
As MatthiasSommer mentioned in a comment, the process of creating a pod can take a while.
If the pod stays in ContainerCreating status for a long time, you can check what is stopping it from changing to Running with:
kubectl describe pod <pod_name>
Why might creating a pod take a long time?
Depending on what is included in the manifest, a pod may involve shared namespaces, storage volumes, secrets, resource assignments, ConfigMaps, etc.
kube-apiserver validates and configures data for API objects.
kube-scheduler needs to check and collect resource requirements, constraints, etc., and assign the pod to a node.
kubelet runs on each node and ensures that all containers fulfill the pod specification and are healthy.
kube-proxy also runs on each node and is responsible for pod networking.
As you can see, there are many requests, validations, and syncs, so it takes a while to create a pod that fulfills all requirements.
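If you want to see exactly which step is slow, the pod's events usually tell you; a minimal sketch (pod name hypothetical):

# Events show image pulls, volume mounts, scheduling decisions, etc.
$ kubectl describe pod my-pod | grep -A 10 Events
$ kubectl get events --field-selector involvedObject.name=my-pod --sort-by=.lastTimestamp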
I've been trying to shut down my Kubernetes cluster, but I haven't managed to do it.
When I type
kubectl cluster-info
I can see that my cluster is still running.
I tried running the script
kube-down.sh
but it didn't work.
I deleted all the pods. How can I shut it down?
The tear down section of the official documentation says:
To undo what kubeadm did, you should first drain the node and make sure that the node is empty before shutting it down.
Talking to the master with the appropriate credentials, run:
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
Then, on the node being removed, reset all kubeadm installed state:
kubeadm reset
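For a multi-node cluster, the same sequence can be scripted; a sketch, assuming you can SSH to each node as a sudo-capable user:

# Drain and remove every node, then reset kubeadm state on it
$ for node in $(kubectl get nodes --output name | cut -d/ -f2); do
    kubectl drain "$node" --delete-local-data --force --ignore-daemonsets
    kubectl delete node "$node"
    ssh "$node" sudo kubeadm reset
  done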
You cannot use the kubectl stop command, as it has been deprecated. If you have created pods using a YAML file, I suggest you use
kubectl delete -f <filename>.yml to stop any running pod.
You can also delete service associated with running pods by using the following command:
# Delete pods and services with same names "baz" and "foo"
kubectl delete pod,service baz foo
When using kube-down.sh you have to make sure that all the environment variables which were adjusted for kube-up.sh are also set during the shutdown.
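For instance, if the cluster was brought up with provider-specific settings, a matching teardown might look like this (variable values assumed, run from the Kubernetes release directory):

$ export KUBERNETES_PROVIDER=aws
$ cluster/kube-down.sh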
I have spun up a Kubernetes cluster in AWS using the official "kube-up" mechanism. By default, an addon that monitors the cluster and logs to InfluxDB is created. It has been noted in this post that InfluxDB quickly fills up disk space on nodes, and I am seeing this same issue.
The problem is, when I try to kill the InfluxDB replication controller and service, it "magically" comes back after a time. I do this:
kubectl delete rc --namespace=kube-system monitoring-influx-grafana-v1
kubectl delete service --namespace=kube-system monitoring-influxdb
kubectl delete service --namespace=kube-system monitoring-grafana
Then if I say:
kubectl get pods --namespace=kube-system
I do not see the pods running anymore. However after some amount of time (minutes to hours), the replication controllers, services, and pods are back. I don't know what is restarting them. I would like to kill them permanently.
You probably need to remove the manifest files for influxdb from the /etc/kubernetes/addons/ directory on your "master" host. Many of the kube-up.sh implementations use a service (usually at /etc/kubernetes/kube-master-addons.sh) that runs periodically and makes sure that all the manifests in /etc/kubernetes/addons/ are active.
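A sketch of that cleanup, assuming the default kube-up.sh layout (the exact subdirectory names vary by version):

# On the master host; list the addon manifests, then remove the monitoring ones
$ sudo ls /etc/kubernetes/addons/
$ sudo rm -r /etc/kubernetes/addons/cluster-monitoring/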
You can also restart your cluster, but run export ENABLE_CLUSTER_MONITORING=none before running kube-up.sh. You can see other environment settings that affect the cluster kube-up.sh builds in cluster/aws/config-default.sh.