How can I distribute the kube config file on worker nodes?
Only my master node has it (file under ~/.kube/config), and I'm not sure what's the proper way to programmatically copy the file over to the worker nodes, so that I can use kubectl on all nodes.
You can use the scp command to copy a file from one machine to another.
Run the following command from your master node for each worker node:
[user@k8s-master]$ scp ~/.kube/config username@k8s-worker1:~/.kube/config
It is not recommended to have ~/.kube/config on the worker nodes: if a worker node is compromised through a vulnerable pod, the attacker could compromise the whole cluster using this config.
That's why it is recommended to use a bastion host and kubectl contexts instead.
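For example, on a bastion host you can keep every cluster in a single kubeconfig and switch between them with contexts. A minimal sketch (the cluster name, user name, and server URL here are hypothetical):

kubectl config set-cluster prod --server=https://<prod-api-server>:6443   # register the cluster
kubectl config set-context prod --cluster=prod --user=prod-admin          # tie cluster and user together
kubectl config use-context prod                                           # kubectl now targets prod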
However, for non-prod environments, you can do something like this:
kubectl get nodes --no-headers | egrep -v "master|controlplane" | awk '{print $1}' | while read node; do
  scp -pr ~/.kube/ ${node}:~/.kube
done
scp -pr will create the .kube directory on a worker node if it doesn't already exist (-p preserves times and modes, -r copies recursively).
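After the copy you can verify from the master that kubectl works on each worker, e.g.:

ssh <worker> 'kubectl get nodes'   # should list the cluster nodes using the copied config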
I am a bit stuck with a problem I have.
I am struggling to set up the CNI plugin for Kubernetes. I tried installing several different CNI plugins, and now I think many things are messed up.
Is there a way to cleanly delete everything connected with a CNI plugin so that I have a clean starting point? The goal is to avoid reformatting my whole machine.
From this Stack Overflow question:
Steps to remove the old Calico configs from Kubernetes without kubeadm reset:
1. Clear the ip routes: ip route flush proto bird
2. Remove all Calico links on all nodes: ip link list | grep cali | awk '{print $2}' | cut -c 1-15 | xargs -I {} ip link delete {}
3. Remove the ipip module: modprobe -r ipip
4. Remove the Calico configs: rm /etc/cni/net.d/10-calico.conflist && rm /etc/cni/net.d/calico-kubeconfig
5. Restart kubelet: service kubelet restart
After those steps none of the running pods will have connectivity, so you have to delete all the pods; once they are recreated, everything works again. This has little impact if you are using a ReplicaSet.
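Put together, the cleanup for a single node might look like this (a sketch of the steps above; the file names assume a default Calico install):

# run on each node, as root
ip route flush proto bird                                                                  # 1. clear routes installed by BIRD
ip link list | grep cali | awk '{print $2}' | cut -c 1-15 | xargs -I {} ip link delete {}  # 2. remove calico links
modprobe -r ipip                                                                           # 3. unload the ipip module
rm /etc/cni/net.d/10-calico.conflist /etc/cni/net.d/calico-kubeconfig                      # 4. remove calico configs
service kubelet restart                                                                    # 5. restart kubelet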
Or else you can use the kubeadm reset command; this will un-configure the Kubernetes cluster on the node.
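For reference, a typical invocation looks like this (note that kubeadm reset does not clean up CNI config files itself; /etc/cni/net.d is the common default location):

kubeadm reset            # asks for confirmation; add -f to skip the prompt
rm -rf /etc/cni/net.d    # remove leftover CNI configs manually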
I would like to use kubectl cp to copy a file from a completed pod to my local host (local computer). I used kubectl cp <namespace>/<pod>:<src-path> <dest-path>, however, it gave me an error: cannot exec into a container in a completed pod; current phase is Succeeded. Is there a way I can copy a file from a completed pod? It does not need to be kubectl cp. Any help appreciated!
Nope. If the pod is gone, it's gone for good. The only possibility would be if the data is stored in a PV or some other external resource. Pods are cattle, not pets.
You can still find the files, because the containers of a pod in the Completed state are not deleted; they are just not running.
I am not aware of any way to do it via Kubernetes itself, but here is how to do it if your container runtime is Docker:
$ ssh <node where the pod is>
$ docker ps -a | grep <pod name>          # find the container ID
$ docker cp <container id>:/your/files ./
The files in containers are just overlayfs mounts; as long as the container still exists, the files still exist.
So if you are using the containerd runtime or something else, look at /var/lib/containers or similar (I don't know where the different runtimes keep their overlayfs mounts, but they have to be somewhere on the node; you can find out where via $ mount).
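If your runtime speaks CRI, something like this should work on the node (a sketch; crictl must be installed, and for exited containers the overlay mount may already be gone, in which case you have to dig into the runtime's state directory):

crictl ps -a | grep <pod name>   # find the container ID
mount | grep <container id>      # if still mounted, shows the overlay merged directory
cp <merged dir>/your/files ./    # copy straight from the mount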
I use k9s to access a bash shell in the pod where I keep the logs of my project.
Reading the logs with cat is annoying, so I want to send them to my PC.
How can I do this?
You can use the kubectl cp command:
kubectl cp default/<some-pod>:/logs/app.log app.log
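kubectl cp also handles directories, so you can pull the whole log directory at once:

kubectl cp default/<some-pod>:/logs ./logs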
I have a .pcap file on the master node which I want to view in Wireshark on my local machine. I access the Kubernetes cluster via the Kops server, by ssh from the local machine. I checked kubectl cp --help, but that only covers copying a file from a remote pod to the Kops server.
If anyone knows how to bring a file from Master Node -> Kops Server -> Local machine, please share your knowledge! Thanks!
Solution is simple - scp, thanks to @OleMarkusWith's quick response.
All I did was:
On Kops Server:
scp admin#<master-node's-external-ip>:/path/to/file /dest/path
On local machine:
scp <kops-server-ip>:/path/to/file /dest/path
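If your local OpenSSH is new enough (scp gained -J in OpenSSH 8.0), you can also do it in a single hop from the local machine, using the same hosts as above:

scp -J <kops-server-ip> admin@<master-node's-external-ip>:/path/to/file /dest/path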
I have tried two different applications, both consisting of a web application frontend that needs to connect to a relational database.
In both cases the frontend application is unable to connect to the database. In both instances the database is also running as a container (pod) in OpenShift, and the web application uses the service name as the URL. Both applications have worked in other OpenShift environments.
Version
OpenShift Master: v1.5.1+7b451fc
Kubernetes Master: v1.5.2+43a9be4
Installed using openshift-ansible
Single node, with master on this node
Host OS: CentOS 7 Minimal
I am not sure where to look in OpenShift to debug this issue. The only way I was able to reach the db pod from the web pod was by using the cluster IP address.
In order for the internal DNS resolution to work, you need to ensure that dnsmasq.service is running and that /etc/resolv.conf contains the IP address of the OCP node itself rather than other DNS servers (those belong in /etc/dnsmasq.d/origin-upstream-dns.conf).
Example:
# ip a s eth0
...
inet 10.0.0.1/24
# cat /etc/resolv.conf
...
nameserver 10.0.0.1
# nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
^^ note the dispatcher script in the /etc/resolv.conf
# systemctl status dnsmasq.service
● dnsmasq.service - DNS caching server.
Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled)
Active: active (running)
# cat /etc/dnsmasq.d/origin-dns.conf
no-resolv
domain-needed
server=/cluster.local/172.18.0.1
^^ this IP should be the kubernetes service IP (oc get svc -n default)
# cat /etc/dnsmasq.d/origin-upstream-dns.conf
server=<dns ip 1>
server=<dns ip 2>
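To confirm that the cluster.local forwarder really points at the kubernetes service, compare it with the service's ClusterIP:

oc get svc kubernetes -n default -o jsonpath='{.spec.clusterIP}'   # should match the server=/cluster.local/... line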
If OpenShift is running on some kind of OpenStack instance, AWS, or similar, it might happen that cloud-init does not trigger the NetworkManager dispatcher script, and therefore resolv.conf is not modified to point to dnsmasq. Try restarting the whole network, e.g.:
# systemctl restart network.service
I hope this helps.
I have been facing issues connecting to databases via SkyDNS as well (e.g. phpMyAdmin). As a workaround I tried entering the ClusterIP instead of the SkyDNS name, and it worked. Have you tried using the service ClusterIP instead?
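You can look up a service's ClusterIP like this (<db-service> and <project> are placeholders):

oc get svc <db-service> -n <project> -o jsonpath='{.spec.clusterIP}'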
We ended up upgrading OpenShift from 4.5.3 to 4.5.7 for now and are observing the status.
It looks like a SkyDNS issue, and I wonder whether it will get resolved in 4.5.7 onwards.
The commands below will tell you whether DNS requests failed or got resolved. Try running them on the bastion node.
Sticky (local DNS query):

DST_HOST=kubernetes.default.svc.cluster.local
while read wide; do
  pod=$(echo ${wide} | awk '{print $1}')
  node=$(echo ${wide} | awk '{print $7}')
  while read wide2; do
    ip=$(echo ${wide2} | awk '{print $6}')
    node2=$(echo ${wide2} | awk '{print $7}')
    echo -ne "`date +"%Y-%m-%d %T"` : ${pod}(${node}) querying ${DST_HOST} via ${ip}(${node2}): "
    oc exec -n openshift-dns ${pod} -- dig ${DST_HOST} +short &>/dev/null && echo ok || echo failed
  done < <(oc get pods -n openshift-dns -o wide --no-headers)
done < <(oc get pods -n openshift-dns -o wide --no-headers)
Random (sprayed DNS query; check whether it gives the same results as above):

DST_HOST=kubernetes.default.svc.cluster.local
while read wide; do
  pod=$(echo ${wide} | awk '{print $1}')
  node=$(echo ${wide} | awk '{print $7}')
  while read wide2; do
    ip=$(echo ${wide2} | awk '{print $6}')
    node2=$(echo ${wide2} | awk '{print $7}')
    echo -ne "`date +"%Y-%m-%d %T"` : ${pod}(${node}) querying ${DST_HOST} via ${ip}(${node2}): "
    oc exec -n openshift-dns ${pod} -- dig @${ip} ${DST_HOST} -p 5353 +short &>/dev/null && echo ok || echo failed
  done < <(oc get pods -n openshift-dns -o wide --no-headers)
done < <(oc get pods -n openshift-dns -o wide --no-headers)
In OpenShift, SkyDNS is part of the master; you can restart the master to restart the internal DNS. But I suggest you try this first:
1. Check whether DNS can resolve your service name using dig.
2. If it fails, it's either a DNS problem or an iptables problem; you can try restarting the kube-proxy (part of the node service) to sync the proxy rules.
If the route is not reachable, it's a DNS issue.
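For step 1, a minimal check could look like this (<service> and <project> are placeholders; the DNS IP is the kubernetes service IP from oc get svc -n default):

dig +short <service>.<project>.svc.cluster.local @<kubernetes-service-ip>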