Kubernetes: no such host for modify hostname slave - kubernetes

Version: kubeadm and kubectl 1.12
I get this error when I change the hostname of one of the Kubernetes slaves.
The error I get from the metric service:
dial tcp: lookup ops-kube-slave-dev-1 on 10.96.0.10:53: no such host
This IP is not the public or private IP and is completely random.
I added this to /etc/hosts:
10.0.1.248 ops-kube-slave-dev-2
10.0.1.154 ops-kube-slave-dev-1
When I execute $ nslookup ops-kube-slave-dev-2, I get the correct IP.
But I still get the same error. I also want to avoid having to create a new certificate for every new node I add.
What is the best solution for auto-joining slave nodes?

The solution is to provide the --hostname-override option in the kubelet configuration (in my case, /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). This lets you change the Kubernetes node name without regenerating the certificates.
For more info, see https://prefetch.net/blog/2017/12/30/getting-your-kubernetes-node-names-right/.
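As an illustrative sketch (the exact drop-in path and environment file can differ between distros and kubeadm versions), the flag can be passed via the KUBELET_EXTRA_ARGS environment file that 10-kubeadm.conf sources:

# /etc/default/kubelet (Debian/Ubuntu) or /etc/sysconfig/kubelet (RHEL/CentOS)
# Hypothetical node name; use whatever name the node should register as.
KUBELET_EXTRA_ARGS=--hostname-override=ops-kube-slave-dev-1

Then restart the kubelet so it re-registers under the new name:

sudo systemctl restart kubelet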
PS: On a side note, the IP you're talking about is not random; it is the IP of your cluster's kube-dns service. You can check it using $ kubectl get svc -n kube-system.
Hope this helps.

Related

how to access pods from host?

I'm running colima with kubernetes like:
colima start --kubernetes
I created a few running pods, and I want to access them through the browser.
But I don't know what is the colima IP (or kubernetes node IP).
help appreciated
You can get the node IP like this:
kubectl get node
NAME STATUS ROLES AGE VERSION
nodeName Ready <none> 15h v1.26.0
Then with the nodeName:
kubectl describe node nodeName
That gives you a description of the node; look for this section:
Addresses:
InternalIP: 10.165.39.165
Hostname: master
Ping it to verify the network.
Find your hosts file on the Mac (/etc/hosts) and make an entry like:
10.165.39.165 test.local
This lets you access the cluster with a domain name.
Ping it to verify.
You cannot access a ClusterIP service from outside the cluster.
To access your pod you have several possibilities.
If your service is of type ClusterIP, you can create a temporary connection from your host with a port forward:
kubectl port-forward svc/yourservicename localport:podport
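For example, assuming a hypothetical Service named my-web with service port 80:

kubectl port-forward svc/my-web 8080:80

While that command is running, the pod behind the Service is reachable in the browser at http://localhost:8080.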
(I would recommend this) create a Service of type NodePort (see the sketch below).
Then
kubectl get svc -o wide
shows you the NodePort (in the range 30000-32767).
You can now access the Pod at test.local:nodePort or ipAddress:nodePort.
Note: If you deployed in a namespace other than default, add -n yournamespace in the kubectl commands.
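As a minimal sketch of such a NodePort Service (the name, label, and ports are placeholders for your app):

apiVersion: v1
kind: Service
metadata:
  name: my-web-nodeport
spec:
  type: NodePort
  selector:
    app: my-web        # must match your pod labels
  ports:
  - port: 80           # service port inside the cluster
    targetPort: 8080   # container port
    # nodePort: 30080  # optional; otherwise one is assigned automatically

Apply it with kubectl apply -f service.yaml; it will then show up in kubectl get svc -o wide with its assigned NodePort.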
Update:
If you want to start colima with an IP address, first find a free address on your local network.
You can get your network settings with:
ifconfig
Find the network; it should be the same as that of your Internet router.
Look for the subnet mask, most likely 255.255.255.0.
The value to pass is then:
--network-address xxx.xxx.xxx.xxx/24
If the subnet is 255.255.0.0, use /16 instead. That is unlikely if you are connecting from home, but inside a company network it is possible.
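For example, following the flag form given above (the address is hypothetical; double-check the exact flag syntax for your colima version with colima start --help):

colima start --kubernetes --network-address 192.168.1.240/24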
Again, check with ping and follow the steps from the beginning to verify the Kubernetes node configuration.

Unable to deploy WSO2 APIM in Minikube Kubernetes cluster

I'm trying to deploy WSO2 APIM on Kubernetes using pattern-1 as described on the GitHub page https://github.com/wso2/kubernetes-apim. I have added my minikube IP to my /etc/hosts file as follows:
[minikube ip] am.wso2.com gateway.am.wso2.com
I'm unable to access the Publisher and Devportal using this URL: https://am.wso2.com/publisher
Is there any other configuration that needs to be done? Any help would be great:). Thanks in advance..
First, make sure all your WSO2 pods are running and they're in the ready state.
kubectl get po -n <your_namespace>
This should show all of them in the Running and Ready state.
Then make sure you have enabled the Ingress addon.
minikube addons list
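If it is not enabled yet, you can turn it on with:

minikube addons enable ingress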
Then make sure Ingress pods are running.
kubectl get po -n ingress-nginx
Next, get the Ingress external IP.
kubectl get ing -A
Get the external IP and the host from the above and add an entry for them to /etc/hosts.
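For example, if the Ingress address were 192.168.49.2 (hypothetical; use the value from the output above), the entry would look like:

192.168.49.2 am.wso2.com gateway.am.wso2.com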
If everything is in place you should be able to access the Publisher by going to https://am.wso2.com/
Try running the command below in the command line.
minikube tunnel

kubernetes pod cannot resolve local hostnames but can resolve external ones like google.com

I am trying Kubernetes and seem to have hit a bit of a hurdle. The problem is that from within my pod I can't curl local hostnames such as wrkr1 or wrkr2 (machine hostnames on my network), but I can successfully resolve hostnames such as google.com or stackoverflow.com.
My cluster is a basic setup with one master and 2 worker nodes.
What works from within the pod:
curl to google.com from pod -- works
curl to another service(kubernetes) from pod -- works
curl to another machine on same LAN via its IP address such as 192.168.x.x -- works
curl to another machine on same LAN via its hostname such as wrkr1 -- does not work
What works from the node hosting pod:
curl to google.com -- works
curl to another machine on same LAN via its IP address such as 192.168.x.x -- works
curl to another machine on same LAN via its hostname such as wrkr1 -- works.
Note: the pod CIDR is completely different from the IP range used on the LAN.
The node contains a hosts file with an entry corresponding to wrkr1's IP address (I've checked that the node is able to resolve the hostname without it as well, but I read somewhere that a pod inherits its node's DNS resolution, so I've kept the entry).
Kubernetes Version: 1.19.14
Ubuntu Version: 18.04 LTS
I need help figuring out whether this is normal behavior, and what can be done if I want the pod to be able to resolve hostnames on the local LAN as well.
What happens
Need help as to whether this is normal behavior
This is normal behaviour. There is no DNS server in the network where your virtual machines are hosted, and Kubernetes has its own DNS server inside the cluster, so it simply doesn't know what happens on your host, especially in /etc/hosts, because pods simply don't have access to this file.
I read somewhere that a pod inherits its nodes DNS resolution so I've kept the entry
This is a point where tricky thing happens. There are four available DNS policies which are applied per pod. We will take a look at two of them which are usually used:
"Default": The Pod inherits the name resolution configuration from the node that the pods run on. See related discussion for more details.
"ClusterFirst": Any DNS query that does not match the configured cluster domain suffix, such as "www.kubernetes.io", is forwarded to the upstream nameserver inherited from the node. Cluster administrators may have extra stub-domain and upstream DNS servers configured
The trickiest ever part is this (from the same link above):
Note: "Default" is not the default DNS policy. If dnsPolicy is not
explicitly specified, then "ClusterFirst" is used.
That means that all pods that do not have a DNS policy set will run with ClusterFirst and won't be able to see /etc/resolv.conf on the host. I tried changing this to Default and indeed, the pod could then resolve everything the host can; however, internal resolution stops working, so it's not an option.
For example, the coredns deployment is run with the Default dnsPolicy, which allows coredns to resolve hosts.
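For illustration, a minimal Pod sketch with the policy set explicitly (the name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: dns-default-test
spec:
  dnsPolicy: Default          # inherit the node's resolver instead of cluster DNS
  containers:
  - name: test
    image: busybox:1.36
    command: ["sleep", "3600"]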
How this can be resolved
1. Add local domain to coreDNS
This requires adding an A record per host. Here's part of the edited coredns ConfigMap:
This line should be inside the .:53 { } block:
file /etc/coredns/local.record local
This part goes right after the block above ends (the SOA information was taken from the example; it doesn't make any difference here):
local.record: |
  local. IN SOA sns.dns.icann.org. noc.dns.icann.org. 2015082541 7200 3600 1209600 3600
  wrkr1. IN A 172.10.10.10
  wrkr2. IN A 172.11.11.11
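Put together, the relevant part of the edited ConfigMap would look roughly like this (other Corefile plugins are omitted for brevity; only the file line and the local.record key are the additions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        kubernetes cluster.local in-addr.arpa ip6.arpa
        file /etc/coredns/local.record local
        forward . /etc/resolv.conf
        cache 30
    }
  local.record: |
    local. IN SOA sns.dns.icann.org. noc.dns.icann.org. 2015082541 7200 3600 1209600 3600
    wrkr1. IN A 172.10.10.10
    wrkr2. IN A 172.11.11.11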
Then the coreDNS deployment should be edited to include this file:
$ kubectl edit deploy coredns -n kube-system
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: Corefile
            path: Corefile
          - key: local.record # 1st line to add
            path: local.record # 2nd line to add
          name: coredns
And restart coreDNS deployment:
$ kubectl rollout restart deploy coredns -n kube-system
Just in case check if coredns pods are running and ready:
$ kubectl get pods -A | grep coredns
kube-system coredns-6ddbbfd76-mk2wv 1/1 Running 0 4h46m
kube-system coredns-6ddbbfd76-ngrmq 1/1 Running 0 4h46m
If everything's done correctly, newly created pods will now be able to resolve hosts by their names. Please find an example in the coredns documentation.
2. Set up DNS server in the network
While avahi looks similar to a DNS server, it does not act like one. It's not possible to set up request forwarding from coredns to avahi, but it is possible to set up a proper DNS server in the network and have everything resolved that way.
3. Deploy avahi to kubernetes cluster
There's a ready-made avahi image here. If it's deployed into the cluster with dnsPolicy set to ClusterFirstWithHostNet and, most importantly, hostNetwork: true, it will be able to use the host adapter to discover all available hosts within the network.
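A hedged sketch of the relevant spec fields (the name and image reference are placeholders; use the image linked above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: avahi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: avahi
  template:
    metadata:
      labels:
        app: avahi
    spec:
      hostNetwork: true                   # use the node's network adapter
      dnsPolicy: ClusterFirstWithHostNet  # required alongside hostNetwork
      containers:
      - name: avahi
        image: example/avahi:latest       # placeholder for the linked image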
Useful links:
Pods DNS policy
Custom DNS entries for kubernetes

Failed calling webhook "namespace.sidecar-injector.istio.io"

I had my deployment working with the Istio ingress gateway before. I am not aware of any changes made on the Istio or k8s side.
When I try to deploy, I see an error on the ReplicaSet side, which is why it cannot create a new pod.
Error creating: Internal error occurred: failed calling webhook
"namespace.sidecar-injector.istio.io": Post
"https://istiod.istio-system.svc:443/inject?timeout=10s": dial tcp
10.104.136.116:443: connect: no route to host
When I try to go inside api-server and ping 10.104.136.116 (istiod service IP) it just hangs.
What I have tried so far:
Deleted all coredns pods
Deleted all istiod pods
Deleted all weave pods
Reinstalling istio via istioctl x uninstall --purge
turning off all of the VMs' firewall rules:
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables -F
restarted all of the nodes
manual istio pod injection
Setup
k8s version: 1.21.2
istio: 1.10.3
HA setup
CNI: weave
CRI: containerd
In my case this was related to the firewall. More info can be found here.
The gist of it is that, on GKE at least, you need to open another port, 15017, in addition to 10250 and 443. This is to allow communication from your master node(s) to your VPC.
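On GKE, a hedged example of opening that port from the control plane to the nodes (the rule name, network, node tag, and master CIDR below are placeholders for your cluster's values):

gcloud compute firewall-rules create allow-master-to-istiod-webhook \
  --network my-vpc \
  --direction INGRESS \
  --source-ranges 172.16.0.0/28 \
  --target-tags my-gke-node-tag \
  --allow tcp:15017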
I don't have a definite answer as to why this is happening, but kube-apiserver cannot access istiod via the service IP, whereas it can connect when I use the istiod pod IP.
Since I don't have control over the VMs and the lower networking layer, and I'm not sure whether something was changed there (because it was working before),
I made this work by changing my CNI from weave to flannel.
In my case it was due to the firewall. Following this Istio debug guide, I identified that the kubectl get --raw /api/v1/namespaces/istio-system/services/https:istiod:https-webhook/proxy/inject -v4 command was timing out while all other cluster-internal calls were OK.
The best way to diagnose this is to temporarily open the AWS Security Groups involved to 0.0.0.0/0 for port 15017 and then try again.
If the error no longer shows up, you know this is the part that needs fixing.
I am using EKS with Amazon VPC CNI v1.12.2-eksbuild.1

Can't access kubernetes service which have externalTrafficPolicy as "Local"

I'm following this guide to preserve source ip for service type nodeport.
kubectl create deployment source-ip-app --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080
At this point my service is accessible externally with nodeip:nodeport
When I change the service traffic policy,
kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
my service is not accessible.
I found a similar issue, but the solution is not very helpful or understandable to me. I saw some GitHub threads saying it has something to do with hostname override in kube-proxy, but I'm not clear on that either.
I'm using kubernetes version v1.15.3. Kube proxy is running in iptables mode. I have a single master node and few worker nodes.
I'm facing the same issue in my minikube too.
Any help would be greatly appreciated.
From the docs here
If there are no local endpoints, packets sent to the node are dropped
So you need to use the correct node IP of the Kubernetes node to access the service. Here the correct node IP is the IP of the node where the pod is scheduled.
This is not necessary if you can make sure every node (master and workers) has a replica of the pod.
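To see which node that is (the app=source-ip-app label comes from the deployment created earlier):

kubectl get pods -l app=source-ip-app -o wide

The NODE column shows the node whose IP will answer on the NodePort while externalTrafficPolicy is Local.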