No matches for kind "Ingress" in version "networking.k8s.io/v1" - kubernetes

I am getting an error message indicating that the Kubernetes resource defined in the file "argocdingress.yaml" is of kind "Ingress", but the API version being used (networking.k8s.io/v1) does not have a resource type called "Ingress".
[root@uat-master01 yaml]# kubectl apply -f argocdingress.yaml
error: unable to recognize "argocdingress.yaml": no matches for kind "Ingress" in version "networking.k8s.io/v1"
argocdingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocdingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: argocd.fonepay.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: http
[root@uat-master01 yaml]# kubectl get all -n argocd
NAME READY STATUS RESTARTS AGE
pod/argocd-redis-66b48966cb-v4vwg 1/1 Running 0 19h
pod/argocd-repo-server-7d956f8689-bl5dr 1/1 Running 0 19h
pod/argocd-server-598494dbc7-pd2g5 1/1 Running 0 19h
pod/argocd-application-controller-0 1/1 Running 0 19h
pod/argocd-dex-server-8b975c7cc-42b26 1/1 Running 0 19h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/argocd-applicationset-controller ClusterIP 10.43.157.126 <none> 7000/TCP,8080/TCP 19h
service/argocd-dex-server ClusterIP 10.43.120.182 <none> 5556/TCP,5557/TCP,5558/TCP 19h
service/argocd-metrics ClusterIP 10.43.202.235 <none> 8082/TCP 19h
service/argocd-notifications-controller-metrics ClusterIP 10.43.99.138 <none> 9001/TCP 19h
service/argocd-redis ClusterIP 10.43.118.67 <none> 6379/TCP 19h
service/argocd-repo-server ClusterIP 10.43.9.59 <none> 8081/TCP,8084/TCP 19h
service/argocd-server ClusterIP 10.43.251.211 <none> 80/TCP,443/TCP 19h
service/argocd-server-metrics ClusterIP 10.43.5.168 <none> 8083/TCP 19h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/argocd-redis 1/1 1 1 19h
deployment.apps/argocd-repo-server 1/1 1 1 19h
deployment.apps/argocd-server 1/1 1 1 19h
deployment.apps/argocd-dex-server 1/1 1 1 19h
NAME DESIRED CURRENT READY AGE
replicaset.apps/argocd-redis-66b48966cb 1 1 1 19h
replicaset.apps/argocd-repo-server-7d956f8689 1 1 1 19h
replicaset.apps/argocd-server-598494dbc7 1 1 1 19h
replicaset.apps/argocd-dex-server-8b975c7cc 1 1 1 19h
NAME READY AGE
statefulset.apps/argocd-application-controller 1/1 19h
nodeInfo:
  architecture: amd64
  bootID: 031dadd6-5586-453e-8add-ade7591791b7
  containerRuntimeVersion: containerd://1.3.3-k3s2
  kernelVersion: 3.10.0-1160.81.1.el7.x86_64
  kubeProxyVersion: v1.18.9+k3s1
  kubeletVersion: v1.18.9+k3s1
  machineID: 091bbe3a46464c0bac4082cd351b8db4
  operatingSystem: linux
  osImage: CentOS Linux 7 (Core)
  systemUUID: CADD0442-7251-AE77-6375-D151A94CF9A7
Check the cluster version: the node info above shows kubeletVersion: v1.18.9+k3s1, and the Ingress kind is only served under networking.k8s.io/v1 from Kubernetes v1.19 onwards. On v1.18 use the older API group by updating
apiVersion: networking.k8s.io/v1beta1
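For illustration only (not part of the original answer), this is a sketch of the same Ingress rewritten for v1beta1; note the flatter backend syntax with serviceName/servicePort. You can confirm which versions your cluster actually serves with kubectl api-resources | grep -i ingress.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: argocdingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: argocd.fonepay.com
    http:
      paths:
      - path: /
        backend:
          serviceName: argocd-server
          servicePort: http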

Related

istio integration with prometheus

I'm using a microk8s installation of Kubernetes and have installed Istio and Prometheus using:
microk8s enable prometheus
and
microk8s enable istio
I have also successfully added the built-in Prometheus and Grafana of Istio following the doc. However, now I have two separate Prometheus and Grafana stacks in my cluster: 1. the one I installed using microk8s, and 2. the one installed by Istio.
I want to follow the instructions for option 2 in the doc and add extra configuration to the microk8s Prometheus, following Istio Prometheus federation (this site). But I don't know where exactly I should add the configs mentioned in the link to my existing microk8s Prometheus. Here is the list of all objects in my monitoring namespace (microk8s Prometheus):
(base) ➜ inference kubectl get all -n monitoring
NAME READY STATUS RESTARTS AGE
pod/alertmanager-main-0 2/2 Running 2 (6h3m ago) 19h
pod/prometheus-adapter-59df95d9f5-c6v99 1/1 Running 1 (6h3m ago) 19h
pod/prometheus-operator-7775c66ccf-bl7qj 2/2 Running 2 (6h3m ago) 19h
pod/blackbox-exporter-55c457d5fb-rg9x4 3/3 Running 3 (6h3m ago) 19h
pod/node-exporter-sx4fw 2/2 Running 2 (6h3m ago) 19h
pod/kube-state-metrics-76f6cb7996-xmpss 3/3 Running 3 (6h3m ago) 19h
pod/grafana-6dd5b5f65-l44p4 1/1 Running 1 (6h3m ago) 19h
pod/prometheus-adapter-59df95d9f5-ksp85 1/1 Running 1 (6h3m ago) 19h
pod/prometheus-k8s-0 2/2 Running 3 (6h3m ago) 19h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/prometheus-operator ClusterIP None <none> 8443/TCP 19h
service/alertmanager-main ClusterIP 10.152.183.234 <none> 9093/TCP 19h
service/blackbox-exporter ClusterIP 10.152.183.176 <none> 9115/TCP,19115/TCP 19h
service/kube-state-metrics ClusterIP None <none> 8443/TCP,9443/TCP 19h
service/node-exporter ClusterIP None <none> 9100/TCP 19h
service/prometheus-adapter ClusterIP 10.152.183.240 <none> 443/TCP 19h
service/alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 19h
service/prometheus-operated ClusterIP None <none> 9090/TCP 19h
service/prometheus-k8s NodePort 10.152.183.52 <none> 9090:30090/TCP 19h
service/grafana NodePort 10.152.183.15 <none> 3000:30300/TCP 19h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/node-exporter 1 1 1 1 1 kubernetes.io/os=linux 19h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/prometheus-operator 1/1 1 1 19h
deployment.apps/blackbox-exporter 1/1 1 1 19h
deployment.apps/kube-state-metrics 1/1 1 1 19h
deployment.apps/grafana 1/1 1 1 19h
deployment.apps/prometheus-adapter 2/2 2 2 19h
NAME DESIRED CURRENT READY AGE
replicaset.apps/prometheus-operator-7775c66ccf 1 1 1 19h
replicaset.apps/blackbox-exporter-55c457d5fb 1 1 1 19h
replicaset.apps/kube-state-metrics-76f6cb7996 1 1 1 19h
replicaset.apps/grafana-6dd5b5f65 1 1 1 19h
replicaset.apps/prometheus-adapter-59df95d9f5 2 2 2 19h
NAME READY AGE
statefulset.apps/alertmanager-main 1/1 19h
statefulset.apps/prometheus-k8s 1/1 19h
(base) ➜ inference
Which one should I edit? I'm very confused here.
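(Not an answer from the original thread, just a hedged sketch of the usual mechanism: the stack above is managed by the Prometheus Operator, so extra scrape jobs are normally not edited into any of the Deployments or StatefulSets directly; they go into a Secret that the Prometheus custom resource, typically named k8s in the monitoring namespace, references via additionalScrapeConfigs. The names and the match expression below are illustrative placeholders; the real scrape config comes from the Istio federation doc.)

# Illustrative Secret holding the extra scrape config (hypothetical names)
apiVersion: v1
kind: Secret
metadata:
  name: additional-scrape-configs
  namespace: monitoring
stringData:
  prometheus-additional.yaml: |
    - job_name: federate-istio          # placeholder; copy the job from the Istio doc
      honor_labels: true
      metrics_path: /federate
      params:
        'match[]':
          - '{__name__=~"istio_(.*)"}'  # placeholder match expression
      static_configs:
        - targets:
          - prometheus.istio-system.svc:9090

# Then reference it from the Prometheus CR (e.g. kubectl -n monitoring edit prometheus k8s):
#   spec:
#     additionalScrapeConfigs:
#       name: additional-scrape-configs
#       key: prometheus-additional.yaml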

message: "services "kubernetes-dashboard" not found",

Kubernetes Dashboard docs ( https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/ )
say:
Kubectl will make Dashboard available at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.
but I get this when using that URL:
{
kind: "Status",
apiVersion: "v1",
metadata: { },
status: "Failure",
message: "services "kubernetes-dashboard" not found",
reason: "NotFound",
details: {
name: "kubernetes-dashboard",
kind: "services"
},
code: 404
}
FYI
[vagrant@master ~]$ kubectl get all -n kube-system
NAME READY STATUS RESTARTS AGE
pod/coredns-54f8cbd98d-92d2m 1/1 Running 0 19m
pod/coredns-54f8cbd98d-v487z 1/1 Running 0 19m
pod/etcd-master.vagrant.vm 1/1 Running 0 19m
pod/kube-apiserver-master.vagrant.vm 1/1 Running 0 18m
pod/kube-controller-manager-master.vagrant.vm 1/1 Running 1 19m
pod/kube-flannel-ds-2tr49 1/1 Running 0 18m
pod/kube-flannel-ds-552d5 1/1 Running 0 13m
pod/kube-flannel-ds-dbv5p 1/1 Running 0 16m
pod/kube-proxy-29l59 1/1 Running 0 13m
pod/kube-proxy-ctsnx 1/1 Running 0 16m
pod/kube-proxy-swxhg 1/1 Running 0 19m
pod/kube-scheduler-master.vagrant.vm 1/1 Running 1 19m
pod/kubernetes-dashboard-86b8d78468-6c52b 1/1 Running 0 19m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 19m
service/kubernetes-dashboard ClusterIP 10.103.110.54 <none> 443/TCP 19m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/kube-flannel-ds 3 3 3 3 3 beta.kubernetes.io/arch=amd64 19m
daemonset.apps/kube-proxy 3 3 3 3 3 <none> 19m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/coredns 2 2 2 2 19m
deployment.apps/kubernetes-dashboard 1 1 1 1 19m
NAME DESIRED CURRENT READY AGE
replicaset.apps/coredns-54f8cbd98d 2 2 2 19m
replicaset.apps/coredns-85d6cff8d8 0 0 0 19m
replicaset.apps/kubernetes-dashboard-86b8d78468 1 1 1 19m
The docs are probably not updated. Since the service is created in the kube-system namespace, you can use the URL below:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
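For completeness, a hedged example of the full flow (assuming kubectl proxy is run with its default port 8001):

# Start the local proxy to the API server (binds to 127.0.0.1:8001 by default)
kubectl proxy &

# The dashboard Service lives in kube-system in this cluster, so proxy to it there
curl http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/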

k3s - can't access from one pod to another if pods on different master nodes (HighAvailability setup)

k3s - can't access from one pod to another if pods on different nodes
Update:
I've narrowed the issue down - it's pods that are on other master nodes that can't communicate with those on the original master
pods on rpi4-server1 - the original cluster - can communicate with pods on rpi-worker01 and rpi3-worker02
pods on rpi4-server2 are unable to communicate with the others
I'm trying to run a High Availability cluster with the embedded DB, using flannel/vxlan.
I'm trying to set up a project with 5 services in k3s.
When all of the pods are contained on a single node, they work together fine.
As soon as I add other nodes into the system and pods are deployed to them, the links seem to break.
In troubleshooting I've exec'd into one of the pods and tried to curl another. When they are on the same node this works, if the second service is on another node it doesn't.
I'm sure this is something simple that I'm missing, but I can't work it out! Help appreciated.
Key details:
Using k3s and native traefik
Two rpi4s as servers (High Availability) and two rpi3s as worker nodes
metallb as loadbalancer
Two services, blah-interface and blah-svc, are configured as LoadBalancer to allow external access. The others (blah-server, n34 and test-apis) are NodePort to support debugging, but only really need internal access.
Info on nodes, pods and services....
pi@rpi4-server1:~/Projects/test_demo_2020/test_kube_config/testchart/templates $ sudo kubectl get nodes --all-namespaces -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
rpi4-server1 Ready master 11h v1.17.0+k3s.1 192.168.0.140 <none> Raspbian GNU/Linux 10 (buster) 4.19.75-v7l+ docker://19.3.5
rpi-worker01 Ready,SchedulingDisabled <none> 10h v1.17.0+k3s.1 192.168.0.41 <none> Raspbian GNU/Linux 10 (buster) 4.19.66-v7+ containerd://1.3.0-k3s.5
rpi3-worker02 Ready,SchedulingDisabled <none> 10h v1.17.0+k3s.1 192.168.0.142 <none> Raspbian GNU/Linux 10 (buster) 4.19.75-v7+ containerd://1.3.0-k3s.5
rpi4-server2 Ready master 10h v1.17.0+k3s.1 192.168.0.143 <none> Raspbian GNU/Linux 10 (buster) 4.19.75-v7l+ docker://19.3.5
pi@rpi4-server1:~/Projects/test_demo_2020/test_kube_config/testchart/templates $ sudo kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system helm-install-traefik-l2z6l 0/1 Completed 2 11h 10.42.0.2 rpi4-server1 <none> <none>
test-demo n34-5c7b9475cb-zjlgl 1/1 Running 1 4h30m 10.42.0.32 rpi4-server1 <none> <none>
kube-system metrics-server-6d684c7b5-5wgf9 1/1 Running 3 11h 10.42.0.26 rpi4-server1 <none> <none>
metallb-system speaker-62rkm 0/1 Pending 0 99m <none> rpi-worker01 <none> <none>
metallb-system speaker-2shzq 0/1 Pending 0 99m <none> rpi3-worker02 <none> <none>
metallb-system speaker-2mcnt 1/1 Running 0 99m 192.168.0.143 rpi4-server2 <none> <none>
metallb-system speaker-v8j9g 1/1 Running 0 99m 192.168.0.140 rpi4-server1 <none> <none>
metallb-system controller-65895b47d4-pgcs6 1/1 Running 0 90m 10.42.0.49 rpi4-server1 <none> <none>
test-demo blah-server-858ccd7788-mnf67 1/1 Running 0 64m 10.42.0.50 rpi4-server1 <none> <none>
default nginx2-6f4f6f76fc-n2kbq 1/1 Running 0 22m 10.42.0.52 rpi4-server1 <none> <none>
test-demo blah-interface-587fc66bf9-qftv6 1/1 Running 0 22m 10.42.0.53 rpi4-server1 <none> <none>
test-demo blah-svc-6f8f68f46-gqcbw 1/1 Running 0 21m 10.42.0.54 rpi4-server1 <none> <none>
kube-system coredns-d798c9dd-hdwn5 1/1 Running 1 11h 10.42.0.27 rpi4-server1 <none> <none>
kube-system local-path-provisioner-58fb86bdfd-tjh7r 1/1 Running 31 11h 10.42.0.28 rpi4-server1 <none> <none>
kube-system traefik-6787cddb4b-tgq6j 1/1 Running 0 4h50m 10.42.1.23 rpi4-server2 <none> <none>
default testdemo2020-testchart-6f8d44b496-2hcfc 1/1 Running 1 6h31m 10.42.0.29 rpi4-server1 <none> <none>
test-demo test-apis-75bb68dcd7-d8rrp 1/1 Running 0 7m13s 10.42.1.29 rpi4-server2 <none> <none>
pi@rpi4-server1:~/Projects/test_demo_2020/test_kube_config/testchart/templates $ sudo kubectl get svc --all-namespaces -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 11h <none>
kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 11h k8s-app=kube-dns
kube-system metrics-server ClusterIP 10.43.74.118 <none> 443/TCP 11h k8s-app=metrics-server
kube-system traefik-prometheus ClusterIP 10.43.78.135 <none> 9100/TCP 11h app=traefik,release=traefik
test-demo blah-server NodePort 10.43.224.128 <none> 5055:31211/TCP 10h io.kompose.service=blah-server
default testdemo2020-testchart ClusterIP 10.43.91.7 <none> 80/TCP 10h app.kubernetes.io/instance=testdemo2020,app.kubernetes.io/name=testchart
test-demo traf-dashboard NodePort 10.43.60.155 <none> 8080:30808/TCP 10h io.kompose.service=traf-dashboard
test-demo test-apis NodePort 10.43.248.59 <none> 8075:31423/TCP 7h11m io.kompose.service=test-apis
kube-system traefik LoadBalancer 10.43.168.18 192.168.0.240 80:30688/TCP,443:31263/TCP 11h app=traefik,release=traefik
default nginx2 LoadBalancer 10.43.249.123 192.168.0.241 80:30497/TCP 92m app=nginx2
test-demo n34 NodePort 10.43.171.206 <none> 7474:30474/TCP,7687:32051/TCP 72m io.kompose.service=n34
test-demo blah-interface LoadBalancer 10.43.149.158 192.168.0.242 80:30634/TCP 66m io.kompose.service=blah-interface
test-demo blah-svc LoadBalancer 10.43.19.242 192.168.0.243 5005:30005/TCP,5006:31904/TCP,5002:30685/TCP 51m io.kompose.service=blah-svc
Hi, your issue could be related to the following.
I configured the network under /etc/systemd/network/eth0.network (the filename may differ in your case, since I am using Arch Linux on all Pis):
[Match]
Name=eth0
[Network]
Address=x.x.x.x/24 # ip of node
Gateway=x.x.x.x # ip of gateway router
Domains=default.svc.cluster.local svc.cluster.local cluster.local
DNS=10.x.x.x x.x.x.x # k3s dns ip, then ip of gateway router
After that I removed the 10.x.x.x routes with ip route del 10.x.x.x dev [flannel|cni0] on every node and restarted them.
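For illustration only (the 10.42.x.0/24 subnets below are the default k3s cluster CIDR visible in the pod list above, and flannel.1/cni0 are the usual interface names; use whatever ip route actually shows on each node):

# See which routes the node currently has toward the cluster network
ip route | grep '^10\.42'

# Remove the stale routes (example subnets and devices; adjust to your output)
sudo ip route del 10.42.0.0/24 dev cni0
sudo ip route del 10.42.1.0/24 dev flannel.1

# Restart the node so k3s/flannel recreate the routes with the new network config
sudo reboot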

kube-system pods core-dns and dashboard are pending

[root@master /]# kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-5644d7b6d9-97vkp 0/1 Pending 0 17h
kube-system pod/coredns-5644d7b6d9-p7mnl 0/1 Pending 0 17h
kube-system pod/etcd-master.pronteelabs.com 1/1 Running 0 17h
kube-system pod/kube-apiserver-master.pronteelabs.com 1/1 Running 0 17h
kube-system pod/kube-controller-manager-master.pronteelabs.com 1/1 Running 0 17h
kube-system pod/kube-flannel-ds-amd64-r2rp8 1/1 Running 0 17h
kube-system pod/kube-flannel-ds-amd64-xp25f 1/1 Running 1 49m
kube-system pod/kube-proxy-k4hw5 1/1 Running 0 17h
kube-system pod/kube-proxy-nrzrv 1/1 Running 0 49m
kube-system pod/kube-scheduler-master.pronteelabs.com 1/1 Running 0 17h
kube-system pod/kubernetes-dashboard-7c54d59f66-9w5b7 0/1 Pending 0 45m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h
kube-system service/heapster ClusterIP 10.98.205.214 <none> 80/TCP 45m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 17h
kube-system service/kubernetes-dashboard ClusterIP 10.105.192.154 <none> 443/TCP 45m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kube-flannel-ds-amd64 2 2 2 2 2 beta.kubernetes.io/arch=amd64 17h
kube-system daemonset.apps/kube-flannel-ds-arm 0 0 0 0 0 beta.kubernetes.io/arch=arm 17h
kube-system daemonset.apps/kube-flannel-ds-arm64 0 0 0 0 0 beta.kubernetes.io/arch=arm64 17h
kube-system daemonset.apps/kube-flannel-ds-ppc64le 0 0 0 0 0 beta.kubernetes.io/arch=ppc64le 17h
kube-system daemonset.apps/kube-flannel-ds-s390x 0 0 0 0 0 beta.kubernetes.io/arch=s390x 17h
kube-system daemonset.apps/kube-proxy 2 2 2 2 2 beta.kubernetes.io/os=linux 17h
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 0/2 2 0 17h
kube-system deployment.apps/kubernetes-dashboard 0/1 1 0 45m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-5644d7b6d9 2 2 0 17h
kube-system replicaset.apps/kubernetes-dashboard-7c54d59f66 1 1 0 45m
Can you send the logs for the pods in the Pending state? Also try describing the pods to see why they are unschedulable (insufficient resources, disk pressure, etc.). Just by looking at this output it is hard to say. Also mention which version of Kubernetes you are using, how many nodes you have, the resources available per node, and so on.
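For example, with the pod names taken from the output above:

kubectl -n kube-system describe pod coredns-5644d7b6d9-97vkp
kubectl -n kube-system describe pod kubernetes-dashboard-7c54d59f66-9w5b7

# Pending pods usually have no container logs yet, but it does not hurt to check
kubectl -n kube-system logs coredns-5644d7b6d9-97vkp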
After reading your updates:
It looks like a systemd issue where Docker is not able to find the CNI settings. Can you try applying a network add-on such as Weave or Flannel to your cluster and see if it works?
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Helm error: dial tcp *:10250: i/o timeout

Created a local cluster using Vagrant + Ansible + VirtualBox. Manually deploying works fine, but when using Helm:
:~$ helm install stable/nginx-ingress --name nginx-ingress-controller --set rbac.create=true
Error: forwarding ports: error upgrading connection: error dialing backend: dial tcp 10.0.52.15:10250: i/o timeout
Kubernetes cluster info:
:~$ kubectl get nodes,po,deploy,svc,ingress --all-namespaces -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node/ubuntu18-kube-master Ready master 32m v1.13.3 10.0.51.15 <none> Ubuntu 18.04.1 LTS 4.15.0-43-generic docker://18.6.1
node/ubuntu18-kube-node-1 Ready <none> 31m v1.13.3 10.0.52.15 <none> Ubuntu 18.04.1 LTS 4.15.0-43-generic docker://18.6.1
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default pod/nginx-server 1/1 Running 0 40s 10.244.1.5 ubuntu18-kube-node-1 <none> <none>
default pod/nginx-server-b8d78876d-cgbjt 1/1 Running 0 4m25s 10.244.1.4 ubuntu18-kube-node-1 <none> <none>
kube-system pod/coredns-86c58d9df4-5rsw2 1/1 Running 0 31m 10.244.0.2 ubuntu18-kube-master <none> <none>
kube-system pod/coredns-86c58d9df4-lfbvd 1/1 Running 0 31m 10.244.0.3 ubuntu18-kube-master <none> <none>
kube-system pod/etcd-ubuntu18-kube-master 1/1 Running 0 31m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/kube-apiserver-ubuntu18-kube-master 1/1 Running 0 30m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/kube-controller-manager-ubuntu18-kube-master 1/1 Running 0 30m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/kube-flannel-ds-amd64-jffqn 1/1 Running 0 31m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/kube-flannel-ds-amd64-vc6p2 1/1 Running 0 31m 10.0.52.15 ubuntu18-kube-node-1 <none> <none>
kube-system pod/kube-proxy-fbgmf 1/1 Running 0 31m 10.0.52.15 ubuntu18-kube-node-1 <none> <none>
kube-system pod/kube-proxy-jhs6b 1/1 Running 0 31m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/kube-scheduler-ubuntu18-kube-master 1/1 Running 0 31m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/tiller-deploy-69ffbf64bc-x8lkc 1/1 Running 0 24m 10.244.1.2 ubuntu18-kube-node-1 <none> <none>
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
default deployment.extensions/nginx-server 1/1 1 1 4m25s nginx-server nginx run=nginx-server
kube-system deployment.extensions/coredns 2/2 2 2 32m coredns k8s.gcr.io/coredns:1.2.6 k8s-app=kube-dns
kube-system deployment.extensions/tiller-deploy 1/1 1 1 24m tiller gcr.io/kubernetes-helm/tiller:v2.12.3 app=helm,name=tiller
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 32m <none>
default service/nginx-server NodePort 10.99.84.201 <none> 80:31811/TCP 12s run=nginx-server
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 32m k8s-app=kube-dns
kube-system service/tiller-deploy ClusterIP 10.99.4.74 <none> 44134/TCP 24m app=helm,name=tiller
Vagrantfile:
...
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  $hosts.each_with_index do |(hostname, parameters), index|
    ip_address = "#{$subnet}.#{$ip_offset + index}"
    config.vm.define vm_name = hostname do |vm_config|
      vm_config.vm.hostname = hostname
      vm_config.vm.box = box
      vm_config.vm.network "private_network", ip: ip_address
      vm_config.vm.provider :virtualbox do |vb|
        vb.gui = false
        vb.name = hostname
        vb.memory = parameters[:memory]
        vb.cpus = parameters[:cpus]
        vb.customize ['modifyvm', :id, '--macaddress1', "08002700005#{index}"]
        vb.customize ['modifyvm', :id, '--natnet1', "10.0.5#{index}.0/24"]
      end
    end
  end
end
Workaround for the VirtualBox issue: set a different macaddress and internal IP.
It would be interesting to find a solution that could be placed in one of the configuration files (Vagrantfile, Ansible roles). Any ideas on the problem?
Error: forwarding ports: error upgrading connection: error dialing backend: dial tcp 10.0.52.15:10250: i/o timeout
You're getting bitten by a very common kubernetes-on-Vagrant bug: the kubelet believes its IP address is the one on eth0, which is the NAT interface in Vagrant, instead of (what I hope you have) the private_network address in your Vagrantfile. Since the API server has to dial the kubelet directly at the address it advertises, things like kubectl exec and kubectl logs fail in exactly the way you see.
The solution is to force the kubelet to bind to the private network interface, or I guess you could switch your Vagrantfile to use a bridged network, if that's an option for you, just so long as the interface isn't the NAT one.
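A hedged sketch of what that usually looks like on a kubeadm-provisioned node (the file path varies by distro, and 172.16.0.11 is a made-up placeholder for whatever ip_address your Vagrantfile assigns to this node's private_network interface):

# /etc/default/kubelet  (Debian/Ubuntu; /etc/sysconfig/kubelet on RHEL-based boxes)
# Make the kubelet advertise the private_network address instead of the NAT (eth0) one.
KUBELET_EXTRA_ARGS=--node-ip=172.16.0.11

Then restart the kubelet on each node (sudo systemctl daemon-reload && sudo systemctl restart kubelet) and re-check the INTERNAL-IP column in kubectl get nodes -o wide.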
However you manage TLS certificates in the cluster, make sure that port 10250 (the kubelet API) is reachable.
Here is an example of how I fixed it when trying to exec into a pod running on a node (an AWS instance in my case):
resource "aws_security_group" "My_VPC_Security_Group" {
...
ingress {
description = "TLS from VPC"
from_port = 10250
to_port = 10250
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
For more details you can visit [1]: http://carnal0wnage.attackresearch.com/2019/01/kubernetes-unauth-kublet-api-10250.html
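(A quick, hedged reachability check from the machine where Helm runs, using the node IP from the error above: a hang or timeout reproduces the symptom, while an HTTP error such as 401 Unauthorized means the port is actually open.)

curl -kv https://10.0.52.15:10250/pods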