Kubernetes: Unable to ping pod IP on other node - kubernetes

Pod IPs only respond to ping from the same node.
When I try pinging a pod IP from another node/worker, the ping fails.
master2#master2:~$ kubectl get pods --namespace=kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-6ff8cbb789-lxwqq 1/1 Running 0 6d21h 192.168.180.2 master2 <none> <none>
calico-node-4mnfk 1/1 Running 0 4d20h 10.10.41.165 node3 <none> <none>
calico-node-c4rjb 1/1 Running 0 6d21h 10.10.41.159 master2 <none> <none>
calico-node-dgqwx 1/1 Running 0 4d20h 10.10.41.153 master1 <none> <none>
calico-node-fhtvz 1/1 Running 0 6d21h 10.10.41.161 node2 <none> <none>
calico-node-mhd7w 1/1 Running 0 4d21h 10.10.41.155 node1 <none> <none>
coredns-8b5d5b85f-fjq72 1/1 Running 0 45m 192.168.135.11 node3 <none> <none>
coredns-8b5d5b85f-hgg94 1/1 Running 0 45m 192.168.166.136 node1 <none> <none>
etcd-master1 1/1 Running 0 4d20h 10.10.41.153 master1 <none> <none>
etcd-master2 1/1 Running 0 6d21h 10.10.41.159 master2 <none> <none>
kube-apiserver-master1 1/1 Running 0 4d20h 10.10.41.153 master1 <none> <none>
kube-apiserver-master2 1/1 Running 0 6d21h 10.10.41.159 master2 <none> <none>
kube-controller-manager-master1 1/1 Running 0 4d20h 10.10.41.153 master1 <none> <none>
kube-controller-manager-master2 1/1 Running 2 6d21h 10.10.41.159 master2 <none> <none>
kube-proxy-66nxz 1/1 Running 0 6d21h 10.10.41.159 master2 <none> <none>
kube-proxy-fnrrz 1/1 Running 0 4d20h 10.10.41.153 master1 <none> <none>
kube-proxy-lq5xp 1/1 Running 0 6d21h 10.10.41.161 node2 <none> <none>
kube-proxy-vxhwm 1/1 Running 0 4d21h 10.10.41.155 node1 <none> <none>
kube-proxy-zgwzq 1/1 Running 0 4d20h 10.10.41.165 node3 <none> <none>
kube-scheduler-master1 1/1 Running 0 4d20h 10.10.41.153 master1 <none> <none>
kube-scheduler-master2 1/1 Running 1 6d21h 10.10.41.159 master2 <none> <none>
When I try pinging the pod with IP 192.168.104.8 on node2 from node3, it fails with 100% packet loss.
master1#master1:~/cluster$ sudo kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
contentms-cb475f569-t54c2 1/1 Running 0 6d21h 192.168.104.1 node2 <none> <none>
nav-6f67d5bd79-9khmm 1/1 Running 0 6d8h 192.168.104.8 node2 <none> <none>
react 1/1 Running 0 7m24s 192.168.135.12 node3 <none> <none>
statistics-5668cd7dd-thqdf 1/1 Running 0 6d15h 192.168.104.4 node2 <none> <none>

It was a routing issue.
I was using two IPs for each node, on eth0 and eth1.
The routes were using the eth1 IP in place of the eth0 IP.
I disabled the eth1 IPs and everything worked.
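For reference, if disabling the second interface is not an option, Calico can also be pinned to a specific interface via its IP autodetection setting; a minimal sketch, assuming eth0 is the interface carrying the 10.10.41.x addresses shown above:
# Tell calico-node to detect node addresses from eth0 only (adjust the interface name to your hosts)
kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=eth0
# Wait for the calico-node pods to roll out with the new setting
kubectl rollout status daemonset/calico-node -n kube-system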

Related

Set pod IP using Minikube (k8s) + Calico Plugin

I'm doing some tests with minikube + calico plugin to see if I can set the pod IP on pod creation.
I've opened the minikube proxy and sent:
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "pod2",
        "annotations": {
            "cni.projectcalico.org/ipAddrs": "[\"172.18.0.50\"]"
        }
    },
    "spec": {
        "containers": [
            {
                "name": "hello-node",
                "image": "k8s.gcr.io/echoserver:1.4",
                "ports": [
                    {
                        "containerPort": 8081
                    }
                ]
            }
        ]
    }
}
But it seems the annotation was ignored. The pod was created using another IP:
NAME READY STATUS RESTARTS AGE IP NODE
pod1 1/1 Running 0 36s 172.18.0.8 minikube
pod2 1/1 Running 0 6s 172.18.0.9 minikube
I've checked the 10-calico.conflist file; the plugin is set to use calico-ipam.
What am I missing?
Edit:
Calico version:
Client Version: v3.14.0
Git commit: c97876ba
Cluster Version: v3.14.0
Cluster Type: k8s,kdd,bgp,kubeadm
Output of kubectl get po --all-namespaces -o wide:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default hello-minikube-64b64df8c9-fd5vz 1/1 Running 1 70m 172.18.0.7 minikube <none> <none>
default pod1 1/1 Running 0 67m 172.18.0.8 minikube <none> <none>
default pod2 1/1 Running 0 66m 172.18.0.9 minikube <none> <none>
kube-system calico-kube-controllers-789f6df884-msvzm 1/1 Running 2 152m 172.18.0.6 minikube <none> <none>
kube-system calico-node-5l2vm 1/1 Running 1 152m 172.17.0.2 minikube <none> <none>
kube-system calicoctl 1/1 Running 1 121m 172.17.0.2 minikube <none> <none>
kube-system coredns-66bff467f8-8hmpv 1/1 Running 3 28h 172.18.0.5 minikube <none> <none>
kube-system coredns-66bff467f8-xwrpj 1/1 Running 3 28h 172.18.0.3 minikube <none> <none>
kube-system etcd-minikube 1/1 Running 2 27h 172.17.0.2 minikube <none> <none>
kube-system kube-apiserver-minikube 1/1 Running 2 27h 172.17.0.2 minikube <none> <none>
kube-system kube-controller-manager-minikube 1/1 Running 3 28h 172.17.0.2 minikube <none> <none>
kube-system kube-proxy-wq29b 1/1 Running 3 28h 172.17.0.2 minikube <none> <none>
kube-system kube-scheduler-minikube 1/1 Running 3 28h 172.17.0.2 minikube <none> <none>
kube-system storage-provisioner 1/1 Running 5 28h 172.17.0.2 minikube <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-84bfdf55ff-kj4t2 1/1 Running 3 28h 172.18.0.4 minikube <none> <none>
kubernetes-dashboard kubernetes-dashboard-696dbcc666-qxc78 1/1 Running 5 28h 172.18.0.2 minikube <none> <none>
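One thing I would double-check (an assumption on my side, since the IP pools are not shown above) is whether 172.18.0.50 actually falls inside a configured Calico IP pool; as far as I know, calico-ipam only honours the ipAddrs annotation for addresses it manages. The pools can be listed through the calicoctl pod visible in the output, for example:
# List the Calico IP pools and their CIDRs; 172.18.0.50 should be covered by one of them
# (the /calicoctl binary path assumes the standard "calicoctl as a pod" manifest)
kubectl exec -n kube-system calicoctl -- /calicoctl get ippool -o wide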

Kubernetes bind address

I have previously set up Kubernetes clusters in dev environments, using private servers, without any issues.
Now I have created a new cluster in a datacenter (Hetzner).
I have been trying to get everything working for several days now, reinstalling the servers many times and facing the same issues every time.
Most of my services seem to have network issues; for example, the dashboard, dockerreg UI, ... cannot access the resources loaded by their web interfaces. Even pushing a container to the private dockerreg starts but stops and times out after a few seconds.
If I configure any of the problematic services to use a node port, they work fine.
So this is probably an issue with kube-proxy.
All of my servers (3x master node and 2x worker node) have a public and a private IP address. When I list the pods, all of those running on the host IP use the external IP instead of the internal IP.
How can I bind these to use the internal IP only?
kubectl get pods -o wide -n kube-system
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-65b8787765-zj728 1/1 Running 2 12h 192.168.57.14 k8s-master-001 <none> <none>
calico-node-cxn2p 1/1 Running 1 12h <external ip> k8s-master-003 <none> <none>
calico-node-k9g7n 1/1 Running 1 12h <external ip> k8s-master-002 <none> <none>
calico-node-mt8r7 1/1 Running 2 12h <external ip> k8s-master-001 <none> <none>
calico-node-pww9q 1/1 Running 1 12h <external ip> k8s-worker-002 <none> <none>
calico-node-wlg8g 1/1 Running 2 12h <external ip> k8s-worker-001 <none> <none>
coredns-5c98db65d4-lrzj8 1/1 Running 0 12h 192.168.20.1 k8s-worker-002 <none> <none>
coredns-5c98db65d4-s6tzv 1/1 Running 1 12h 192.168.102.17 k8s-worker-001 <none> <none>
etcd-k8s-master-001 1/1 Running 2 12h <external ip> k8s-master-001 <none> <none>
etcd-k8s-master-002 1/1 Running 1 12h <external ip> k8s-master-002 <none> <none>
etcd-k8s-master-003 1/1 Running 1 12h <external ip> k8s-master-003 <none> <none>
kube-apiserver-k8s-master-001 1/1 Running 2 12h <external ip> k8s-master-001 <none> <none>
kube-apiserver-k8s-master-002 1/1 Running 2 12h <external ip> k8s-master-002 <none> <none>
kube-apiserver-k8s-master-003 1/1 Running 1 12h <external ip> k8s-master-003 <none> <none>
kube-controller-manager-k8s-master-001 1/1 Running 3 12h <external ip> k8s-master-001 <none> <none>
kube-controller-manager-k8s-master-002 1/1 Running 1 12h <external ip> k8s-master-002 <none> <none>
kube-controller-manager-k8s-master-003 1/1 Running 1 12h <external ip> k8s-master-003 <none> <none>
kube-proxy-mlsnp 1/1 Running 1 12h <external ip> k8s-master-003 <none> <none>
kube-proxy-mzck9 1/1 Running 2 12h <external ip> k8s-worker-001 <none> <none>
kube-proxy-p7vfz 1/1 Running 1 12h <external ip> k8s-master-002 <none> <none>
kube-proxy-s55fr 1/1 Running 2 12h <external ip> k8s-master-001 <none> <none>
kube-proxy-tz6zn 1/1 Running 1 12h <external ip> k8s-worker-002 <none> <none>
kube-scheduler-k8s-master-001 1/1 Running 3 12h <external ip> k8s-master-001 <none> <none>
kube-scheduler-k8s-master-002 1/1 Running 1 12h <external ip> k8s-master-002 <none> <none>
kube-scheduler-k8s-master-003 1/1 Running 1 12h <external ip> k8s-master-003 <none> <none>
traefik-ingress-controller-gxthm 1/1 Running 1 35m 192.168.57.15 k8s-master-001 <none> <none>
traefik-ingress-controller-rdv8j 1/1 Running 0 35m 192.168.160.133 k8s-master-003 <none> <none>
traefik-ingress-controller-w4t4t 1/1 Running 0 35m 192.168.1.133 k8s-master-002 <none> <none>
I'm running Kubernetes 1.15.3, using CRI-O and Calico.
All servers are on the 10.0.0.0/24 subnet.
I expect the pods running on the node IP to use the internal IP instead of the external IP.
--- Edit 16/09/2019
The cluster is initialized using the following command
sudo kubeadm init --config=kubeadm-config.yaml --upload-certs
My kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "10.0.0.2"
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "10.0.0.200:6443"
apiServer:
  certSANs:
  - "k8s.deb-ict.com"
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "192.168.0.0/16"
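For what it's worth, the address a node is listed with comes from the kubelet's --node-ip flag, so one option (a sketch, assuming the same 10.0.0.x private addresses as above) is to pass it through the nodeRegistration section of the same kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "10.0.0.2"
  bindPort: 6443
nodeRegistration:
  kubeletExtraArgs:
    # register the kubelet with the node's private address
    # (10.0.0.2 is this master's address; every node needs its own value)
    node-ip: "10.0.0.2"
Worker and additional master nodes would need the same node-ip setting in their JoinConfiguration (or via KUBELET_EXTRA_ARGS in /etc/default/kubelet), since it is a per-node flag.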

Cannot access the Kubernetes Dashboard

I have a K8s cluster (1 master, 2 workers) running on 3 vagrant VMs on my computer.
I've installed the Kubernetes dashboard, as explained here.
All my pods are running correctly:
kubectl get pods -o wide --namespace=kube-system
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-fb8b8dccf-n5cpm 1/1 Running 1 61m 10.244.0.4 kmaster.example.com <none> <none>
coredns-fb8b8dccf-qwcr4 1/1 Running 1 61m 10.244.0.5 kmaster.example.com <none> <none>
etcd-kmaster.example.com 1/1 Running 1 60m 172.42.42.100 kmaster.example.com <none> <none>
kube-apiserver-kmaster.example.com 1/1 Running 1 60m 172.42.42.100 kmaster.example.com <none> <none>
kube-controller-manager-kmaster.example.com 1/1 Running 1 60m 172.42.42.100 kmaster.example.com <none> <none>
kube-flannel-ds-amd64-hcjsm 1/1 Running 1 61m 172.42.42.100 kmaster.example.com <none> <none>
kube-flannel-ds-amd64-klv4f 1/1 Running 3 56m 172.42.42.102 kworker2.example.com <none> <none>
kube-flannel-ds-amd64-lmpnd 1/1 Running 2 59m 172.42.42.101 kworker1.example.com <none> <none>
kube-proxy-86qsw 1/1 Running 1 59m 10.0.2.15 kworker1.example.com <none> <none>
kube-proxy-dp29s 1/1 Running 1 61m 172.42.42.100 kmaster.example.com <none> <none>
kube-proxy-gqqq9 1/1 Running 1 56m 10.0.2.15 kworker2.example.com <none> <none>
kube-scheduler-kmaster.example.com 1/1 Running 1 60m 172.42.42.100 kmaster.example.com <none> <none>
kubernetes-dashboard-5f7b999d65-zqbbz 1/1 Running 1 28m 10.244.1.3 kworker1.example.com <none> <none>
As you can see the dashboard is in "Running" status.
I also ran kubectl proxy and it's serving on 127.0.0.1:8001.
But when I try to open http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ I have the error:
This site can’t be reached
127.0.0.1 refused to connect.
ERR_CONNECTION_REFUSED
I'm trying to open the dashboard directly on my computer, not inside the Vagrant VM. Could that be the problem? If yes, how do I solve it? I'm able to ping my VM from my computer without any issue.
Thanks for helping me.
EDIT
Here is the output of kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 96m
kubernetes-dashboard NodePort 10.109.230.83 <none> 443:30089/TCP 63m
By default, the Kubernetes dashboard is only reachable from inside the cluster. You can check it with the get svc command:
kubectl get svc -n kube-system
The default type of that service is ClusterIP; to reach it from outside of the cluster you have to change it to NodePort.
To change it, follow this doc.
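Judging from the svc output in the question, the dashboard is already exposed as a NodePort service on 30089, so (assuming the host can reach the VM network, which the working ping suggests) it should be reachable from the host directly at the node address, for example:
# NodePort access from the host machine; 30089 and 172.42.42.100 are taken from the question
https://172.42.42.100:30089/
As a side note, kubectl proxy listens on 127.0.0.1 of the machine it runs on, so running it inside a VM and then opening 127.0.0.1:8001 on the host cannot work unless the proxy is started with an --address the host can reach.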

Helm error: dial tcp *:10250: i/o timeout

Created a local cluster using Vagrant + Ansible + VirtualBox. Manually deploying works fine, but when using Helm:
:~$helm install stable/nginx-ingress --name nginx-ingress-controller --set rbac.create=true
Error: forwarding ports: error upgrading connection: error dialing backend: dial tcp 10.0.52.15:10250: i/o timeout
Kubernetes cluster info:
:~$kubectl get nodes,po,deploy,svc,ingress --all-namespaces -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node/ubuntu18-kube-master Ready master 32m v1.13.3 10.0.51.15 <none> Ubuntu 18.04.1 LTS 4.15.0-43-generic docker://18.6.1
node/ubuntu18-kube-node-1 Ready <none> 31m v1.13.3 10.0.52.15 <none> Ubuntu 18.04.1 LTS 4.15.0-43-generic docker://18.6.1
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default pod/nginx-server 1/1 Running 0 40s 10.244.1.5 ubuntu18-kube-node-1 <none> <none>
default pod/nginx-server-b8d78876d-cgbjt 1/1 Running 0 4m25s 10.244.1.4 ubuntu18-kube-node-1 <none> <none>
kube-system pod/coredns-86c58d9df4-5rsw2 1/1 Running 0 31m 10.244.0.2 ubuntu18-kube-master <none> <none>
kube-system pod/coredns-86c58d9df4-lfbvd 1/1 Running 0 31m 10.244.0.3 ubuntu18-kube-master <none> <none>
kube-system pod/etcd-ubuntu18-kube-master 1/1 Running 0 31m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/kube-apiserver-ubuntu18-kube-master 1/1 Running 0 30m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/kube-controller-manager-ubuntu18-kube-master 1/1 Running 0 30m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/kube-flannel-ds-amd64-jffqn 1/1 Running 0 31m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/kube-flannel-ds-amd64-vc6p2 1/1 Running 0 31m 10.0.52.15 ubuntu18-kube-node-1 <none> <none>
kube-system pod/kube-proxy-fbgmf 1/1 Running 0 31m 10.0.52.15 ubuntu18-kube-node-1 <none> <none>
kube-system pod/kube-proxy-jhs6b 1/1 Running 0 31m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/kube-scheduler-ubuntu18-kube-master 1/1 Running 0 31m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/tiller-deploy-69ffbf64bc-x8lkc 1/1 Running 0 24m 10.244.1.2 ubuntu18-kube-node-1 <none> <none>
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
default deployment.extensions/nginx-server 1/1 1 1 4m25s nginx-server nginx run=nginx-server
kube-system deployment.extensions/coredns 2/2 2 2 32m coredns k8s.gcr.io/coredns:1.2.6 k8s-app=kube-dns
kube-system deployment.extensions/tiller-deploy 1/1 1 1 24m tiller gcr.io/kubernetes-helm/tiller:v2.12.3 app=helm,name=tiller
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 32m <none>
default service/nginx-server NodePort 10.99.84.201 <none> 80:31811/TCP 12s run=nginx-server
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 32m k8s-app=kube-dns
kube-system service/tiller-deploy ClusterIP 10.99.4.74 <none> 44134/TCP 24m app=helm,name=tiller
Vagrantfile:
...
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  $hosts.each_with_index do |(hostname, parameters), index|
    ip_address = "#{$subnet}.#{$ip_offset + index}"
    config.vm.define vm_name = hostname do |vm_config|
      vm_config.vm.hostname = hostname
      vm_config.vm.box = box
      vm_config.vm.network "private_network", ip: ip_address
      vm_config.vm.provider :virtualbox do |vb|
        vb.gui = false
        vb.name = hostname
        vb.memory = parameters[:memory]
        vb.cpus = parameters[:cpus]
        vb.customize ['modifyvm', :id, '--macaddress1', "08002700005#{index}"]
        vb.customize ['modifyvm', :id, '--natnet1', "10.0.5#{index}.0/24"]
      end
    end
  end
end
Workaround for a VirtualBox issue: set a different MAC address and internal IP per VM.
I would be interested in a solution that can be placed in one of the configuration files (Vagrantfile, Ansible roles). Any ideas on the problem?
Error: forwarding ports: error upgrading connection: error dialing backend: dial tcp 10.0.52.15:10250: i/o timeout
You're getting bitten by a very common Kubernetes-on-Vagrant bug: the kubelet believes its IP address is the one on eth0, which is the NAT interface in Vagrant, instead of the address on the private_network interface (which I hope you have) in your Vagrantfile. Since operations like kubectl exec, kubectl logs, and Helm's port-forwarding require dialing the kubelet directly at that advertised address, they fail in exactly the way you see.
The solution is to force kubelet to bind to the private network interface, or I guess you could switch your Vagrantfile to use the bridge network, if that's an option for you -- just so long as the interface isn't the NAT one.
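A minimal sketch of the first option, assuming a kubeadm-based install that reads /etc/default/kubelet and a typical Vagrant private_network interface named enp0s8 (both assumptions; check your own VMs), run on each node during provisioning:
# Pick the address on the private network interface (enp0s8 is an assumption; verify with `ip addr`)
NODE_IP=$(ip -4 -o addr show enp0s8 | awk '{print $4}' | cut -d/ -f1)
# Make the kubelet advertise that address instead of the NAT interface's
echo "KUBELET_EXTRA_ARGS=--node-ip=${NODE_IP}" | sudo tee /etc/default/kubelet
sudo systemctl restart kubelet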
The other thing to check is how you manage access to the TLS-secured kubelet API in the cluster: make sure port 10250 is reachable between nodes.
Here is an example of how I fixed it when trying to exec into a pod running on a node (an AWS instance in my case):
resource "aws_security_group" "My_VPC_Security_Group" {
...
ingress {
description = "TLS from VPC"
from_port = 10250
to_port = 10250
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
For more details you can visit [1]: http://carnal0wnage.attackresearch.com/2019/01/kubernetes-unauth-kublet-api-10250.html

Kubernetes services sometimes not responding

My cluster contains 1 master with 3 worker nodes, in which 1 deployment with 2 replicas and 1 service are created. When I try to access the service via the command curl <ClusterIP>:<port> from either of the 2 worker nodes, sometimes it returns the Nginx welcome page, but sometimes it gets stuck with connection refused or a timeout.
I checked that the Kubernetes service, pods and endpoints are fine, but I have no clue what is going on. Please advise.
vagrant#k8s-master:~/_projects/tmp1$ sudo kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready master 23d v1.12.2 192.168.205.10 <none> Ubuntu 16.04.4 LTS 4.4.0-139-generic docker://17.3.2
k8s-worker1 Ready <none> 23d v1.12.2 192.168.205.11 <none> Ubuntu 16.04.4 LTS 4.4.0-139-generic docker://17.3.2
k8s-worker2 Ready <none> 23d v1.12.2 192.168.205.12 <none> Ubuntu 16.04.4 LTS 4.4.0-139-generic docker://17.3.2
vagrant#k8s-master:~/_projects/tmp1$ sudo kubectl get pod -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
default my-nginx-756f645cd7-pfdck 1/1 Running 0 5m23s 10.244.2.39 k8s-worker2 <none>
default my-nginx-756f645cd7-xpbnp 1/1 Running 0 5m23s 10.244.1.40 k8s-worker1 <none>
kube-system coredns-576cbf47c7-ljx68 1/1 Running 18 23d 10.244.0.38 k8s-master <none>
kube-system coredns-576cbf47c7-nwlph 1/1 Running 18 23d 10.244.0.39 k8s-master <none>
kube-system etcd-k8s-master 1/1 Running 18 23d 192.168.205.10 k8s-master <none>
kube-system kube-apiserver-k8s-master 1/1 Running 18 23d 192.168.205.10 k8s-master <none>
kube-system kube-controller-manager-k8s-master 1/1 Running 18 23d 192.168.205.10 k8s-master <none>
kube-system kube-flannel-ds-54xnb 1/1 Running 2 2d5h 192.168.205.12 k8s-worker2 <none>
kube-system kube-flannel-ds-9q295 1/1 Running 2 2d5h 192.168.205.11 k8s-worker1 <none>
kube-system kube-flannel-ds-q25xw 1/1 Running 2 2d5h 192.168.205.10 k8s-master <none>
kube-system kube-proxy-gkpwp 1/1 Running 15 23d 192.168.205.11 k8s-worker1 <none>
kube-system kube-proxy-gncjh 1/1 Running 18 23d 192.168.205.10 k8s-master <none>
kube-system kube-proxy-m4jfm 1/1 Running 15 23d 192.168.205.12 k8s-worker2 <none>
kube-system kube-scheduler-k8s-master 1/1 Running 18 23d 192.168.205.10 k8s-master <none>
kube-system kubernetes-dashboard-77fd78f978-4r62r 1/1 Running 15 23d 10.244.1.38 k8s-worker1 <none>
vagrant#k8s-master:~/_projects/tmp1$ sudo kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23d <none>
my-nginx ClusterIP 10.98.9.75 <none> 80/TCP 75s run=my-nginx
vagrant#k8s-master:~/_projects/tmp1$ sudo kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 192.168.205.10:6443 23d
my-nginx 10.244.1.40:80,10.244.2.39:80 101s
This sounds odd but it could be that one of your pods is serving traffic and the other is not. You can try shelling into the pods:
$ kubectl exec -it my-nginx-756f645cd7-rs2w2 sh
$ kubectl exec -it my-nginx-756f645cd7-vwzrl sh
You can see if they are listening on port 80:
$ curl localhost:80
You can also see if your service has the two endpoints 10.244.2.28:80 and 10.244.1.29:80.
$ kubectl get ep my-nginx
$ kubectl get ep my-nginx -o=yaml
Also, try to connect to each one of your endpoints from a node:
$ curl 10.244.2.28:80
$ curl 10.244.2.29:80
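Another cause worth ruling out in Vagrant-based clusters (an assumption here, since the flannel configuration is not shown) is flannel sending traffic over the NAT interface instead of the 192.168.205.x host-only network; flanneld accepts an --iface flag for this, added to the container args in the kube-flannel DaemonSet:
# excerpt from the kube-flannel DaemonSet container spec
# (enp0s8 is assumed to be the interface holding the 192.168.205.x address; adjust to your VMs)
args:
- --ip-masq
- --kube-subnet-mgr
- --iface=enp0s8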