how to get pod to pod ping on different nodes working? - kubernetes

I would like to be able to ping from one pod to another. That works if the pods are on the same host. It does not work if the pods are on different hosts.
$ kubectl get pod,svc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
pod/bbtest-5949c4d8c5-259wx 1/1 Running 1 2d 192.168.114.158 gordon-dm1.sdsc.edu <none>
pod/busybox-7cd98849ff-m75qv 0/1 Running 0 3m 192.168.78.30 gordon-dm3.sdsc.edu <none>
pod/nginx-64f497f8fd-j4qml 1/1 Running 0 20m 192.168.114.163 gordon-dm1.sdsc.edu <none>
pod/nginx-64f497f8fd-tw4vb 1/1 Running 0 22m 192.168.209.32 gordon-dm4.sdsc.edu <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17d <none>
$ kubectl run busybox --rm -ti --image busybox /bin/sh
/ # ping 192.168.114.163
PING 192.168.114.163 (192.168.114.163): 56 data bytes
^C
--- 192.168.114.163 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
/ #
I set up flannel, but it doesn't make a difference. I also tried felixconfiguration, only to get an error: resource does not exist: FelixConfiguration(default)
Any help getting pod-to-pod communication to work?

Best practice is to use a Service, open the specific ports nginx needs to receive connections on, and address it by the service hostname.
Use curl -I <service-name>.<namespace> for testing.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
Result:
/ # curl -I my-nginx.default
HTTP/1.1 200 OK
Server: nginx/1.19.6
Date: Sun, 03 Jan 2021 17:44:26 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 15 Dec 2020 13:59:38 GMT
Connection: keep-alive
ETag: "5fd8c14a-264"
Accept-Ranges: bytes
P.S. I used kubectl run alpine --rm -ti --image alpine /bin/sh and apk add curl
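Since cross-node pod traffic depends entirely on the CNI plugin, it is also worth confirming that the network plugin's pods are healthy on every node before going further. A minimal check, assuming flannel was installed into kube-system (the namespace and pod names vary by install method):
kubectl get pods -n kube-system -o wide | grep -i flannel
kubectl logs -n kube-system <flannel-pod-name>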

Related

How to make My First ingress work on baremetal NodeIP?

I have a deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
  namespace: dev
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
Make service:
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
  namespace: dev
  labels:
    app: hello
spec:
  type: ClusterIP
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
Check it:
---
apiVersion: v1
kind: Service
metadata:
  name: hello-node-service
  namespace: dev
spec:
  type: NodePort
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
$ kubectl get svc -n dev
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node-service NodePort 10.233.3.50 <none> 80:31263/TCP 9h
hello-service ClusterIP 10.233.45.159 <none> 80/TCP 44h
$ curl -I http://cluster.local:31263
HTTP/1.1 200 OK
Date: Sat, 11 Sep 2021 07:31:28 GMT
Content-Length: 66
Content-Type: text/plain; charset=utf-8
I have verified that the service is working.
Install ingress with NodeIP (https://kubernetes.github.io/ingress-nginx/deploy/):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/baremetal/deploy.yaml
$ kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --watch
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-7gsft 0/1 Completed 0 10h
ingress-nginx-admission-patch-qj57b 0/1 Completed 1 10h
ingress-nginx-controller-8cf5559f8-mh6fr 1/1 Running 0 10h
$ kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.233.52.118 <none> 80:30377/TCP,443:31682/TCP 10h
ingress-nginx-controller-admission ClusterIP 10.233.51.175 <none> 443/TCP 10h
Check it:
$ curl -I http://cluster.local:30377/healthz
HTTP/1.1 200 OK
Date: Sat, 11 Sep 2021 07:39:04 GMT
Content-Type: text/html
Content-Length: 0
Connection: keep-alive
Make ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-hello
  namespace: dev
spec:
  rules:
  - host: cluster.local
    http:
      paths:
      - backend:
          service:
            name: hello-service
            port:
              number: 80
        path: "/hello"
        pathType: Prefix
Check It:
$ curl -I http://cluster.local:30377/hello
HTTP/1.1 404 Not Found
Date: Sat, 11 Sep 2021 07:40:43 GMT
Content-Type: text/html
Content-Length: 146
Connection: keep-alive
It doesn't work. I spent a few days on this and tried adding an ExternalIP to the ingress controller.
Can someone with experience setting up ingress please tell me what I am doing wrong?
=(((
INFO about cluster:
$ kubectl get ingress -n dev
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-hello <none> cluster.local 80 10h
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kuber-ingress-01 Ready worker 10d v1.21.3
kuber-master1 Ready control-plane,master 10d v1.21.3
kuber-master2 Ready control-plane,master 10d v1.21.3
kuber-master3 Ready control-plane,master 10d v1.21.3
kuber-node-01 Ready worker 10d v1.21.3
kuber-node-02 Ready worker 10d v1.21.3
kuber-node-03 Ready worker 10d v1.21.3
Inventory:
kuber-master1 10.0.57.31
kuber-master2 10.0.57.32
kuber-master3 10.0.57.33
kuber-node-01 10.0.57.34
kuber-node-02 10.0.57.35
kuber-node-03 10.0.57.36
kuber-ingress-01 10.0.57.30
$ ping cluster.local
PING cluster.local (10.0.57.30) 56(84) bytes of data.
64 bytes from ingress.example.com (10.0.57.30): icmp_seq=1 ttl=62 time=0.603 ms
The solution is to add the following annotations to the Ingress; with them in place, the ingress controller starts to see the DNS hostnames.
annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/ssl-redirect: "false"
  nginx.ingress.kubernetes.io/use-regex: "true"
  nginx.ingress.kubernetes.io/rewrite-target: /$1
Also, for convenience, I changed the path to a regular expression:
- path: /v1(/|$)(.*)
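Put together, the annotated Ingress might look like the sketch below. Note that with the two capture groups in this path, $1 is (/|$) and $2 is the remainder of the path, so adjust rewrite-target to whichever group you intend to forward:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-hello
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: cluster.local
    http:
      paths:
      - path: /v1(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: hello-service
            port:
              number: 80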

Kubernetes: Not able to communicate within two services (different pod, same namespace)

I am not able to communicate between two services.
post-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-data-deployment
  labels:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-web-selector
      tier: backend
  template:
    metadata:
      labels:
        app: python-web-selector
        tier: backend
    spec:
      containers:
      - name: python-web-pod
        image: sakshiarora2012/python-backend:v10
        ports:
        - containerPort: 5000
post-deployment2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-data-deployment2
  labels:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-web-selector2
      tier: backend
  template:
    metadata:
      labels:
        app: python-web-selector2
        tier: backend
    spec:
      containers:
      - name: python-web-pod2
        image: sakshiarora2012/python-backend:v8
        ports:
        - containerPort: 5000
post-service.yml
apiVersion: v1
kind: Service
metadata:
  name: python-data-service
spec:
  selector:
    app: python-web-selector
    tier: backend
  ports:
  - port: 5000
    nodePort: 30400
  type: NodePort
post-service2.yml
apiVersion: v1
kind: Service
metadata:
  name: python-data-service2
spec:
  selector:
    app: python-web-selector2
    tier: backend
  ports:
  - port: 5000
  type: ClusterIP
When I try to ping from one container to another, it does not work:
root@python-data-deployment-7bd65dc685-htxmj:/project# ping python-data-service.default.svc.cluster.local
PING python-data-service.default.svc.cluster.local (10.107.11.236) 56(84) bytes of data.
^C
--- python-data-service.default.svc.cluster.local ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 139ms
The DNS entries look correct:
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl exec -i -t dnsutils -- nslookup python-data-service
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: python-data-service.default.svc.cluster.local
Address: 10.107.11.236
sakshiarora@Sakshis-MacBook-Pro Student_Registration %
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl exec -i -t dnsutils -- nslookup python-data-service2
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: python-data-service2.default.svc.cluster.local
Address: 10.103.97.40
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dnsutils 1/1 Running 0 5m54s 172.17.0.9 minikube <none> <none>
python-data-deployment-7bd65dc685-htxmj 1/1 Running 0 47m 172.17.0.6 minikube <none> <none>
python-data-deployment2-764744b97d-mc9gm 1/1 Running 0 43m 172.17.0.8 minikube <none> <none>
python-db-deployment-d54f6b657-rfs2b 1/1 Running 0 44h 172.17.0.7 minikube <none> <none>
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl describe svc python-data-service
Name: python-data-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"python-data-service","namespace":"default"},"spec":{"ports":[{"no...
Selector: app=python-web-selector,tier=backend
Type: NodePort
IP: 10.107.11.236
Port: <unset> 5000/TCP
TargetPort: 5000/TCP
NodePort: <unset> 30400/TCP
Endpoints: 172.17.0.6:5000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl describe svc python-data-service2
Name: python-data-service2
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"python-data-service2","namespace":"default"},"spec":{"ports":[{"p...
Selector: app=python-web-selector2,tier=backend
Type: ClusterIP
IP: 10.103.97.40
Port: <unset> 5000/TCP
TargetPort: 5000/TCP
Endpoints: 172.17.0.8:5000
Session Affinity: None
Events: <none>
sakshiarora@Sakshis-MacBook-Pro Student_Registration %
I think it would work if the DNS entry resolved to an IP in the 172.17.0.x range, but I am not sure why it is not showing that. Any pointers?
If you want to access python-data-service from outside the cluster using the NodePort, and you are using minikube, you should be able to do so with curl $(minikube service python-data-service --url) from anywhere outside the cluster, i.e. from your own system.
If you want two microservices to communicate within the cluster, simply use a ClusterIP-type service instead of a NodePort type, as sketched below.
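For example, a ClusterIP version of the first service could look like this sketch, based on the manifests above:
apiVersion: v1
kind: Service
metadata:
  name: python-data-service
spec:
  type: ClusterIP
  selector:
    app: python-web-selector
    tier: backend
  ports:
  - port: 5000
    targetPort: 5000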
To identify whether it's a service issue or a pod issue, use the pod IP directly in the curl command. From the output of kubectl describe svc python-data-service, the pod IP behind python-data-service is 172.17.0.6, so try curl 172.17.0.6:5000/getdata
In order to start debugging your services I would suggest the following steps:
Check that your service 1 is accessible as a Pod:
kubectl run test1 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - http://172.17.0.6:5000
Check that your service 2 is accessible as a Pod:
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - 172.17.0.8:5000
Then, check that your service 1 is accessible as a Service using the corresponding cluster IP and then DNS Name:
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - 10.107.11.236:5000
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - http://python-data-service:5000
Then, check that your service 2 is accessible as a Service using the corresponding cluster IP and then DNS Name:
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - 10.103.97.40:5000
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - http://python-data-service2:5000
Then, if needed, check that your NodePort service is accessible through the node port (you need the IP address of the node where the service is exposed; in minikube, for instance, this should work):
wget -O - http://192.168.99.101:30400
Regarding your Service manifests, as a good practice I recommend specifying both port and targetPort, as described at
https://canterafonseca.eu/kubernetes/certification/application/developer/cncf/k8s/cloud/native/computing/ckad/deployments/services/preparation-k8s-ckad-exam-part4-services.html#-services
On the other hand, if only one of the services needs to be exposed to the outside world, you can create a headless service for the other (see also my blog post above).
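For reference, a headless service is simply a Service with clusterIP set to None; DNS then returns the pod IPs directly instead of a virtual IP. A minimal sketch based on the manifests above (the name is illustrative):
apiVersion: v1
kind: Service
metadata:
  name: python-data-service2-headless
spec:
  clusterIP: None
  selector:
    app: python-web-selector2
    tier: backend
  ports:
  - port: 5000
    targetPort: 5000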
Ping doesn't work on a Service's ClusterIP address because it is a virtual address implemented by iptables rules that redirect packets to the endpoints (pods).
You should be able to ping a pod, but not a service.
Use curl or wget instead. For example:
wget -qO- POD_IP:80
wget -qO- http://your-service-name:port/yourpath
curl POD_IP:port_number
Are you able to connect to your pods at all? Try a port-forward to see if you can connect, then check connectivity between the two pods.
Finally, check whether there is a default-deny network policy set; you may have restrictions at the network level:
kubectl get networkpolicy -n <namespace>
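For reference, a default deny typically looks like the sketch below; if something like this exists in the namespace, all ingress traffic to pods is blocked unless another policy explicitly allows it:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}   # an empty selector matches every pod in the namespace
  policyTypes:
  - Ingress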
Look into the logs using kubectl logs PODNAME so that you know what's happening. At first sight, I think you need to expose the ports of both services: kubectl port-forward yourService PORT:PORT.

k3s on arch linux ARM worker service not responding

Current setup:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
cl01mtr01 Ready master 104m v1.18.2+k3s1 10.1.1.1 <none> Debian GNU/Linux 10 (buster) 4.19.0-9-amd64 containerd://1.3.3-k3s2
cl01wkr01 Ready <none> 9m20s v1.18.2+k3s1 10.1.1.101 <none> Arch Linux ARM 5.4.40-1-ARCH containerd://1.3.3-k3s2
Master installed with:
export INSTALL_K3S_VERSION="v1.18.2+k3s1"
curl -sSLf https://get.k3s.io | sh -s - server \
--write-kubeconfig-mode 644 \
--cluster-cidr 172.20.0.0/16 \
--service-cidr 172.21.0.0/16 \
--cluster-dns 172.21.0.10 \
--disable traefik
Worker installed with:
export INSTALL_K3S_VERSION="v1.18.2+k3s1"
curl -sSLf https://get.k3s.io | sh -s - agent \
--server https://10.1.1.1:6443 \
--token <token from master>
I also tried a Raspberry Pi as master running Arch Linux and Raspbian, and a rock pi 64 with Armbian.
I tried with k3s versions:
v1.17.4+k3s1
v1.17.5+k3s1
v1.18.2+k3s1
I also tested with docker and the --docker install option in k3s.
The nodes get discovered (as shown above), but I cannot access the service on my worker node(s) (Raspberry Pi 3 with Arch Linux ARM) via http://10.1.1.1:30001, although it can be accessed via kubectl exec.
I always get a connection timeout:
This site can’t be reached
10.1.1.1 took too long to respond.
When the pod runs on the master node, or if the worker is an amd64 node, it can be accessed via http://10.1.1.1:30001.
This is the resource I try to load and access:
apiVersion: v1
kind: Namespace
metadata:
  name: nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-default-configmap
  namespace: nginx
data:
  default.conf: |
    server {
      listen 80;
      listen [::]:80;

      #server_name localhost;

      location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
      }

      error_page 500 502 503 504 /50x.html;
      location = /50x.html {
        root /usr/share/nginx/html;
      }
    }
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: nginx
spec:
  ports:
  - name: http
    targetPort: 80
    port: 80
    nodePort: 30001
  - name: https
    targetPort: 443
    port: 443
    nodePort: 30002
  selector:
    app: nginx
  type: NodePort
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemonset
  namespace: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: NotIn
                values:
                - "true"
      containers:
      - name: nginx
        image: nginx:stable
        imagePullPolicy: Always
        env:
        - name: TZ
          value: "Europe/Brussels"
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        volumeMounts:
        - name: default-conf
          mountPath: /etc/nginx/conf.d/default.conf
          subPath: default.conf
          readOnly: true
      restartPolicy: Always
      volumes:
      - name: default-conf
        configMap:
          name: nginx-default-configmap
Some extra info:
> kubectl get all -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system pod/local-path-provisioner-6d59f47c7-d477m 1/1 Running 0 116m 172.20.0.4 cl01mtr01 <none> <none>
kube-system pod/metrics-server-7566d596c8-fbb7b 1/1 Running 0 116m 172.20.0.2 cl01mtr01 <none> <none>
kube-system pod/coredns-8655855d6-gnbsm 1/1 Running 0 116m 172.20.0.3 cl01mtr01 <none> <none>
nginx pod/nginx-daemonset-l4j7s 1/1 Running 0 52s 172.20.1.3 cl01wkr01 <none> <none>
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 172.21.0.1 <none> 443/TCP 116m <none>
kube-system service/kube-dns ClusterIP 172.21.0.10 <none> 53/UDP,53/TCP,9153/TCP 116m k8s-app=kube-dns
kube-system service/metrics-server ClusterIP 172.21.152.234 <none> 443/TCP 116m k8s-app=metrics-server
nginx service/nginx-service NodePort 172.21.14.185 <none> 80:30001/TCP,443:30002/TCP 52s app=nginx
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
nginx daemonset.apps/nginx-daemonset 1 1 1 1 1 <none> 52s nginx nginx:stable app=nginx
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
kube-system deployment.apps/local-path-provisioner 1/1 1 1 116m local-path-provisioner rancher/local-path-provisioner:v0.0.11 app=local-path-provisioner
kube-system deployment.apps/metrics-server 1/1 1 1 116m metrics-server rancher/metrics-server:v0.3.6 k8s-app=metrics-server
kube-system deployment.apps/coredns 1/1 1 1 116m coredns rancher/coredns-coredns:1.6.3 k8s-app=kube-dns
NAMESPACE NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
kube-system replicaset.apps/local-path-provisioner-6d59f47c7 1 1 1 116m local-path-provisioner rancher/local-path-provisioner:v0.0.11 app=local-path-provisioner,pod-template-hash=6d59f47c7
kube-system replicaset.apps/metrics-server-7566d596c8 1 1 1 116m metrics-server rancher/metrics-server:v0.3.6 k8s-app=metrics-server,pod-template-hash=7566d596c8
kube-system replicaset.apps/coredns-8655855d6 1 1 1 116m coredns rancher/coredns-coredns:1.6.3 k8s-app=kube-dns,pod-template-hash=8655855d6

How to access Kubernetes deployment

I have created Docker images and deployed them in a k8s cluster with a minimal number of machines: one master and one worker, both up and running and talking to each other on the same VLAN.
Please find below the pod and service details with their described status:
root@jenkins-linux-vm:/home/admin# kubectl describe services angular-service
Name: angular-service
Namespace: pre-release
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"angular-service","namespace":"pre-release"},"spec":{"ports":[{"no...
Selector: app=frontend-app
Type: NodePort
IP: 10.96.151.155
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 31000/TCP
Endpoints: 10.32.0.6:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
root@jenkins-linux-vm:/home/admin# kubectl get pods
NAME READY STATUS RESTARTS AGE
angular-deployment-7b8d45f48d-b59pv 1/1 Running 0 51m
root@jenkins-linux-vm:/home/admin# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
angular-service NodePort 10.96.151.155 <none> 80:31000/TCP 64m
root@jenkins-linux-vm:/home/admin# kubectl get pods --selector="app=frontend-app" --output=wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
angular-deployment-7b8d45f48d-b59pv 1/1 Running 0 52m 10.32.0.6 poc-worker2 <none> <none>
root@jenkins-linux-vm:/home/admin# kubectl describe pods angular-deployment-7b8d45f48d-b59pv
Name: angular-deployment-7b8d45f48d-b59pv
Namespace: pre-release
Priority: 0
Node: poc-worker2/10.0.0.6
Start Time: Tue, 21 Jan 2020 05:15:49 +0000
Labels: app=frontend-app
pod-template-hash=7b8d45f48d
Annotations: <none>
Status: Running
IP: 10.32.0.6
IPs:
IP: 10.32.0.6
Controlled By: ReplicaSet/angular-deployment-7b8d45f48d
Containers:
frontend-app:
Container ID: docker://751a9fb4a5e908fa1a02eb0460ab1659904362a727a028fdf72489df663a4f69
Image: frontend-app:future-master-fix-d1afa608
Image ID: docker://sha256:0099587db89de9ef999a7d1f087d4781e73c491b17e89392e92b08d2f935ad27
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 21 Jan 2020 05:15:54 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r67p7 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-r67p7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-r67p7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
Now the problem is that I'm not able to access my application via the node port; it doesn't work in a web browser either:
curl http://<public-node-ip>:<node-port>
curl http://10.0.0.6:31000
Dockerfile:
FROM node:latest as node
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build --prod
# stage 2
FROM nginx:alpine
COPY --from=node /app/dist/hello-angular /usr/share/nginx/html
root@jenkins-linux-vm:/home/admin# kubectl exec -it angular-deployment-7b8d45f48d-b59pv curl 10.96.151.155:80
curl: (7) Failed to connect to 10.96.151.155 port 80: Connection refused
command terminated with exit code 7
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl run busybox --image=busybox --restart=Never -it --rm --command -- /bin/sh -c "wget 10.96.208.252:80;cat index.html"
Connecting to 10.96.208.252:80 (10.96.208.252:80)
saving to 'index.html'
index.html 100% |********************************| 593 0:00:00 ETA
'index.html' saved
<!doctype html><html lang="en"><head><meta charset="utf-8"><title>AngularApp</title><base href="/"><meta name="viewport" content="width=device-width,initial-scale=1"><link rel="icon" type="image/x-icon" href="favicon.ico"><link href="styles.9c0ad738f18adc3d19ed.bundle.css" rel="stylesheet"/></head><body><app-root></app-root><script type="text/javascript" src="inline.720eace06148cc3e71aa.bundle.js"></script><script type="text/javascript" src="polyfills.f20484b2fa4642e0dca8.bundle.js"></script><script type="text/javascript" src="main.11bc84b3b98cd0d00106.bundle.js"></script></body></html>pod "busybox" deleted
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl run busybox --image=busybox --restart=Never -it --rm --command -- /bin/sh -c "wget 10.0.0.6:32331;cat index.html"
Connecting to 10.0.0.6:32331 (10.0.0.6:32331)
wget: can't connect to remote host (10.0.0.6): Connection refused
cat: can't open 'index.html': No such file or directory
pod "busybox" deleted
pod pre-release/busybox terminated (Error)
I am taking a pre-built Angular image from Docker Hub, with thanks to https://github.com/nheidloff/web-apps-kubernetes/tree/master/angular-app; we will use this image as the baseline below.
Create a deployment and a service using the YAMLs below.
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: angular-app
spec:
  replicas: 1
  selector:
    matchLabels:
      run: angular-app
  template:
    metadata:
      labels:
        run: angular-app
    spec:
      containers:
      - name: angular-app
        image: nheidloff/angular-app
        ports:
        - containerPort: 80
        - containerPort: 443
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: angular-app
  labels:
    run: angular-app
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: angular-app
Run the following on your cluster to create the resources:
$ kubectl create -f Deployment.yaml
$ kubectl create -f Service.yaml
This should result in the deployment and service configuration below:
$ kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/angular-app-694d97d56c-7m4x4 1/1 Running 0 8m23s 10.244.3.10 k8s-node-3 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/angular-app NodePort 10.96.150.136 <none> 80:32218/TCP,443:30740/TCP 8m23s run=angular-app
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/angular-app 1/1 1 1 8m23s angular-app nheidloff/angular-app run=angular-app
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/angular-app-694d97d56c 1 1 1 8m23s angular-app nheidloff/angular-app pod-template-hash=694d97d56c,run=angular-app
From the above we can see the pod is running on node-3, so identify the IP of node-3.
We can also see that the service has exposed node ports 32218/TCP and 30740/TCP:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-1 Ready master 8d v1.17.0 111.112.113.107 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
node-1 Ready <none> 8d v1.17.0 111.112.113.108 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
node-2 Ready <none> 8d v1.17.0 111.112.113.109 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
node-3 Ready <none> 8d v1.17.0 111.112.113.110 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
So we need to access the app via node-3:NodePort, i.e. http://111.112.113.110:32218 as the URL.
I have the rule below open at the cluster level to allow browser access to apps on the default NodePort range:
NOTE : Ingress IPv4 TCP 30000 - 32767 0.0.0.0/0
To open your app via a NodePort in the browser, first establish that no rules are blocking the default node-port range (ports 30000-32767) in the security rules or firewall on the cluster network.
For example, verify you have the security rule below open on the cluster network for the NodePort range to work in a browser:
Ingress IPv4 TCP 30000 - 32767 0.0.0.0/0
Once you have confirmed there is no security-group rule issue, I would take the approach below to debug what's wrong with port reachability at the node level: perform a basic test and check whether a stock nginx web server is reachable in the browser via a node port.
Steps:
Deploy an NGINX deployment using the nginx.yaml below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
Verify deployment is up and running
$ kubectl apply -f nginx.yaml
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/my-nginx-75897978cd-ptqv9 1/1 Running 0 32s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d11h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-nginx 1/1 1 1 33s
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-nginx-75897978cd 1 1 1 33s
Now create a service to expose the nginx deployment, using the example below:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    run: my-nginx
Verify the service is created and identify the NodePort assigned (since we did not set a fixed nodePort in service.yaml, one is allocated; below it is 32502):
$ kubectl apply -f service.yaml
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d11h
my-nginx NodePort 10.96.174.234 <none> 8080:32502/TCP 12s
In addition to the NodePort, identify the IP of your master node, i.e. 131.112.113.101 below:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-1 Ready master 4d11h v1.17.0 131.112.113.101 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
node-1 Ready <none> 4d11h v1.17.0 131.112.113.102 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
node-2 Ready <none> 4d11h v1.17.0 131.112.113.103 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
Now if you access the nginx application in your browser using the IP of your master node with the NodePort value, like <masternode>:<nodeport> (i.e. 131.112.113.101:32502), you should get the default nginx welcome page.
Note the containerPort used in nginx.yaml and the targetPort in service.yaml (i.e. 80); you should be able to map this onto your own app. Hope this helps you find the issue at your node/cluster level, if any.
I am not sure I understood what you are trying to do.
The command below opens a bash shell in the pod:
kubectl exec -it angular-deployment-7b8d45f48d-b59pv -- /bin/bash
You can connect to a pod that way, then try curl from inside it.
The service is defined as NodePort type and is using nodePort 31000, so try hitting the URL below in your browser:
http://HOSTNAME:31000
HOSTNAME can be the hostname of any of the cluster nodes.
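Or, equivalently, from a terminal outside the cluster (the node address is a placeholder):
curl -I http://<any-node-ip-or-hostname>:31000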

How can I expose a Statefulset with a load balancer?

I am currently trying to create a cluster of X pods, each with its own persistent volume. To do that I've created a StatefulSet with X replicas and a PersistentVolumeClaim template. This part is working.
The problem is that it seems to be impossible to expose those pods with a LoadBalancer in the same way as a deployment (because of the uniqueness of pods in a StatefulSet).
So far I've tried to expose it like a simple deployment, which is not working, and the only way I've found is to expose each pod one by one (I haven't tested it, but I saw it suggested), which is not that scalable...
I'm not running Kubernetes on any cloud provider platform, so please avoid cloud-provider-exclusive command lines.
The problem is that it seems to be impossible to expose those pods with a LoadBalancer in the same way as a deployment (because of the uniqueness of pods in a StatefulSet).
Why not? Here is my StatefulSet with the default Nginx:
$ k -n test get statefulset
NAME DESIRED CURRENT AGE
web 2 2 5d
$ k -n test get pods
web-0 1/1 Running 0 5d
web-1 1/1 Running 0 5d
Here is my Service of type LoadBalancer, which in the case of Minikube is in fact a NodePort:
$ k -n test get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx LoadBalancer 10.110.22.74 <pending> 80:32710/TCP 5d
Let's run a pod with curl and make some requests to the ClusterIP:
$ kubectl -n test run -i --tty tools --image=ellerbrock/alpine-bash-curl-ssl -- bash
bash-4.4$ curl 10.110.22.74 &> /dev/null
bash-4.4$ curl 10.110.22.74 &> /dev/null
bash-4.4$ curl 10.110.22.74 &> /dev/null
bash-4.4$ curl 10.110.22.74 &> /dev/null
Let's check out Nginx logs:
$ k -n test logs web-0
172.17.0.7 - - [18/Apr/2019:23:35:04 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.61.0"
172.17.0.7 - - [18/Apr/2019:23:35:05 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.61.0"
172.17.0.7 - - [18/Apr/2019:23:35:17 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.61.0"
$ k -n test logs web-1
172.17.0.7 - - [18/Apr/2019:23:35:15 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.61.0"
172.17.0.7 is my pod with curl:
NAME READY STATUS RESTARTS AGE IP NODE
tools-654cfc5cdc-8zttt 1/1 Running 1 5d 172.17.0.7 minikube
Actually, a ClusterIP is entirely sufficient for load balancing across a StatefulSet's pods, because you have a list of Endpoints:
$ k -n test get endpoints
NAME ENDPOINTS AGE
nginx 172.17.0.5:80,172.17.0.6:80 5d
YAMLs:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: web
  selector:
    app: nginx
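If, on the other hand, you need to address individual pods rather than load-balance across them, a headless service gives each StatefulSet pod a stable DNS name. A minimal sketch based on the StatefulSet above; note that the headless service's name must match serviceName in the StatefulSet, so in practice it would replace or be renamed relative to the LoadBalancer service above:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  clusterIP: None   # headless: no virtual IP; DNS returns per-pod records
  ports:
  - port: 80
    name: web
  selector:
    app: nginx
With this in place, each replica resolves as web-0.nginx.<namespace>.svc.cluster.local, web-1.nginx.<namespace>.svc.cluster.local, and so on.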