how to access a service on an rpi k8s cluster

I built a k8s cluster with the help of this guide: rpi+k8s. I got a basic nginx service up and running, and I can curl from the master node to a worker node to get the nginx welcome page content using:
k exec nginx-XXX-XXX -it -- curl localhost:80
I tried following the suggestions in these SO posts:
link 1
link 2
However, I still can't access the nginx service on the worker node from my local computer (Linux). I used NODE_IP:NODE_PORT. I also installed kubefwd and ran sudo kubefwd svc -n nginx-ns, but I don't see the expected output listing the port forwards. Any help would be appreciated. Thanks.
Output:
NAME                TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/nginx-svc   NodePort   10.101.19.230   <none>        80:32749/TCP   168m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   3/3     3            3           168m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-54485b444f   3         3         3       168m
And here is the yaml file:
kind: Namespace
apiVersion: v1
metadata:
  name: nginx-ns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx-ns
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19-alpine
          ports:
            - name: nginxport
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: nginx-ns
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      name: nginxport
      port: 80
      targetPort: 80
      nodePort: 32749
  type: NodePort
  selector:
    app: backend

You need to update your service nginx-svc, where you have defined two selectors.
Remove this part:
  selector:
    app: backend
Updated service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: nginx-ns
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      name: nginxport
      port: 80
      targetPort: 80
      nodePort: 32749
  type: NodePort
Then try port-forwarding:
kubectl port-forward -n nginx-ns svc/nginx-svc 8080:80
The template is:
kubectl port-forward -n <namespace> svc/<svc_name> <local_port>:<svc_port>
Then open 127.0.0.1:8080 or localhost:8080 in the browser.
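Once the duplicate selector is removed, the NodePort should also be reachable from your local machine, as long as you use a node's IP rather than the ClusterIP (a sketch, assuming your nodes are reachable from your LAN):
$ kubectl get nodes -o wide    # note the INTERNAL-IP of any node
$ curl http://<node-ip>:32749
With a NodePort service, kube-proxy opens the port on every node, so any node's IP will do.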

Related

unable to open nextcloud homepage on kubernetes

I tried a simple deployment of Nextcloud on a k8s cluster hosted with minikube on my local machine, for learning purposes. This deployment doesn't have any database/storage attached to it; I'm simply looking to open the Nextcloud homepage on my local machine. However, I am unable to do so. Here are my YAMLs.
Deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud-deployment
  labels:
    app: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nextcloud
  template:
    metadata:
      labels:
        app: nextcloud
    spec:
      containers:
        - name: nextcloud
          image: nextcloud:latest
          ports:
            - containerPort: 80
Service yaml:
apiVersion: v1
kind: Service
metadata:
  name: nextcloud-service
spec:
  selector:
    app: nextcloud
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 80
      nodePort: 30000
I can see that it is up and running; however, when I navigate to localhost:30000, the page is unavailable. How do I begin to diagnose the issue?
This was the output of kubectl get service:
NAME                TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes          ClusterIP      10.96.0.1      <none>        443/TCP          3d5h
nextcloud-service   LoadBalancer   10.104.40.61   <pending>     8000:30000/TCP   23m
Run minikube service nextcloud-service and it will open it for you. localhost:30000 does not respond because the NodePort is exposed on the minikube VM's IP, not on localhost, and the EXTERNAL-IP stays <pending> because minikube has no cloud load balancer to provision one.
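If you would rather curl it than have a browser window opened, minikube can print the reachable URL directly (a sketch, assuming the service above):
$ minikube service nextcloud-service --url
http://<minikube-ip>:30000
Alternatively, running minikube tunnel in a separate terminal assigns the LoadBalancer an address, after which the service port (8000) becomes reachable as well.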

Though external ip is resolved, the website returns connection timedout in kubernetes GKE

I have created a k8s deployment and service YAML for a static website. The external IP address is resolved for the Kubernetes service, but when I try to access the website through curl or a browser, the connection times out.
Dockerfile:
FROM nginx:alpine
COPY . /usr/share/nginx/html
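As a sanity check (assuming Docker is available locally and the image built from this Dockerfile is tagged gkganeshr/ohno-website:v0.1 as in the deployment below), you can first confirm the image itself serves on port 80, independent of Kubernetes:
$ docker run --rm -p 8080:80 gkganeshr/ohno-website:v0.1
$ curl http://localhost:8080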
K8s deployment yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ohno-website
  labels:
    app: ohno-website
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ohno-website
  template:
    metadata:
      labels:
        app: ohno-website
    spec:
      containers:
        - name: ohno-website
          image: gkganeshr/ohno-website:v0.1
          imagePullPolicy: Always
          ports:
            - containerPort: 80
k8s service yml:
apiVersion: v1
kind: Service
metadata:
  name: ohno-website
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  selector:
    app: ohno-website
ohno_fooserver@cloudshell:~ (fourth-webbing-279817)$ kubectl get svc
NAME           TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
kubernetes     ClusterIP      10.16.0.1      <none>          443/TCP        8h
ohno-website   LoadBalancer   10.16.12.162   34.70.213.174   80:31977/TCP   7h4m
The targetPort defined in the service definition YAML is incorrect. It should match the containerPort from the pod template in the deployment YAML:
targetPort: 9376
should be changed to
targetPort: 80
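After applying the corrected service, one way to confirm it now reaches the pod (assuming the names and external IP above):
$ kubectl get endpoints ohno-website
$ curl http://34.70.213.174
The endpoints list should show the pod IP on port 80; an empty list would mean the selector or targetPort is still wrong.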

why my kubernetes service doesn't target pods

I have a Kubernetes service, and I want it to target my pods.
This is how I define my service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: myapp
  ports:
    - port: 77
      targetPort: 80
      nodePort: 32766
  type: NodePort
and this is how I define my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: my-pod
      labels:
        app: myapp
    spec:
      containers:
        - name: httd
          image: httpd
          imagePullPolicy: Always
          ports:
            - containerPort: 80
Basically, what I did was link port 80 in the pod to port 77 in the service and to port 32766 on the node.
I already know that my container is running on port 80, because if I do this:
docker run -p 8989:80 httpd
and ask for localhost:8989, I can see the page.
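For reference, a rough in-cluster equivalent of that Docker check (a sketch, assuming the deployment above has been applied) is to port-forward straight to the deployment, bypassing the service:
$ kubectl port-forward deployment/my-deployment 8989:80
$ curl http://localhost:8989
If this works but the service does not, the problem lies in the service definition rather than the pod.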
If I do kubectl get services I get:
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        46h
my-service   NodePort    10.100.157.161   <none>        77:32766/TCP   20m
I tried calling:
10.100.157.161:32766
10.100.157.161:77
But both give a connection error.
What did I miss?
Use the YAMLs below; they work. Note also that the ClusterIP is only reachable from inside the cluster, and only on the service port (77), while the NodePort (32766) is meant to be used with a node's IP, not the ClusterIP.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: my-deployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - image: httpd
          name: httpd
          imagePullPolicy: Always
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: myapp
  ports:
    - port: 77
      targetPort: 80
      nodePort: 32766
  type: NodePort
master $ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        15m
my-service   NodePort    10.110.219.232   <none>        77:32766/TCP   12s
master $ curl 10.110.219.232:77
<html><body><h1>It works!</h1></body></html>
master $ curl $(hostname -i):32766
<html><body><h1>It works!</h1></body></html>
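To reach the service from outside the cluster, use a node's address rather than the ClusterIP (a sketch, assuming your machine can route to the node):
$ kubectl get nodes -o wide    # note a node's INTERNAL-IP or EXTERNAL-IP
$ curl http://<node-ip>:32766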

Kubernetes LoadBalancer YAML not working even though the CLI `expose` command works

This is my Service and Deployment yaml that I am running on minikube:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-hello-world
  labels:
    app: node-hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-hello-world
  template:
    metadata:
      labels:
        app: node-hello-world
    spec:
      containers:
        - name: node-hello-world
          image: node-hello-world:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: node-hello-world-load-balancer
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 8080
      nodePort: 30002
  selector:
    name: node-hello-world
Results:
$ minikube service node-hello-world-load-balancer --url
http://192.168.99.101:30002
$ curl http://192.168.99.101:30002
curl: (7) Failed to connect to 192.168.99.101 port 30002: Connection refused
However, running the following CLI worked:
$ kubectl expose deployment node-hello-world --type=LoadBalancer
$ minikube service node-hello-world --url
http://192.168.99.101:30130
$ curl http://192.168.99.101:30130
Hello World!
What am I doing wrong with my LoadBalancer yaml config?
You have configured the service selector incorrectly:
  selector:
    name: node-hello-world
It should be:
  selector:
    app: node-hello-world
https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/
You can debug this by describing the service and checking that its endpoints list is empty, which means no pods are matched by the service's selector:
kubectl describe svc node-hello-world-load-balancer | grep -i endpoints
Endpoints: <none>
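After fixing the selector and re-applying, the same check should show the pod (assuming one replica listening on 8080):
$ kubectl describe svc node-hello-world-load-balancer | grep -i endpoints
Endpoints: <pod-ip>:8080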

Service not available in Kubernetes

I have a minikube cluster running locally (v0.17.1), with two deployments: one is a Redis instance and one is a custom app that is trying to connect to the Redis instance. My configuration is more or less copy/pasted from the official docs and the Kubernetes guestbook example.
Service definition and deployment:
apiVersion: v1
kind: Service
metadata:
  name: poller-redis
  labels:
    app: poller-redis
    tier: backend
    role: database
    target: poller
spec:
  selector:
    app: poller
    tier: backend
    role: service
  ports:
    - port: 6379
      targetPort: 6379
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: poller-redis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: poller-redis
        tier: backend
        role: database
        target: poller
    spec:
      containers:
        - name: poller-redis
          image: gcr.io/jmen-1266/jmen-redis:a67b5f4bfd8ea8441ed66a8fcb6596f276017a1c
          ports:
            - containerPort: 6379
          env:
            - name: GET_HOSTS_FROM
              value: dns
      imagePullSecrets:
        - name: gcr-json-key
App deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: poller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: poller
        tier: backend
        role: service
    spec:
      containers:
        - name: poller
          image: gcr.io/jmen-1266/poller:a96a452292e894e46339309cc024cac67647cc25
          imagePullPolicy: Always
      imagePullSecrets:
        - name: gcr-json-key
Relevant (I hope) Kubernetes info:
$ kubectl get services
NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes     10.0.0.1     <none>        443/TCP    24d
poller-redis   10.0.0.137   <none>        6379/TCP   20d
$ kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
poller         1         1         1            1           12d
poller-redis   1         1         1            1           4d
$ kubectl get endpoints
NAME           ENDPOINTS         AGE
kubernetes     10.0.2.15:8443    24d
poller-redis   172.17.0.7:6379   20d
Inside the poller pod (custom app), I get environment variables created for Redis:
# env | grep REDIS
POLLER_REDIS_SERVICE_HOST=10.0.0.137
POLLER_REDIS_SERVICE_PORT=6379
POLLER_REDIS_PORT=tcp://10.0.0.137:6379
POLLER_REDIS_PORT_6379_TCP_ADDR=10.0.0.137
POLLER_REDIS_PORT_6379_TCP_PORT=6379
POLLER_REDIS_PORT_6379_TCP_PROTO=tcp
POLLER_REDIS_PORT_6379_TCP=tcp://10.0.0.137:6379
However, if I try to connect to that port, I cannot. Doing something like:
nc -vz poller-redis 6379
fails.
What I have noticed is that I cannot access the Redis service via its ClusterIP but I can via the IP of the pod running Redis.
Any ideas, please?
Figured this out in the end; it turns out I had misunderstood how service selectors work in Kubernetes.
My original service definition was:
apiVersion: v1
kind: Service
metadata:
  name: poller-redis
  labels:
    app: poller-redis
    tier: backend
    role: database
    target: poller
spec:
  selector:
    app: poller
    tier: backend
    role: service
  ports:
    - port: 6379
      targetPort: 6379
The problem is that spec.selector must match the labels on the pods the service targets, not the service's own metadata.labels. In my case the pod labels happen to be identical to the labels I gave the service, which is why making the two blocks the same fixed it. Now my service definition looks like:
apiVersion: v1
kind: Service
metadata:
  name: poller-redis
  labels:
    app: poller-redis
    tier: backend
    role: database
    target: poller
spec:
  selector:
    app: poller-redis
    tier: backend
    role: database
    target: poller
  ports:
    - port: 6379
      targetPort: 6379
I also now use straight up DNS lookup (i.e. ping poller-redis) rather than trying to connect to localhost:6379 from my target pods.
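One quick way to confirm the selector matches (assuming the labels above) is to list pods by the same labels and check that the service has endpoints:
$ kubectl get pods -l app=poller-redis,tier=backend,role=database
$ kubectl get endpoints poller-redis
An empty ENDPOINTS column means the selector still matches no pods.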
It could be related to kube-dns possibly not running.
From inside the poller pod can you verify that poller-redis resolves?
Does the following work from inside the container?
nc -vz 10.0.0.137 6379
One kube-dns service running in kube-system is enough. Did you run nc -vz poller-redis 6379 from a pod in the same namespace as the Redis service?
poller-redis is the short DNS name of the Redis service and only resolves within the same namespace; from a different namespace you would need the qualified name (poller-redis.<namespace>).
kube-dns is also not available on the nodes themselves, so to test with nc or a Redis client from a node, use the ClusterIP of the Redis service instead of the DNS name.
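To check resolution from inside the cluster, a throwaway pod works well (a sketch; busybox is an assumption, any image with nslookup will do, and the service is assumed to live in the default namespace):
$ kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup poller-redis
From a different namespace, the fully qualified name should still resolve:
$ kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup poller-redis.default.svc.cluster.local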