Kubernetes's LoadBalancer yaml not working even though CLI `expose` function works - kubernetes

This is my Service and Deployment yaml that I am running on minikube:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-hello-world
  labels:
    app: node-hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-hello-world
  template:
    metadata:
      labels:
        app: node-hello-world
    spec:
      containers:
      - name: node-hello-world
        image: node-hello-world:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: node-hello-world-load-balancer
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 8080
    nodePort: 30002
  selector:
    name: node-hello-world
Results:
$ minikube service node-hello-world-load-balancer --url
http://192.168.99.101:30002
$ curl http://192.168.99.101:30002
curl: (7) Failed to connect to 192.168.99.101 port 30002: Connection refused
However, running the following CLI commands worked:
$ kubectl expose deployment node-hello-world --type=LoadBalancer
$ minikube service node-hello-world --url
http://192.168.99.101:30130
$ curl http://192.168.99.101:30130
Hello World!
What am I doing wrong with my LoadBalancer yaml config?

You have configured the service selector incorrectly:
selector:
  name: node-hello-world
It should be:
selector:
  app: node-hello-world
https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/
You can debug this by describing the service and seeing that its endpoint list is empty, meaning no pods are mapped to the service:
kubectl describe svc node-hello-world-load-balancer | grep -i endpoints
Endpoints: <none>
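For reference, here is the full corrected Service manifest, assembled from the snippets above (ports unchanged):

apiVersion: v1
kind: Service
metadata:
  name: node-hello-world-load-balancer
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 8080
    nodePort: 30002
  selector:
    app: node-hello-world

After applying it, the describe command above should list the pod's IP under Endpoints, and the curl against the NodePort should return the Hello World response.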

Related

getting 502 Bad Gateway on eks aws-alb-ingress

I created an EKS cluster in AWS via Terraform, using terraform-aws-modules/eks/aws as the module. This cluster has one pod (a Golang app) exposed with a NodePort service and an ingress. The problem I have is that I'm getting a 502 Bad Gateway when I hit the endpoint.
My config:
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: golang-deployment
  labels:
    app: golang-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: golang-app
  template:
    metadata:
      labels:
        name: golang-app
    spec:
      containers:
      - name: golang-app
        image: 019496914213.dkr.ecr.eu-north-1.amazonaws.com/goland:1.0
        ports:
        - containerPort: 9000
Service:
kind: Service
apiVersion: v1
metadata:
  name: golang-service
spec:
  type: NodePort
  selector:
    app: golang-app
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 9000
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    kubernetes.io/ingress.class: alb
  labels:
    app: app
spec:
  rules:
  - http:
      paths:
      - path: /api/v2
        pathType: ImplementationSpecific
        backend:
          service:
            name: golang-service
            port:
              number: 9000
kubectl get service
golang-service NodePort 172.20.44.34 <none> 9000:32184/TCP 106m
The security groups for the cluster and nodes were created by terraform-aws-modules/eks/aws module.
I checked several things:
kubectl port-forward golang-deployment-5894d8d6fc-ktmmb 9000:9000
WORKS! I can see the Golang app using localhost:9000 on my computer.
kubectl exec curl -i --tty nslookup golang-app
Server: 172.20.0.10
Address 1: 172.20.0.10 kube-dns.kube-system.svc.cluster.local
Name: golang-app
Address 1: 172.20.130.130 golang-app.default.svc.cluster.local
WORKS!
kubectl exec curl -i --tty curl golang-app:9000
curl: (7) Failed to connect to golang-app port 9000: Connection refused
DOES NOT WORK
Any ideas?
You should be calling the service, not the deployment: golang-service is the service name, not the deployment name you used.
kubectl exec -i --tty curl -- curl golang-service:9000
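One additional check, not part of the original answer: the endpoint test from the first answer on this page applies here too. The Deployment's pods are labeled name: golang-app while the Service selects app: golang-app, so it is worth confirming the Service actually has endpoints:

kubectl describe svc golang-service | grep -i endpoints

If this prints Endpoints: <none>, the Service selector has to be aligned with the pod labels (or vice versa) before any in-cluster curl can succeed.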

Why can't I curl endpoint on GCP?

I am working my way through a Kubernetes tutorial using GKE, but it was written with Azure in mind, though it has been working OK so far.
The first part where it has not worked has been the exercises regarding CoreDNS, which I understand does not exist on GKE; it's kube-dns only?
Is this why I can't get a pod endpoint with:
export PODIP=$(kubectl get endpoints hello-world-clusterip -o jsonpath='{ .subsets[].addresses[].ip}')
and then curl:
curl http://$PODIP:8080
My deployment is definitely on the right port:
ports:
- containerPort: 8080
And, in fact, the deployment for the tutorial is from a Google sample.
Is this to do with coreDNS or authorisation/needing a service account? What can I do to make the curl request work?
Deployment yaml is:
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-world-customdns
spec:
replicas: 3
selector:
matchLabels:
app: hello-world-customdns
template:
metadata:
labels:
app: hello-world-customdns
spec:
containers:
- name: hello-world
image: gcr.io/google-samples/hello-app:1.0
ports:
- containerPort: 8080
dnsPolicy: "None"
dnsConfig:
nameservers:
- 9.9.9.9
---
apiVersion: v1
kind: Service
metadata:
name: hello-world-customdns
spec:
selector:
app: hello-world-customdns
ports:
- port: 80
protocol: TCP
targetPort: 8080
To give deeper insight into what Gari comments: when exposing a service outside your cluster, the service must be configured as NodePort or LoadBalancer, since ClusterIP only exposes the Service on a cluster-internal IP, making the service reachable only from within the cluster. Cloud Shell is a shell environment for managing resources hosted on Google Cloud, not part of the cluster, which is why you're not getting any response. To change this, you can modify your yaml file as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-customdns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world-customdns
  template:
    metadata:
      labels:
        app: hello-world-customdns
    spec:
      containers:
      - name: hello-world
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
        - 9.9.9.9
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-customdns
spec:
  selector:
    app: hello-world-customdns
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
After redeploying your service, you can run kubectl get all -o wide in Cloud Shell to validate that the NodePort-type service has been created with a node port and target port.
To test your deployment, just send a curl request to the external IP of one of your nodes, including the node port that was assigned. The command should look something like:
curl <node_IP_address>:<Node_port>
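As a sketch of how to collect those two values (the assigned node port below is illustrative):

kubectl get nodes -o wide                  # the EXTERNAL-IP column gives the node IP
kubectl get svc hello-world-customdns      # e.g. 80:31000/TCP; 31000 is the node port
curl http://<EXTERNAL-IP>:31000

Note that on GKE you may also need a firewall rule allowing inbound traffic to the node port before a curl from outside the cluster will succeed.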

how to access service on rpi k8s cluster

I built a k8s cluster with the help of this guide: rpi+k8s. I got a basic nginx service up and running, and I can curl from the master node to the worker node to get the nginx welcome page content using:
k exec nginx-XXX-XXX -it -- curl localhost:80
I tried the suggestions in the following SO posts:
link 1
link 2
However, I still can't access a simple nginx service on the worker node from my local computer (Linux). I used NODE_IP:NODE_PORT. I also installed kubefwd and ran sudo kubefwd svc -n nginx-ns, but I don't see the expected output where it would show the port forwards. Any help would be appreciated. Thanks.
Output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-svc NodePort 10.101.19.230 <none> 80:32749/TCP 168m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 3/3 3 3 168m
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-54485b444f 3 3 3 168m
And here is the yaml file:
kind: Namespace
apiVersion: v1
metadata:
  name: nginx-ns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx-ns
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19-alpine
        ports:
        - name: nginxport
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: nginx-ns
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    name: nginxport
    port: 80
    targetPort: 80
    nodePort: 32749
  type: NodePort
  selector:
    app: backend
You need to update your service nginx-svc, where you have used two selectors.
Remove the part below:
selector:
  app: backend
Updated service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: nginx-ns
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    name: nginxport
    port: 80
    targetPort: 80
    nodePort: 32749
  type: NodePort
Then try this for port-forwarding:
kubectl port-forward -n nginx-ns svc/nginx-svc 8080:80
The template is:
kubectl port-forward -n <namespace> svc/<svc_name> <local_port>:<svc_port>
Then try in the browser with 127.0.0.1:8080 or localhost:8080
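Alternatively, once the duplicate selector is removed, the original NODE_IP:NODE_PORT approach should also work from your local machine, assuming the node's IP is reachable from it:

curl http://<NODE_IP>:32749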

Connection refused when connecting to svc from another pod

I have a simple Python application:
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["gunicorn", "main:api", "-c", "/app/gunicorn_conf.py" ,"--reload"]
where my gunicorn conf file is:
import multiprocessing
bind = '0.0.0.0:8000'
workers = multiprocessing.cpu_count() * 2 + 1
timeout = 30
worker_connections = 1000
and my deployment YAML file:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: test-deployment
spec:
  selector:
    matchLabels:
      app: test-pod
  replicas: 1
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
      - name: test-container
        image: localhost:5000/test123:v4
        ports:
        - name: http
          containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: test-svc
spec:
  ports:
  - protocol: TCP
    port: 5000
    targetPort: http
  selector:
    app: test-pod
When I run the curl command from test-pod, it works fine:
curl -X POST localhost:8000/test
but when I curl from another pod (a busybox pod), I get a connection refused error:
curl -X POST test-svc:5000/test
My kubectl describe svc test-svc output:
Name: test-svc
Namespace: default
Labels: <none>
Annotations:
Selector: app=test-pod
Type: ClusterIP
IP: 10.99.11.154
Port: <unset> 5000/TCP
TargetPort: http/TCP
Endpoints: 10.1.0.11:8000
Session Affinity: None
Events: <none>
nslookup works fine
Name: test-svc.default.svc.cluster.local
Address: 10.99.11.154
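A sanity check worth noting here (not part of the original thread): stock busybox images ship wget rather than curl, so testing from a throwaway pod that definitely has curl rules out tooling problems. A sketch, with the image choice being an assumption:

kubectl run tmp --rm -it --restart=Never --image=curlimages/curl \
  --command -- curl -sv -X POST http://test-svc:5000/test

Since describe already shows a populated endpoint (10.1.0.11:8000) and gunicorn binds 0.0.0.0:8000, the wiring looks correct; if the request still fails from such a pod, a NetworkPolicy blocking cross-pod traffic would be the next thing to check.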

Converting docker run to YAML for Kubernetes

I'm new to Kubernetes. I'm trying to convert the following docker container run command to YAML for Kubernetes.
docker container run -d -p 80:80 --name MyFirstContainerWeb docker/getting-started:pwd
This is what I have come up with so far. Can someone please help me with the ingress part? I'm using Docker Desktop (which has a Kubernetes cluster). My final goal is to see the website in the browser.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: getting-started-deployment
spec:
  selector:
    matchLabels:
      app: getting-started
  replicas: 2
  template:
    metadata:
      labels:
        app: getting-started
    spec:
      containers:
      - name: getting-started-container
        image: docker/getting-started:pwd
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: getting-started-service
  namespace: default
  labels:
    app: myfirstcontainer
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: getting-started-service
You can use port-forward to forward to the service port by running:
$ kubectl port-forward svc/getting-started-service 80
To learn more about port-forwarding, see the Kubernetes documentation.
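One caveat the answer above does not cover: as posted, the Service selector (app: getting-started-service) does not match the Deployment's pod label (app: getting-started), so the Service gets no endpoints, the same mistake as in the first question on this page. The selector should be:

selector:
  app: getting-started

With that fixed, since Docker Desktop's built-in cluster supports LoadBalancer services on localhost, changing the Service to type: LoadBalancer should make the site reachable at http://localhost in the browser, which matches the stated goal.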