I have a Kubernetes Deployment and Service working, but my Ingress only ever shows port 80, no matter what I do

I have a GKE cluster that I'm trying to get going with HTTPS load balancing.
So far I have:
- a deployment
- a service (x 2 -- see below)
- an ingress
- an SSL cert (Google-managed)
All of these seem to be working, but I'm getting a 502 error when connecting to the hostname via https:
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
While trying to trace this down I found a debugging post, but combing through it I noticed that the author's ingress shows ports 80,443, while I can never get mine to show anything but port 80.
This is even after splitting my service into two different services, one on port 443 and one on port 80, and only telling the ingress about the 443 service: it still shows up with just port 80, and I'm still getting the 502 error.
The YAML for the deployment (asked by the commenter below):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: gcr.io/myapp-dev/myapp-container:latest
        ports:
        - containerPort: 8080
The YAML for the '443 service':
apiVersion: v1
kind: Service
metadata:
  name: my-service443
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - name: https
    protocol: TCP
    port: 443
    targetPort: 8080
And the Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.global-static-ip-name: "kubething"
    networking.gke.io/managed-certificates: clearspring-cert
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: my-service443
      port:
        name: https
I don't understand (a) why the ingress is showing only port 80 and (b) why I'm still getting 502 errors.
Thanks much for any help whatsoever!

It looks like the deployment was missing readiness and liveness probes; when I changed it like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleardev-deployment
  labels:
    app: clearspring
spec:
  replicas: 2
  selector:
    matchLabels:
      app: clearspring
  template:
    metadata:
      labels:
        app: clearspring
    spec:
      containers:
      - name: clearspring
        image: gcr.io/clearspring-dev/clearspring-container:latest
        ports:
        - containerPort: 8080
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 20
Then the status changed from UNHEALTHY to Unknown ... but I was still getting the 502 error.
The liveness probe did its job: the process was not listening on port 8080 on all interfaces, just on 127.0.0.1. I fixed that ... still not working, then tried EXPOSE 8080 in the Dockerfile, and now I guess I need to look at firewall rules because the liveness/readiness probes can't connect.
Note that I had to delete and recreate the cluster to get this far ... I think. I first tried just updating the deployment and didn't see any change from UNHEALTHY.
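For anyone debugging the same thing, two generic checks are worth running at this point (a sketch using the names from this question; the VPC name is a placeholder). The GCE ingress controller reports per-backend health in an annotation, and the Google Cloud load balancer's health checks come from documented source ranges that a custom firewall setup must allow:
# Backend health as seen by the GCE ingress controller
kubectl describe ingress my-ingress
# (look for the ingress.kubernetes.io/backends annotation: HEALTHY / UNHEALTHY / Unknown)

# Allow the Google Cloud health-check ranges to reach the serving port
gcloud compute firewall-rules create allow-gke-health-checks \
    --network=my-vpc \
    --allow=tcp:8080 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16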

Related

Access pod from another pod with kubernetes url

I have two pods created with a deployment and a service. My problem is as follows: the pod "my-gateway" calls the URL "http://127.0.0.1:3000/adm-contact", which is supposed to reach another pod called "my-adm-contact". How can I make this work? I tried the following command: kubectl port-forward my-gateway-5b85498f7d-5rwnn 3000:3000 8879:8879 but it gives this error:
E0526 21:56:34.024296 12428 portforward.go:400] an error occurred forwarding 3000 -> 3000: error forwarding port 3000 to pod 2d5811c20c3762c6c249a991babb71a107c5dd6b080c3c6d61b4a275b5747815, uid : exit status 1: 2022/05/27 00:56:35 socat[2494] E connect(16, AF=2 127.0.0.1:3000, 16): Connection refused
Note that the images built from the Dockerfile have EXPOSE 3000 8879.
Here are my YAMLs:
Deployment my-adm-contact:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  selector:
    matchLabels:
      run: my-adm-contact
  template:
    metadata:
      labels:
        run: my-adm-contact
    spec:
      containers:
      - name: my-adm-contact
        image: my-contact-adm
        imagePullPolicy: Never
        ports:
        - containerPort: 8879
          hostPort: 8879
          name: admcontact8879
        readinessProbe:
          httpGet:
            path: /adm-contact
            port: 8879
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 6
Service my-adm-contact:
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact
  labels:
    run: my-adm-contact
spec:
  selector:
    app: my-adm-contact
  ports:
  - name: 8879-my-adm-contact
    port: 8879
    protocol: TCP
    targetPort: 8879
  type: LoadBalancer
status:
  loadBalancer: {}
Deployment my-gateway:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  selector:
    matchLabels:
      run: my-gateway
  template:
    metadata:
      labels:
        run: my-gateway
    spec:
      containers:
      - name: my-gateway
        image: api-gateway
        imagePullPolicy: Never
        ports:
        - containerPort: 3000
          hostPort: 3000
          name: home
        #- containerPort: 8879
        #  hostPort: 8879
        #  name: adm
        readinessProbe:
          httpGet:
            path: /adm-contact
            port: 8879
            path: /
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 6
Service my-gateway:
apiVersion: v1
kind: Service
metadata:
  name: my-gateway
  labels:
    run: my-gateway
spec:
  selector:
    app: my-gateway
  ports:
  - name: 3000-my-gateway
    port: 3000
    protocol: TCP
    targetPort: 3000
  - name: 8879-my-gateway
    port: 8879
    protocol: TCP
    targetPort: 8879
  type: LoadBalancer
status:
  loadBalancer: {}
What k8s-cluster environment are you running this in? I ask because a Service of type LoadBalancer is a special kind: when it is created, your cloud provider's controller will spot it and provision a load balancer for it. See https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
If you're not deploying this in a suitable cloud environment, those services won't do anything.
I had a quick look at your SO profile and -- sorry if this is presumptuous, I don't mean to be -- it looks like you're relatively new to k8s. You shouldn't need to do any port-forwarding/kubectl proxying, and this should be a lot simpler than you might think.
When you create a service, k8s will 'create' a DNS entry for you which points to the pod(s) specified by your selector.
I think you're trying to reach a setup where code running in the my-gateway pod can connect to http://adm-contact on port 3000 and reach a listening service on the adm-contact pod. Is that correct?
If so, the outline solution is to expose tcp/3000 in the adm-contact pod, and create a service called adm-contact that has a selector for the adm-contact pod.
This is a sample manifest I've just created which runs nginx and then creates a service for it, allowing any pod on the cluster to connect to it, e.g. curl http://nginx-service.default.svc. In this example I'm exposing port 80 because I didn't want to have to modify the nginx config, but the principle is the same.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
The k8s docs on Services are pretty helpful if you want more: https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
A service can be reached by its own name from pods in its namespace:
so a service foo in namespace bar can be reached at http://foo from a pod in namespace bar;
from other namespaces that service is reachable at http://foo.bar.svc.cluster.local. Swap out the service name and namespace for your use case.
k8s dns is explained here in the docs:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
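As a quick sanity check of that DNS behaviour, a throwaway pod can resolve the service by name (a sketch; foo/bar are the placeholder names from above, and busybox is just a convenient test image):
# Run a temporary pod in namespace bar and look up the service foo by name
kubectl run tmp -n bar --image=busybox:1.36 --rm -it --restart=Never -- nslookup foo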
I have taken the YAML you provided and assembled it here.
From another comment I see the URL you're trying to connect to is: http://gateway-service.default.svc.cluster.local:3000/my-adm-contact-service
The ability to resolve service names to pods only functions inside the cluster: coredns (a k8s pod) is the part which recognises when a service has been created and what IP(s) it's available at.
So another pod in the cluster e.g. one created by kubectl run bb --image=busybox -it -- sh would be able to resolve the command ping gateway-service, but pinging gateway-service from your desktop will fail because they're not both seeing the same DNS.
The api-gateway container will be able to connect to my-adm-contact-service on ports 3000 or 8879, and the my-adm-contact container will equally be able to connect to gateway-service on port 3000 - but only when those containers are running inside the cluster.
I think you're trying to access this from outside the cluster, so now that the port/service types are correct you could retry kubectl port-forward svc/gateway-service 3000:3000. This will let you connect to 127.0.0.1:3000 and the traffic will be routed to port 3000 on the api-gateway container.
If you need to proxy to the other my-adm-contact-service then you'll have to issue similar kubectl commands in other shells, one per service:port combination. For completeness, if you wanted to route traffic from your local machine to all three container/port sets, you'd run:
# format kubectl port-forward svc/name src:dest (both TCP)
kubectl port-forward svc/gateway-service 3000:3000
kubectl port-forward svc/my-adm-contact-service 8879:8879
kubectl port-forward svc/my-adm-contact-service 3001:3000 #NOTE the changed local port, because localhost:3000 is already used
You will need a new shell for each kubectl, or run it as a background job.
apiVersion: v1
kind: Pod
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  containers:
  - image: my-contact-adm
    imagePullPolicy: Never
    name: my-adm-contact
    ports:
    - containerPort: 8879
      protocol: TCP
    - containerPort: 3000
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact-service
spec:
  ports:
  - port: 8879
    protocol: TCP
    targetPort: 8879
    name: adm8879
  - port: 3000
    protocol: TCP
    targetPort: 3000
    name: adm3000
  selector:
    app: my-adm-contact
  type: ClusterIP
---
apiVersion: v1
kind: Pod
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  containers:
  - image: api-gateway
    imagePullPolicy: Never
    name: my-gateway
    ports:
    - containerPort: 3000
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: gateway-service
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: my-gateway
  type: ClusterIP
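With those manifests applied, a quick in-cluster check looks something like this (a sketch; it assumes the api-gateway image has wget available, which may not be the case):
# Call the adm-contact service from inside the gateway pod
kubectl exec -it my-gateway -- wget -qO- http://my-adm-contact-service:3000/adm-contact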

How do I forward port 80 to a service port 3000 using Traefik on K3s?

This is about Traefik as it works with K3s, not about Kubernetes generally, so please don't give me a generic K8s answer.
I have a simple K3s deployment and service that look like this...
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-express
  name: app-tier
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
  selector:
    tier: app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-express-deployment
  labels:
    app: hello-express
    tier: app
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: app
  template:
    metadata:
      labels:
        app: hello-express
        tier: app
    spec:
      containers:
      - name: server
        image: partyk1d24/hello-express:latest
        ports:
        - containerPort: 3000
I can then access the application using the external IP and port 3000. Now I would like to change that port from 3000 to 80. Apparently this is handled locally on K3s by Traefik. I tried the following while looking here...
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-express-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-tier
            port:
              number: 80
But when I try to go to the site I get...
curl 192.168.X.XXX
Service Unavailable%
The blog post is a little old, so I'm sure I'm doing something wrong. Can someone help me identify it?
You should change the service's port to 80:
ports:
- protocol: TCP
  port: 80
  targetPort: 3000
Keep the targetPort as 3000.
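For completeness, the Service from the question with only that change applied would look something like this (a sketch; everything else stays as posted):
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-express
  name: app-tier
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80         # port the service (and the Ingress backend) listens on
    targetPort: 3000 # port the hello-express container listens on
  selector:
    tier: app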

I can't curl nginx which I deployed on k8s cluster

my deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
my service yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
I then curl 10.104.239.140 but get an error: curl: (7) Failed connect to 10.104.239.140:80; Connection timed out
Can anyone tell me what's wrong?
Welcome to SO. The service you've deployed is of type ClusterIP, which means it can only be accessed from within the cluster. In your case, it seems you're trying to access it from outside the cluster, and thus the connection timed out.
What you can do is deploy a service of type NodePort or LoadBalancer to access it from outside the cluster. You can read more about the different service types here.
Your service would end up looking something like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort ## or LoadBalancer (supported by cloud providers like AWS)
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    # Optional field
    # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
    nodePort: 30001
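Once that is applied, the app should be reachable from outside the cluster via any node's IP and the nodePort, for example:
# List node IPs (INTERNAL-IP / EXTERNAL-IP columns)
kubectl get nodes -o wide
# Then, from a machine that can reach that IP
curl http://<node-ip>:30001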

GKE NEG Ingress always returns 502 Bad Gateway

I have a StatefulSet, a Service with NEG, and an Ingress set up on a Google Cloud Kubernetes Engine cluster.
Every workload and network object is ready and healthy. Ingress is created and NEG status is updated for all the services. VPC-native (Alias-IP) and HTTP Load Balancer options are enabled for the cluster.
But when I try to access my application using a path specified in my Ingress I always get 502 (Bad Gateway) error.
Here is my configuration (names are redacted including image name):
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: myapp
  name: myapp
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: tcp
  selector:
    app: myapp
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  serviceName: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        livenessProbe:
          httpGet:
            path: /
            port: tcp
            scheme: HTTP
          initialDelaySeconds: 60
        image: myapp:8bebbaf
        ports:
        - containerPort: 1880
          name: tcp
          protocol: TCP
        readinessProbe:
          failureThreshold: 1
          httpGet:
            path: /
            port: tcp
            scheme: HTTP
        volumeMounts:
        - mountPath: /data
          name: data
      securityContext:
        fsGroup: 1000
      terminationGracePeriodSeconds: 10
  volumeClaimTemplates:
  - metadata:
      labels:
        app: myapp
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - http:
      paths:
      - path: /workflow
        backend:
          serviceName: myapp
          servicePort: 80
What's wrong with it and how can I fix it?
After much digging and testing I finally found out what was wrong. It also seems that GKE NEG Ingress is not very stable (NEG is indeed in beta) and does not always conform to the Kubernetes spec.
There was an issue with GKE Ingress related to named ports in the targetPort field. The fix is implemented and available from the 1.16.0-gke.20 cluster version (Release), which as of today (February 2020) is available in the Rapid channel, but I have not tested the fix as I had other issues with an ingress on a version from this channel.
So basically there are two options if you experience the same issue:
1. Specify the exact port number, not the port name, in the targetPort field of your service. Here is the fixed service config from my example:
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: myapp
  name: myapp
spec:
  ports:
  - port: 80
    protocol: TCP
    # !!!
    # targetPort: tcp
    targetPort: 1880
  selector:
    app: myapp
2. Upgrade the GKE cluster to version 1.16.0-gke.20+ (I haven't tested this myself).
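If you go the upgrade route, the control-plane upgrade looks roughly like this (a sketch; the cluster name and zone are placeholders, and node pools are upgraded separately):
gcloud container clusters upgrade my-cluster \
    --zone us-central1-a \
    --master \
    --cluster-version 1.16.0-gke.20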

kubernetes deployment expose pods on a port

My goal is to expose all my ports under one service.
My pod contains a containerised app that runs under port 80.
This is my attempt to create the deployment:
apiVersion: apps/v1
kind: Deployment
metadata: name: my-deployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: my-pod
      labels:
        app: myapp
    spec:
      containers:
      - name: httd
        image: httpd
        imagePullPolicy: Always
        ports:
        - containerPort: 80
However, I am getting this error:
error: error parsing deployment.yaml: error converting YAML to JSON: yaml: line 3: mapping values are not allowed in this context
Notes:
If I remove the ports section, the deployment is created successfully, but then the service (which I have in another file and can share if needed) wouldn't be able to link a port on the node to a port in the pod, because the pod doesn't expose any port (again, it's just the container running on a port).
I went through this page, and it does say to use containerPort, so I don't know what I missed.
Update
The error was in my deployment file (the metadata: name: my-deployment line -- metadata and name need to be on separate lines); after fixing it, I could create both the deployment and the service, but the service is still not exposed on the node. Here is my service definition:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: front-end
  ports:
  - port: 77
    targetPort: 80
    nodePort: 32766
  type: NodePort
As you can see, I am mapping port 80 in the pod to port 32766 on the node, but when calling localhost:32766 it returns 404.
What did I miss?
Update
The browser shows a 404 Not Found page.
"when calling localhost:32766 it returns 404"
This means that the app is actually responding to the request, but you sent the request to a URL that the app has not implemented. 404 Not Found is the HTTP status code web servers return when they don't have a handler for the requested path.
I guess the problem is in the selector section of your service.yaml file.
Try this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: myapp
  ports:
  - port: 77
    targetPort: 80
    nodePort: 32766
  type: NodePort
In addition to what @shubam_asati posted, your service YAML has port: 77 and targetPort: 80, but your deployment's container port is 80. Change the port value to the same value as targetPort (i.e. 80) and you should be able to connect to the app at localhost:<nodePort>.
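Putting both suggestions together, a sketch of a Service that matches the deployment's pod label and maps straight through to the container port (the nodePort is kept from the question):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: myapp        # must match the pod label in the deployment
  ports:
  - port: 80          # service port
    targetPort: 80    # containerPort of the httpd container
    nodePort: 32766   # port exposed on each node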