Bare-metal k8s ingress with nginx-ingress - kubernetes

I can't apply an ingress configuration.
I need to access a jupyter-lab service by its DNS name:
http://jupyter-lab.local
It's deployed to a 3-node bare-metal k8s cluster:
node1.local (master)
node2.local (worker)
node3.local (worker)
Flannel is installed as the network plugin.
I've installed nginx ingress for bare metal like this:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml
When deployed, the jupyter-lab pod lands on node2 and the NodePort service responds correctly at http://node2.local:30004 (see below).
I'm expecting the ingress-nginx controller to expose the ClusterIP service by its DNS name ...... that's what I need. Is that wrong?
This is the ClusterIP service, defined with symmetrical ports (8888 to 8888) to keep things as simple as possible (is that wrong?):
---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-lab-cip
  namespace: default
spec:
  type: ClusterIP
  ports:
    - port: 8888
      targetPort: 8888
  selector:
    app: jupyter-lab
The DNS name jupyter-lab.local resolves to the IP address range of the cluster, but the request times out with no response: Failed to connect to jupyter-lab.local port 80: No route to host.
firewall-cmd --list-all shows that port 80 is open on each node.
This is the ingress definition for HTTP into the cluster (any node) on port 80 (is that wrong?):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jupyter-lab-ingress
  annotations:
    # nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io: /
spec:
  rules:
    - host: jupyter-lab.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jupyter-lab-cip
                port:
                  number: 80
This is the deployment:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jupyter-lab-dpt
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jupyter-lab
  template:
    metadata:
      labels:
        app: jupyter-lab
    spec:
      volumes:
        - name: jupyter-lab-home
          persistentVolumeClaim:
            claimName: jupyter-lab-pvc
      containers:
        - name: jupyter-lab
          image: docker.io/jupyter/tensorflow-notebook
          ports:
            - containerPort: 8888
          volumeMounts:
            - name: jupyter-lab-home
              mountPath: /var/jupyter-lab_home
          env:
            - name: "JUPYTER_ENABLE_LAB"
              value: "yes"
I can successfully access jupyter-lab by its NodePort at http://node2:30004 with this definition:
---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-lab-nodeport
  namespace: default
spec:
  type: NodePort
  ports:
    - port: 10003
      targetPort: 8888
      nodePort: 30004
  selector:
    app: jupyter-lab
How can I get ingress to my jupyter-lab at http://jupyter-lab.local ???
The command kubectl get endpoints -n ingress-nginx ingress-nginx-controller-admission returns:
ingress-nginx-controller-admission 10.244.2.4:8443 15m
Am I misconfiguring ports?
Are my "selector: app: ..." definitions wrong?
Am I missing a part?
How can I debug what's going on?
Other details
I was getting this error when applying an ingress with kubectl apply -f default-ingress.yml:
Error from server (InternalError): error when creating "minnimal-ingress.yml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s": context deadline exceeded
The command kubectl delete validatingwebhookconfigurations --all-namespaces
removed the validating webhook ... was that wrong to do?
I've opened port 8443 on each node in the cluster

Your Ingress is invalid; try the following:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jupyter-lab-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: jupyter-lab.local
      http:  # <- removed the -
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                # name: jupyter-lab-cip
                name: jupyter-lab-nodeport
                port:
                  number: 8888
---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-lab-cip
  namespace: default
spec:
  type: ClusterIP
  ports:
    - port: 8888
      targetPort: 8888
  selector:
    app: jupyter-lab
If I understand correctly, you are trying to expose jupyter-lab through the nginx ingress proxy and make it accessible on port 80.
Run the following command to check which NodePorts the nginx ingress service uses:
$ kubectl get svc -n ingress-nginx ingress-nginx-controller
NAME                       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller   NodePort   10.96.240.73   <none>        80:30816/TCP,443:31475/TCP   3h30m
In my case that is port 30816 (for http) and 31475 (for https).
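As a quick check (a sketch; substitute one of your own node names and the http NodePort reported above), you can already reach the ingress through that port by supplying the Host header:
$ curl -H 'Host: jupyter-lab.local' http://node2.local:30816/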
With the NodePort type you can only use ports in the range 30000-32767 (k8s docs: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport). You can change this range with the kube-apiserver flag --service-node-port-range, setting it to e.g. 80-32767, and then set nodePort: 80 in your ingress-nginx-controller service:
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 0.44.0
    helm.sh/chart: ingress-nginx-3.23.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      nodePort: 80  # <- HERE
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      nodePort: 443  # <- HERE
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
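For reference, on a kubeadm-provisioned cluster (an assumption; paths differ on other setups) the flag goes into the kube-apiserver static pod manifest on the control-plane node, and the kubelet restarts the apiserver automatically when the file changes:
# /etc/kubernetes/manifests/kube-apiserver.yaml (on node1.local)
spec:
  containers:
    - command:
        - kube-apiserver
        - --service-node-port-range=80-32767   # <- add this flag
        # ...existing flags unchanged...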
Note that changing service-node-port-range is generally not advised, since you may encounter issues if you use ports that are already open on the nodes (e.g. port 10250, which the kubelet opens on every node).
What might be a better solution is to use MetalLB.
EDIT:
How can I get ingress to my jupyter-lab at http://jupyter-lab.local ???
Assuming you don't need a failure-tolerant solution, download the https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml file and change the ports: section of the Deployment object as follows:
ports:
  - name: http
    containerPort: 80
    hostPort: 80  # <- add this line
    protocol: TCP
  - name: https
    containerPort: 443
    hostPort: 443  # <- add this line
    protocol: TCP
  - name: webhook
    containerPort: 8443
    protocol: TCP
and apply the changes:
kubectl apply -f deploy.yaml
Now run:
$ kubectl get po -n ingress-nginx ingress-nginx-controller-<HERE PLACE YOUR HASH> -owide
NAME                                        READY   STATUS    RESTARTS   AGE   IP           NODE          NOMINATED NODE   READINESS GATES
ingress-nginx-controller-67897c9494-c7dwj   1/1     Running   0          97s   172.17.0.6   <node_name>   <none>           <none>
Notice the <node_name> in the NODE column. This is the node where the pod got scheduled. Now take this node's IP and add it to your /etc/hosts file.
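For example, if the controller pod landed on node2.local and that node's IP were 192.168.1.102 (a hypothetical address), the entry would be:
192.168.1.102   jupyter-lab.local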
It should work now (go to http://jupyter-lab.local to check), but this solution is fragile: if the nginx ingress controller pod gets rescheduled to another node, it will stop working (and it will stay like this until you change the IP in the /etc/hosts file). It's also generally not advised to use the hostPort: field unless you have a very good reason to do so, so don't abuse it.
If you need a failure-tolerant solution, use MetalLB and create a service of type LoadBalancer for the nginx ingress controller.
I haven't tested it but the following should do the job, assuming that you correctly configured MetalLB:
kubectl delete svc -n ingress-nginx ingress-nginx-controller
kubectl expose deployment -n ingress-nginx ingress-nginx-controller --type LoadBalancer
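For completeness, a minimal MetalLB layer-2 configuration would look something like this (a sketch for the ConfigMap-based MetalLB v0.9.x releases; the address range is an assumption you must adapt to free IPs on your network):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2
        addresses:
          - 192.168.1.240-192.168.1.250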

Related

Why are requests to the deployment not working via the service and via ingress?

I installed minikube version v1.29.0 on macOS.
I created an API endpoint with Flask and built it into a Docker image:
FROM debian:latest
COPY . /app
WORKDIR /app
# note: debian:latest does not ship pip; it must be installed first
RUN apt-get update && apt-get install -y python3-pip
RUN pip3 install --no-cache-dir -r requirements.txt
CMD ["uwsgi", "--socket", "0.0.0.0:5001", "--protocol=http", "-w", "wsgi:app", "--ini", "wsgi.ini"]
then loaded the Docker image into minikube:
minikube image load drnoreg/devops_blog:0.0.1
and checked the minikube image list:
% minikube image ls
docker.io/drnoreg/devops_blog:0.0.1
Then I created the deployment, service and ingress YAML:
app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: devops-blog
spec:
  selector:
    matchLabels:
      run: devops-blog
  replicas: 1
  template:
    metadata:
      labels:
        run: devops-blog
    spec:
      containers:
        - name: devops-blog
          image: docker.io/drnoreg/devops_blog:0.0.1
          ports:
            - name: pod-port
              containerPort: 5001
---
apiVersion: v1
kind: Service
metadata:
  name: devops-blog
  labels:
    run: devops-blog
spec:
  type: NodePort
  ports:
    - name: pod-port
      port: 5001
      targetPort: 5001
      protocol: TCP
      nodePort: 30001
  selector:
    run: devops-blog
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: devops-blog
  namespace: devops-blog
spec:
  rules:
    - host: devops-blog.cluster.local
      http:
        paths:
          - pathType: ImplementationSpecific
            backend:
              service:
                name: devops-blog
                port:
                  number: 5001
Then I created the namespace:
kubectl create namespace devops-blog
set the current namespace:
kubectl config set-context --current --namespace=devops-blog
and created the deployment, service and ingress:
kubectl create -f app.yaml
After that I tried port-forwarding to check that the Flask API works:
kubectl port-forward devops-blog-f666d8cd7-njp95 5001:5001
Forwarding from 127.0.0.1:5001 -> 5001
Forwarding from [::1]:5001 -> 5001
Handling connection for 5001
Handling connection for 5001
The Flask API service in minikube is working:
% kubectl get service -n devops-blog -o wide
NAME          TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE   SELECTOR
devops-blog   NodePort   10.99.37.126   <none>        5001:30001/TCP   45s   run=devops-blog
% kubectl get pod -n devops-blog -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
devops-blog-f666d8cd7-b9n7j   1/1     Running   0          57s   10.244.0.34   minikube   <none>           <none>
% kubectl get node -n devops-blog -o wide
NAME       STATUS   ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
minikube   Ready    control-plane   16h   v1.26.1   192.168.49.2   <none>        Ubuntu 20.04.5 LTS   5.10.47-linuxkit    docker://20.10.23
Now I try to reach the API via the minikube service:
% telnet 192.168.49.2 30001
Trying 192.168.49.2...
Not working.
I added this to /etc/hosts:
127.0.0.1 devops-blog.cluster.local
and tried to reach the API via the minikube ingress:
% telnet devops-blog.cluster.local 80
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
Not working either.
Why are requests to the deployment failing both via the service and via ingress?
How do I solve this problem?
In case you did not enable the ingress addon, try enabling it by executing the following command:
$ minikube addons enable ingress
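Before applying your manifests you can verify that the controller pod is running (on recent minikube versions the addon deploys into the ingress-nginx namespace):
$ kubectl get pods -n ingress-nginx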
Instead of the NodePort service, try using a ClusterIP service for the app, and when you create the ingress give this service as the backend, like this:
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: devops-blog
  labels:
    run: devops-blog
spec:
  type: ClusterIP
  ports:
    - name: pod-port
      port: 5001
      targetPort: 5001
      protocol: TCP
  selector:
    run: devops-blog
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: devops-blog
  namespace: devops-blog
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"  # since you are using localhost
spec:
  rules:
    - host: devops-blog.cluster.local
      http:
        paths:
          - pathType: ImplementationSpecific
            path: /
            backend:
              service:
                name: devops-blog
                port:
                  number: 5001
Once the ingress has been assigned an IP, try opening http://devops-blog.cluster.local/ in a local browser, or curl it: curl http://devops-blog.cluster.local/.
Note: in case you are deploying this app in the cloud, try a LoadBalancer service instead.
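A minimal sketch of that variant, reusing the names from the manifests above (only the service type changes; the cloud provider then provisions the external load balancer):
apiVersion: v1
kind: Service
metadata:
  name: devops-blog
  labels:
    run: devops-blog
spec:
  type: LoadBalancer
  ports:
    - name: pod-port
      port: 5001
      targetPort: 5001
      protocol: TCP
  selector:
    run: devops-blog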
Try this tutorial, as it explains the setup in detail.

Ingress fanout gets Error: Server Error The server encountered a temporary error and could not complete your request. Please try again in 30 seconds

I have 2 services and I want to create an ingress fanout for them. One service runs properly; the other one says:
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
I opened all firewall settings. Here are some details:
bahaddin@b k get ingress -n ingress-nginx
NAME             CLASS    HOSTS   ADDRESS        PORTS   AGE
fanout-ingress   <none>   *       35.201.67.49   80      2m57s
bahaddin@bahaddin-ThinkPad-E15-Gen-2:~/projects/personal/exposer/k8s-ingress$ k get svc -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.12.1.185    34.134.200.199   80:30803/TCP,443:32306/TCP   10m
ingress-nginx-controller-admission   ClusterIP      10.12.0.83     <none>           443/TCP                      10m
web                                  NodePort       10.12.12.129   <none>           8080:30702/TCP               7m43s
web2                                 NodePort       10.12.5.55     <none>           8080:30160/TCP               7m42s
Here you can see that the external IP of ingress-nginx-controller (34.134.200.199) is different from the fanout-ingress address. When fanout-ingress was first created it had the same IP as ingress-nginx-controller (34.134.200.199); I cannot understand why it gained the new IP (35.201.67.49) a few seconds later. I would be happy to get an answer to this as well.
The main question is: when I curl http://35.201.67.49/v2/ I get the result properly, but http://35.201.67.49/web1/hello, which is the other service, defined in the service, ingress and deployment files below, is not reachable. When I curl 35.201.67.49/web1/hello I get an error:
Error: Server Error. The server encountered a temporary error and could not complete your request. Please try again in 30 seconds.
fanout-ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
  namespace: ingress-nginx
spec:
  rules:
    - http:
        paths:
          - path: /web1/*
            backend:
              serviceName: web
              servicePort: 8080
          - path: /v2/*
            backend:
              serviceName: web2
              servicePort: 8080
deployment and services
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      run: web
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
        - image: bago1/web1:latest
          imagePullPolicy: IfNotPresent
          name: web
          ports:
            - containerPort: 8080
              protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web2
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      run: web2
  template:
    metadata:
      labels:
        run: web2
    spec:
      containers:
        - image: gcr.io/google-samples/hello-app:2.0
          imagePullPolicy: IfNotPresent
          name: web2
          ports:
            - containerPort: 8080
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: ingress-nginx
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    run: web
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: web2
  namespace: ingress-nginx
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    run: web2
  type: NodePort

Kubernetes Load Balancer is not accessible

I am trying to host the below Kubernetes deployment (frontend) in an AWS EKS cluster. After deploying the deployment and creating the service and ingress, everything deploys successfully, but when I try to access the Load Balancer DNS from outside, the LoadBalancer is not accessible.
Can someone please point out the reason?
The code below (deployment-2048) works and its Load Balancer is accessible, but that is not the case for the frontend deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2021-03-09T14:08:45Z"
  generation: 2
  name: frontend
  namespace: default
  resourceVersion: "2864"
  uid: a7682f3b-dffa-498f-be47-b231cce0720a
spec:
  minReadySeconds: 20
  progressDeadlineSeconds: 600
  replicas: 4
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      name: webapp
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: webapp
    spec:
      containers:
        - image: kodekloud/webapp-color:v2
          imagePullPolicy: IfNotPresent
          name: simple-webapp
          ports:
            - containerPort: 80
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 4
  observedGeneration: 2
  readyReplicas: 4
  replicas: 4
  updatedReplicas: 4
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: service-1
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: LoadBalancer
  selector:
    name: webapp
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: default
  name: ingress-1
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: service-1
              servicePort: 80
---
apiVersion: v1
kind: Namespace
metadata:
  name: game-2048
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: game-2048
  name: deployment-2048
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app-2048
  replicas: 5
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app-2048
    spec:
      containers:
        - image: alexwhen/docker-2048
          imagePullPolicy: Always
          name: app-2048
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: game-2048
  name: service-2048
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: app-2048
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: game-2048
  name: ingress-2048
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: service-2048
              servicePort: 80
In your original question (without the edits and additional information), the frontend deployment had misconfigured port values.
port exposes the Kubernetes service on the specified port within the cluster; other pods in the cluster can communicate with the service on that port.
targetPort is the port the service sends requests to, and the one your pod must be listening on; the application in the container needs to listen on this port as well.
containerPort defines the port on which the app can be reached inside the container.
In short: containerPort in the deployment must have the same value as targetPort in the service.
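To make that concrete, a minimal matching pair looks like this (illustrative values; the only requirement is that the two marked ports agree):
# deployment pod template
ports:
  - containerPort: 8080   # the app listens here
# service
ports:
  - port: 80              # cluster-facing port clients use
    targetPort: 8080      # must equal the containerPort above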
User #herbertgoto had a good idea, but unfortunately didn't specify exactly what should be done. When you changed containerPort from 8080 to 80 it should have worked, but I guess there was an issue with propagating this change to all resources (recreating the ingress resource, redeploying the pod).
One of the first troubleshooting steps should be to check whether your container is listening on the proper port. That's why I asked for the $ netstat output.
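Something along these lines, run inside the pod, shows what is actually bound (assuming netstat or ss is present in the image):
$ kubectl exec -it <your-pod> -- netstat -tlnp   # or: ss -tlnp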
Another useful check is $ kubectl get ep, which lists service endpoints.
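If the service selector matches a ready pod, you'll see pod addresses listed; an empty ENDPOINTS column means the selector and the pod labels don't line up (illustrative output):
$ kubectl get ep service-1
NAME        ENDPOINTS                     AGE
service-1   10.244.1.5:80,10.244.2.7:80   5m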
Note: if you skip targetPort in the service and set only port, Kubernetes automatically sets targetPort to the same value as port.
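So these two port definitions are equivalent:
ports:
  - port: 80
# ...is the same as...
ports:
  - port: 80
    targetPort: 80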
When I kept only containerPort and targetPort at 8080, and the others at 80, why did it work like this?
Because 80 is the default HTTP port: when you create a service with port 80, clients don't need to specify a port at all. When you set the service port to 8080, you also need to specify it explicitly.
I've created a service with port 8080 on my GKE cluster.
$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
kubernetes   ClusterIP      10.104.0.1      <none>          443/TCP          153m
my-nginx     LoadBalancer   10.104.14.137   34.91.230.207   8080:31311/TCP   9m39s
$ curl 34.91.230.207
curl: (7) Failed to connect to 34.91.230.207 port 80: Connection timed out
$ curl 34.91.230.207:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
...
Responses from the browser (screenshots):
- external IP only
- ExternalIP:8080
As you can see, I needed to specify port 8080 in the browser and in the curl command because I wasn't using the default port 80.
Conclusions:
Deployment containerPort and Service targetPort must have the same value. When you use a service with a port other than 80, you need to specify it by adding :<portNumber> to the URL. That's why in almost all guides on the internet you see a service port with value 80.
containerPort is set to 8080 and service targetPort is 80

Though the external IP is resolved, the website returns connection timed out in Kubernetes GKE

I have created a k8s deployment and service YAML for a static website. The external IP address is resolved by the Kubernetes service, but when I try to access the website through curl or a browser, it returns connection timed out.
Dockerfile:
FROM nginx:alpine
COPY . /usr/share/nginx/html
K8s deployment yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ohno-website
  labels:
    app: ohno-website
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ohno-website
  template:
    metadata:
      labels:
        app: ohno-website
    spec:
      containers:
        - name: ohno-website
          image: gkganeshr/ohno-website:v0.1
          imagePullPolicy: Always
          ports:
            - containerPort: 80
k8s service yml:
apiVersion: v1
kind: Service
metadata:
  name: ohno-website
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  selector:
    app: ohno-website
ohno_fooserver@cloudshell:~ (fourth-webbing-279817)$ kubectl get svc
NAME           TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
kubernetes     ClusterIP      10.16.0.1      <none>          443/TCP        8h
ohno-website   LoadBalancer   10.16.12.162   34.70.213.174   80:31977/TCP   7h4m
The target port defined in the service definition YAML is incorrect. It should match the containerPort from the pod definition in the deployment YAML.
targetPort: 9376
should be changed to
targetPort: 80
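The corrected service in full, with everything else unchanged:
apiVersion: v1
kind: Service
metadata:
  name: ohno-website
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: ohno-website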

Unable to access exposed port on kubernetes

I have build a custom tcserver image exposing port 80 8080 and 8443. Basically you have an apache and inside the configuration you have a proxy pass to forward it to the tcserver tomcat.
EXPOSE 80 8080 8443
After that I created a kubernetes yaml to build the pod exposing only port 80.
apiVersion: v1
kind: Pod
metadata:
  name: tcserver
  namespace: default
spec:
  containers:
    - name: tcserver
      image: tcserver-test:v1
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
And the service along with it.
apiVersion: v1
kind: Service
metadata:
  name: tcserver-svc
  labels:
    app: tcserver
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080
  selector:
    app: tcserver
But the problem is that I'm unable to access it.
If I log in to the pod (kubectl exec -it tcserver -- /bin/bash), I'm able to do curl -k -v http://localhost and it replies.
I believe I'm doing something wrong with the service, but I don't know what.
Any help will be appreciated.
SVC change
As suggested by sfgroups, I added targetPort: 80 to the svc, but it's still not working.
When I try to curl the IP, I get a No route to host:
[root@testmaster tcserver]# curl -k -v http://172.30.62.162:30080/
* About to connect() to 172.30.62.162 port 30080 (#0)
*   Trying 172.30.62.162...
* No route to host
* Failed connect to 172.30.62.162:30080; No route to host
* Closing connection 0
curl: (7) Failed connect to 172.30.62.162:30080; No route to host
This is the describe from the svc:
[root@testmaster tcserver]# kubectl describe svc tcserver-svc
Name:              tcserver-svc
Namespace:         default
Labels:            app=tcserver
Annotations:       <none>
Selector:          app=tcserver
Type:              NodePort
IP:                172.30.62.162
Port:              <unset>  80/TCP
NodePort:          <unset>  30080/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
When you look at the kubectl describe service output, you'll see it's not actually attached to any pods:
Endpoints: <none>
That's because you say in the service spec that the service will attach to pods labeled with app: tcserver
spec:
  selector:
    app: tcserver
But, in the pod spec's metadata, you don't specify any labels at all
metadata:
  name: tcserver
  namespace: default
  # labels: {}
And so the fix here is to add to the pod spec the appropriate label
metadata:
  labels:
    app: tcserver
Also note that it's a little unusual in practice to deploy a bare pod. Usually pods are wrapped in a higher-level controller, most often a Deployment, which actually creates the pods. The Deployment spec contains a template pod spec, and it's that pod template's labels that matter.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcserver
  # Labels here are useful, but the service doesn't look for them
spec:
  template:
    metadata:
      labels:
        # These labels are what the service cares about
        app: tcserver
    spec:
      containers: [...]
I see the target port is missing; can you add the target port and test?
apiVersion: v1
kind: Service
metadata:
  name: tcserver-svc
  labels:
    app: tcserver
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080
      targetPort: 80
  selector:
    app: tcserver