How to allow a TCP service (not HTTP) on a custom port inside Kubernetes

I have a container running an OPC-server on port 4840. I am trying to configure my microk8s to allow my OPC-Client to connect to port 4840. Here are examples of my deployment and service:
(No namespace is defined here because these are deployed through Azure Pipelines, and that is where the namespace is set; the namespace for the deployment and service is "jawcrusher".)
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jawcrusher
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jawcrusher
  strategy: {}
  template:
    metadata:
      labels:
        app: jawcrusher
    spec:
      volumes:
        - name: jawcrusher-config
          configMap:
            name: jawcrusher-config
      containers:
        - image: XXXmicrok8scontainerregistry.azurecr.io/jawcrusher:#{Version}#
          name: jawcrusher
          ports:
            - containerPort: 4840
          volumeMounts:
            - name: jawcrusher-config
              mountPath: "/jawcrusher/config/config.yml"
              subPath: "config.yml"
      imagePullSecrets:
        - name: acrsecret
service.yml
apiVersion: v1
kind: Service
metadata:
  name: jawcrusher-service
spec:
  ports:
    - name: 7070-4840
      port: 7070
      protocol: TCP
      targetPort: 4840
  selector:
    app: jawcrusher
  type: ClusterIP
status:
  loadBalancer: {}
I am using a k8s client called Lens, and in this client there is a feature to forward local ports to the service. If I do this I can connect to the OPC-Server with my OPC-Client using the URL localhost:4840. To me that indicates that the service and deployment are set up correctly.
So now I want to tell microk8s to serve my OPC-Server on port 4840 "externally". So, for example, if my DNS name for the server is microk8s.xxxx.internal I would like to connect with my OPC-Client to microk8s.xxxx.internal:4840.
I have followed this tutorial as much as I can: https://minikube.sigs.k8s.io/docs/tutorials/nginx_tcp_udp_ingress/.
It says to update the TCP configuration for the ingress; this is how it looks after I updated it:
nginx-ingress-tcp-microk8s-conf:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-tcp-microk8s-conf
  namespace: ingress
  uid: a32690ac-34d2-4441-a5da-a00ec52d308a
  resourceVersion: '7649705'
  creationTimestamp: '2023-01-12T14:12:07Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"name":"nginx-ingress-tcp-microk8s-conf","namespace":"ingress"}}
  managedFields:
    - manager: kubectl-client-side-apply
      operation: Update
      apiVersion: v1
      time: '2023-01-12T14:12:07Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
    - manager: kubectl-patch
      operation: Update
      apiVersion: v1
      time: '2023-02-14T07:50:30Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:data:
          .: {}
          f:4840: {}
  selfLink: /api/v1/namespaces/ingress/configmaps/nginx-ingress-tcp-microk8s-conf
data:
  '4840': jawcrusher/jawcrusher-service:7070
binaryData: {}
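For reference, the only part of that object that actually matters here is the data entry; stripped of the server-managed fields, the ConfigMap boils down to roughly this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-tcp-microk8s-conf
  namespace: ingress
data:
  # external port 4840 -> namespace/service:port
  '4840': jawcrusher/jawcrusher-service:7070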
It also says to update a deployment called ingress-nginx-controller, but in microk8s it seems to be a daemonset called nginx-ingress-microk8s-controller. This is what it looks like after adding a new port:
nginx-ingress-microk8s-controller:
spec:
  containers:
    - name: nginx-ingress-microk8s
      image: registry.k8s.io/ingress-nginx/controller:v1.2.0
      args:
        - /nginx-ingress-controller
        - '--configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf'
        - >-
          --tcp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-tcp-microk8s-conf
        - >-
          --udp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-udp-microk8s-conf
        - '--ingress-class=public'
        - ' '
        - '--publish-status-address=127.0.0.1'
      ports:
        - name: http
          hostPort: 80
          containerPort: 80
          protocol: TCP
        - name: https
          hostPort: 443
          containerPort: 443
          protocol: TCP
        - name: health
          hostPort: 10254
          containerPort: 10254
          protocol: TCP
        #### THIS IS WHAT I ADDED ####
        - name: jawcrusher
          hostPort: 4840
          containerPort: 4840
          protocol: TCP
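For reference, the same edit can also be made from the command line rather than through a GUI client (a sketch, assuming the ingress namespace used for the ConfigMap above):
# open the daemonset in an editor and add the extra entry under the container's ports
kubectl -n ingress edit daemonset nginx-ingress-microk8s-controller
# the controller pods get recreated; check that they come back up with the new hostPort
kubectl -n ingress get pods -o wide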
After I updated the daemonset it restarts all the pods. The port seems to be open; if I run this command it outputs:
Test-NetConnection -ComputerName microk8s.xxxx.internal -Port 4840

ComputerName     : microk8s.xxxx.internal
RemoteAddress    : 10.161.64.124
RemotePort       : 4840
InterfaceAlias   : Ethernet 2
SourceAddress    : 10.53.226.55
TcpTestSucceeded : True
Before I did the changes it said TcpTestSucceeded: False.
But the OPC-Client cannot connect. It just says:
Could not connect to server: BadCommunicationError.
Does anyone see if I made a mistake somewhere, or know how to do this in microk8s?
Update 1:
I see an error message in the ingress-daemonset-pod logs when I try to connect to the server with my OPC-Client:
2023/02/15 09:57:32 [error] 999#999: *63002 connect() failed (111: Connection refused) while connecting to upstream, client: 10.53.225.232, server: 0.0.0.0:4840, upstream: "10.1.98.125:4840", bytes from/to client:0/0, bytes from/to upstream:0/0
10.53.225.232 is the client machine's IP address and 10.1.98.125 is the IP address of the pod running the OPC-server.
So it seems like it has understood that external port 4840 should be proxied/forwarded to my service which in turn points to the OPC-server-pod. But why do I get an error...
Update 2:
Just to clarify: if I run the kubectl port-forward command and point it to my service it works, but not if I try to connect directly to port 4840. So for example this works:
kubectl port-forward service/jawcrusher-service 5000:4840 -n jawcrusher --address='0.0.0.0'
This allows me to connect with my OPC-client to the server on port 5000.

You should simply do a port forward from your local port x to your service/deployment/pod on port y with the kubectl command.
Let's say you have a NATS streaming server in your k8s cluster; it's using TCP over port 4222. Your command in that case would be:
kubectl port-forward service/nats 4222:4222
In this case it will forward all traffic on localhost over port 4222 to the service named nats inside your cluster on port 4222. Instead of a service you could forward to a specific pod or deployment, as shown below...
Use kubectl port-forward -h to see all your options...
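For example, forwarding to a deployment or to a single pod instead of the service would look roughly like this (the nats names are just the hypothetical example from above):
# forward local 4222 to port 4222 on a pod managed by the nats deployment
kubectl port-forward deployment/nats 4222:4222
# or target one specific pod directly (pod name is hypothetical)
kubectl port-forward pod/nats-0 4222:4222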
In case you are using k3d to set up k3s in Docker or Rancher Desktop, you could add the port parameter to your k3d command:
k3d cluster create k3s --registry-create registry:5000 -p 8080:80@loadbalancer -p 4222:4222@server:0

The problem was never with microk8s or the ingress configuration. The problem was that my server was bound to the loopback address (127.0.0.1).
When I changed the configuration so the server listened on 0.0.0.0 instead of 127.0.0.1, it started working.
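A quick way to spot this kind of problem is to check which address the server process is bound to inside the pod (a sketch, assuming a shell and ss are available in the container image): 127.0.0.1:4840 means loopback only, while 0.0.0.0:4840 or *:4840 means all interfaces. This also explains why kubectl port-forward worked: the forward connects to the port from inside the pod's own network namespace, so a loopback-bound server still answers, while the ingress connects to the pod IP and is refused.
# list listening TCP sockets and the addresses they are bound to
kubectl -n jawcrusher exec deploy/jawcrusher -- ss -tlnp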

Related

Tunneling from kubernetes to dev machine via Headless Service and Endpoint

I'm trying to use a headless service with an endpoint to forward traffic from within my cluster to my local development machine. I want to listen on port 80 on the service and call port 5002 on the endpoint. I have it set up like so:
Headless Service (listening on port 80 with a targetPort of 5002):
Endpoint (pointing to my development computer on port 5002):
When I try to curl http://web:80 from any pod in my cluster on port 80 it times out. If I curl http://web:5002 it successfully goes through and hits my development machine. Shouldn't the targetPort make the request to web:80 go to my endpoint on port 5002?
curl web:80
curl web:5002
Some additional info:
My cluster and dev machine are in the same local network
I'm using K3S on the cluster
I'm just trying to emulate what Bridge For Kubernetes does
Here is the manifest yaml:
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  clusterIP: None
  ports:
    - name: web
      port: 80
      targetPort: 5002
---
apiVersion: v1
kind: Endpoints
metadata:
  name: web
  namespace: default
subsets:
  - addresses:
      - ip: $HOST_IP
    ports:
      - name: web
        port: 5002
        protocol: TCP
I managed to get it to work by removing the clusterIP: None. (With a headless service the DNS name resolves straight to the endpoint IPs and nothing translates port 80 to the targetPort, which is presumably why web:80 timed out.) My manifest now looks like this:
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP
  ports:
    - name: web
      port: 80
      targetPort: 5002
---
apiVersion: v1
kind: Endpoints
metadata:
  name: web
subsets:
  - addresses:
      - ip: $HOST_IP
    ports:
      - name: web
        port: 5002
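A quick check from inside the cluster (the curl image name is just an assumption; any image with curl works) would be:
# run a throwaway pod and hit the service on the service port
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -sv http://web:80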

Access pod from another pod with kubernetes url

I have two pods created with a deployment and a service. My problem is as follows: the pod "my-gateway" accesses the URL "adm-contact" at "http://127.0.0.1:3000/adm-contact", which should reach another pod called "my-adm-contact". How can I make this work? I tried the following command: kubectl port-forward my-gateway-5b85498f7d-5rwnn 3000:3000 8879:8879 but it gives this error:
E0526 21:56:34.024296 12428 portforward.go:400] an error occurred forwarding 3000 -> 3000: error forwarding port 3000 to pod 2d5811c20c3762c6c249a991babb71a107c5dd6b080c3c6d61b4a275b5747815, uid : exit status 1: 2022/05/27 00:56:35 socat[2494] E connect(16, AF=2 127.0.0.1:3000, 16): Connection refused
Note that the images built from the Dockerfile have EXPOSE 3000 8879.
Here are my YAMLs:
Deployment my-adm-contact:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  selector:
    matchLabels:
      run: my-adm-contact
  template:
    metadata:
      labels:
        run: my-adm-contact
    spec:
      containers:
        - name: my-adm-contact
          image: my-contact-adm
          imagePullPolicy: Never
          ports:
            - containerPort: 8879
              hostPort: 8879
              name: admcontact8879
          readinessProbe:
            httpGet:
              path: /adm-contact
              port: 8879
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 6
Service my-adm-contact:
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact
  labels:
    run: my-adm-contact
spec:
  selector:
    app: my-adm-contact
  ports:
    - name: 8879-my-adm-contact
      port: 8879
      protocol: TCP
      targetPort: 8879
  type: LoadBalancer
status:
  loadBalancer: {}
Deployment my-gateway:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  selector:
    matchLabels:
      run: my-gateway
  template:
    metadata:
      labels:
        run: my-gateway
    spec:
      containers:
        - name: my-gateway
          image: api-gateway
          imagePullPolicy: Never
          ports:
            - containerPort: 3000
              hostPort: 3000
              name: home
            #- containerPort: 8879
            #  hostPort: 8879
            #  name: adm
          readinessProbe:
            httpGet:
              path: /adm-contact
              port: 8879
              path: /
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 6
Service my-gateway:
apiVersion: v1
kind: Service
metadata:
  name: my-gateway
  labels:
    run: my-gateway
spec:
  selector:
    app: my-gateway
  ports:
    - name: 3000-my-gateway
      port: 3000
      protocol: TCP
      targetPort: 3000
    - name: 8879-my-gateway
      port: 8879
      protocol: TCP
      targetPort: 8879
  type: LoadBalancer
status:
  loadBalancer: {}
What k8s-cluster environment are you running this in? I ask because the service.type of LoadBalancer is a special kind: at pod initialisation your cloud provider's admission controller will spot this and add in a load balancer config. See https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
If you're not deploying this in a suitable cloud environment, your services won't do anything.
I had a quick look at your SO profile and - sorry if this is presumptuous, I don't mean to be - it looks like you're relatively new to k8s. You shouldn't need to do any port-forwarding/kubectl proxying, and this should be a lot simpler than you might think.
When you create a service k8s will 'create' a DNS entry for you which points to the pod(s) specified by your selector.
I think you're trying to reach a setup where code running in my-gateway pod can connect to http://adm-contact on port 3000 and reach a listening service on the adm-contact pod. Is that correct?
If so, the outline solution is to expose tcp/3000 in the adm-contact pod, and create a service called adm-contact that has a selector for adm-contact pod.
This is a sample manifest I've just created which runs nginx and then creates a service for it, allowing any pod on the cluster to connect to it, e.g. curl http://nginx-service.default.svc. In this example I'm exposing port 80 because I didn't want to have to modify the nginx config, but the principle is the same.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  containers:
    - image: nginx
      imagePullPolicy: Always
      name: nginx
      ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
The k8s docs on Services are pretty helpful if you want more https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
A service can be reached by its own name from pods in its namespace:
so a service foo in namespace bar can be reached at http://foo from a pod in namespace bar.
From other namespaces that service is reachable at http://foo.bar.svc.cluster.local. Change out the service name and namespace for your use case.
k8s dns is explained here in the docs:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
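A quick way to see this in action (foo and bar are just the placeholders from above) is to resolve the service name from a throwaway pod:
# run a temporary busybox pod and resolve the service DNS name
kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup foo.bar.svc.cluster.local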
I have taken the YAML you provided and assembled it here.
From another comment I see the URL you're trying to connect to is: http://gateway-service.default.svc.cluster.local:3000/my-adm-contact-service
The ability to resolve service names to pods only functions inside the cluster: coredns (a k8s pod) is the part which recognises when a service has been created and what IP(s) it's available at.
So another pod in the cluster e.g. one created by kubectl run bb --image=busybox -it -- sh would be able to resolve the command ping gateway-service, but pinging gateway-service from your desktop will fail because they're not both seeing the same DNS.
The api-gateway container will be able to connect to my-adm-contact-service on ports 3000 or 8879, and the my-adm-contact container will equally be able to connect to gateway-service on port 3000 - but only when those containers are running inside the cluster.
I think you're trying to access this from outside the cluster, so now that the port/service types are correct you could re-try kubectl port-forward svc/gateway-service 3000:3000. This will let you connect to 127.0.0.1:3000 and the traffic will be routed to port 3000 on the api-gateway container.
If you need to proxy to the other my-adm-contact-service then you'll have to issue similar kubectl commands in other shells, one per service:port combination. For completeness, if you wanted to route traffic from your local machine to all three container/port sets, you'd run:
# format kubectl port-forward svc/name src:dest (both TCP)
kubectl port-forward svc/gateway-service 3000:3000
kubectl port-forward svc/my-adm-contact-service 8879:8879
kubectl port-forward svc/my-adm-contact-service 3001:3000 #NOTE the changed local port, because localhost:3000 is already used
You will need a new shell for each kubectl, or run it as a background job.
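For example, in a single shell you could background them instead (a sketch; add -n <namespace> if the services are not in default):
kubectl port-forward svc/gateway-service 3000:3000 &
kubectl port-forward svc/my-adm-contact-service 8879:8879 &
kubectl port-forward svc/my-adm-contact-service 3001:3000 &
# list the background forwards, and kill them when you are done
jobs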
apiVersion: v1
kind: Pod
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  containers:
    - image: my-contact-adm
      imagePullPolicy: Never
      name: my-adm-contact
      ports:
        - containerPort: 8879
          protocol: TCP
        - containerPort: 3000
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact-service
spec:
  ports:
    - port: 8879
      protocol: TCP
      targetPort: 8879
      name: adm8879
    - port: 3000
      protocol: TCP
      targetPort: 3000
      name: adm3000
  selector:
    app: my-adm-contact
  type: ClusterIP
---
apiVersion: v1
kind: Pod
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  containers:
    - image: api-gateway
      imagePullPolicy: Never
      name: my-gateway
      ports:
        - containerPort: 3000
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: gateway-service
spec:
  ports:
    - port: 3000
      protocol: TCP
      targetPort: 3000
  selector:
    app: my-gateway
  type: ClusterIP
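To try it out, save the manifest to a file (the filename here is arbitrary), apply it and check that everything comes up:
kubectl apply -f assembled.yaml
kubectl get pods,svc -o wide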

GCP health check failing for kubernetes pod

I'm trying to launch an application on GKE and the health checks made by the Ingress always fail.
Here's my full k8s yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tripvector
  labels:
    app: tripvector
spec:
  replicas: 1
  minReadySeconds: 60
  selector:
    matchLabels:
      app: tripvector
  template:
    metadata:
      labels:
        app: tripvector
    spec:
      containers:
        - name: tripvector
          readinessProbe:
            httpGet:
              port: 3000
              path: /healthz
            initialDelaySeconds: 30
            timeoutSeconds: 10
            periodSeconds: 11
          image: us-west1-docker.pkg.dev/triptastic-1542412229773/tripvector/tripvector:healthz2
          env:
            - name: ROOT_URL
              value: https://paymahn.tripvector.io/
            - name: MAIL_URL
              valueFrom:
                secretKeyRef:
                  key: MAIL_URL
                  name: startup
            - name: METEOR_SETTINGS
              valueFrom:
                secretKeyRef:
                  key: METEOR_SETTINGS
                  name: startup
            - name: MONGO_URL
              valueFrom:
                secretKeyRef:
                  key: MONGO_URL
                  name: startup
          ports:
            - containerPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tripvector
spec:
  defaultBackend:
    service:
      name: tripvector-np
      port:
        number: 60000
---
apiVersion: v1
kind: Service
metadata:
  name: tripvector-np
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: tripvector
  ports:
    - protocol: TCP
      port: 60000
      targetPort: 3000
This yaml should do the following:
make a deployment with my healthz2 image along with a readiness check at /healthz on port 3000 which is exposed by the image
launch a cluster IP service
launch an ingress
When I check the status of the service I see it's unhealthy:
❯❯❯ gcloud compute backend-services get-health k8s1-07274a01-default-tripvector-np-60000-a912870e --global
---
backend: https://www.googleapis.com/compute/v1/projects/triptastic-1542412229773/zones/us-central1-a/networkEndpointGroups/k8s1-07274a01-default-tripvector-np-60000-a912870e
status:
  healthStatus:
    - healthState: UNHEALTHY
      instance: https://www.googleapis.com/compute/v1/projects/triptastic-1542412229773/zones/us-central1-a/instances/gke-tripvector2-default-pool-78cf58d9-5dgs
      ipAddress: 10.12.0.29
      port: 3000
  kind: compute#backendServiceGroupHealth
It seems that the health check is hitting the right port, but this output doesn't confirm whether it's hitting the right path. If I look up the health check object in the console, it confirms the GKE health check is hitting the /healthz path.
I've verified in the following ways that the health check endpoint I'm using for the readiness probe works but something still isn't working properly:
exec into the pod and run wget
port forward the pod and check /healthz in my browser
port forward the service and check /healthz in my browser
In all three instances above, I can see the /healthz endpoint working. I'll outline each one below.
Here's the output from running wget within the pod:
❯❯❯ k exec -it tripvector-65ff4c4dbb-vwvtr /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/tripvector # ls
bundle
/tripvector # wget localhost:3000/healthz
Connecting to localhost:3000 (127.0.0.1:3000)
saving to 'healthz'
healthz 100% |************************************************************************************************************************************************************| 25 0:00:00 ETA
'healthz' saved
/tripvector # cat healthz
[200] Healthcheck passed./tripvector #
Here's what happens when I perform a port forward from the pod to my local machine:
❯❯❯ k port-forward tripvector-65ff4c4dbb-vwvtr 8081:3000
Forwarding from 127.0.0.1:8081 -> 3000
Forwarding from [::1]:8081 -> 3000
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
And here's what happens when I port forward from the Service object:
2:53PM /Users/paymahn/code/tripvector/tripvector ✘ 1 docker ⬆ ✱
❯❯❯ k port-forward svc/tripvector-np 8082:60000
Forwarding from 127.0.0.1:8082 -> 3000
Forwarding from [::1]:8082 -> 3000
Handling connection for 8082
How can I get the healthcheck for the ingress and network endpoint group to succeed so that I can access my pod from the internet?

Connecting to a Kubernetes service resulted in connection refused

I'm trying to deploy my web application using Kubernetes. I used Minikube to create the cluster and successfully exposed my frontend React app using an ingress. Yet when I put the backend service's URL in the "env" field of the frontend's deployment.yaml, it does not work. When I tried to connect to the backend service from the frontend pod, the connection was refused.
frontend deployment yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: <image_name>
          imagePullPolicy: Never
          ports:
            - containerPort: 80
          env:
            - name: REACT_APP_API_V1_ENDPOINT
              value: http://backend-svc
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: frontend
Ingress for frontend
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: front-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-read-timeout: "12h"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: front-testk.info
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc
                port:
                  number: 80
Backend deployment yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: backend
  labels:
    name: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: <image_name>
          ports:
            - containerPort: 80
          imagePullPolicy: Never
      restartPolicy: Always
---
kind: Service
apiVersion: v1
metadata:
  name: backend-svc
  labels:
    app: backend
spec:
  selector:
    app: backend
  ports:
    - name: http
      port: 80
      targetPort: 80
% kubectl get svc backend-svc -o wide
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR
backend-svc   ClusterIP   10.109.107.145   <none>        80/TCP    21h   app=backend
I connected to the frontend pod and tried to reach the backend using the env variable created during the deploy:
❯ kubectl exec frontend-75579c8499-x766s -it sh
/app # apk update && apk add curl
OK: 10 MiB in 20 packages
/app # env
REACT_APP_API_V1_ENDPOINT=http://backend-svc
/app # curl $REACT_APP_API_V1_ENDPOINT
curl: (7) Failed to connect to backend-svc port 80: Connection refused
/app # nslookup backend-svc
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: backend-svc.default.svc.cluster.local
Address: 10.109.107.145
** server can't find backend-svc.cluster.local: NXDOMAIN
Exec into my backend pod:
# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      1/node

# netstat -lnturp
Kernel IP routing table
Destination     Gateway         Genmask         Flags MSS Window  irtt Iface
0.0.0.0         172.17.0.1      0.0.0.0         UG      0 0          0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U       0 0          0 eth0
As I suspected, your application listens on port 8080. If you look closely at your output from netstat you will notice that the Local Address is 0.0.0.0:8080:
# netstat -tulpn
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 👉 0.0.0.0:8080         0.0.0.0:*               LISTEN      1/node
In order to fix that you have to correct your targetPort in your service:
kind: Service
apiVersion: v1
metadata:
  name: backend-svc
  labels:
    app: backend
spec:
  selector:
    app: backend
  ports:
    - name: http
      port: 80
      targetPort: 8080 # 👈 change this to 8080
There is no need to change the port on the deployment side since the containerPort is primarily informational. Not specifying a port there does not prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible.
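Once the service is corrected, a quick way to confirm from the frontend pod used earlier (it already has curl installed) would be:
# the service name should now answer on port 80
kubectl exec frontend-75579c8499-x766s -- curl -s http://backend-svc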

Unable to access service in Kubernetes

I've got this webserver config:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: webserver
          image: nginx:alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - name: hostvol
              mountPath: /usr/share/nginx/html
      volumes:
        - name: hostvol
          hostPath:
            path: /home/docker/vol
and this web service config:
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    run: web-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: webserver
I was expecting to be able to connect to the webserver via http://192.168.99.100:80 with this config but Chrome gives me a ERR_CONNECTION_REFUSED.
I tried minikube service --url web-service which gives http://192.168.99.100:30276 however this also has a ERR_CONNECTION_REFUSED.
Any further suggestions?
UPDATE
I updated the port / targetPort to 80.
However, I now get:
ERR_CONNECTION_REFUSED for http://192.168.99.100:80/
and
an nginx 403 for http://192.168.99.100:31540/
In your service, you can define a nodePort:
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    run: web-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 32700
      protocol: TCP
  selector:
    app: webserver
Now, you will be able to access it on http://<node-ip>:32700
Be careful with port 80. Ideally, you would have an nginx ingress controller running on port 80 and all traffic will be routed through it. Using port 80 as nodePort will mess up your deployment.
In your service, you did not specify a targetPort, so the service is using the port value as targetPort, however your container is listening on 80. Add a targetPort: 80 to the service.
The NodePort range is 30000-32767 by default. When you expose a service without specifying a nodePort, Kubernetes picks a random port from that range for you.
You can check the port by typing the below command
kubectl get svc
In your case the application is exposed on nodePort 31540. Your issue seems to be the nginx configuration. Check the nginx logs.
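For example (assuming the deployment is named webserver, as above):
# tail the nginx logs and look for permission or configuration errors
kubectl logs deploy/webserver --tail=50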
Please check the permissions of the mounted volume /home/docker/vol.
To fix this you have to make the mounted directory and its contents publicly readable:
chmod -R o+rX /home/docker/vol