I have 2 pods: a server pod and a client pod (basically the client hits port 8090 to interact with the server). I have created a service (which in turn creates an endpoint), but the client pod cannot reach that endpoint and therefore crashes:
Error :Error in client :rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp :8090: connect: connection refused")
The client pod tries to access port 8090 on its host network. What I am hoping for is that whenever the client hits 8090 through the service, it connects to the server.
I just cannot figure out how to connect these 2 pods, so I need help.
server pod:
apiVersion: v1
kind: Pod
metadata:
  name: server-pod
  labels:
    app: grpc-app
spec:
  containers:
  - name: server-pod
    image: image
    ports:
    - containerPort: 8090
client pod:
apiVersion: v1
kind: Pod
metadata:
  name: client-pod
  labels:
    app: grpc-app
spec:
  hostNetwork: true
  containers:
  - name: client-pod
    image: image
Service:
apiVersion: v1
kind: Service
metadata:
  name: server
  labels:
    app: grpc-app
spec:
  type: ClusterIP
  ports:
  - port: 8090
    targetPort: 8090
    protocol: TCP
  selector:
    app: grpc-app
Your service is selecting both the client and the server. You should change the labels: the server should have something like app: grpc-server and the client app: grpc-client. The service selector should be app: grpc-server so that only the server pod is exposed. Then, in your client app, connect to server:8090. You should also remove hostNetwork: true.
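For illustration, a minimal sketch of the suggested changes (only the fields that change are shown; everything else stays as in the manifests above):

# server pod
metadata:
  name: server-pod
  labels:
    app: grpc-server
---
# client pod (hostNetwork: true removed from spec)
metadata:
  name: client-pod
  labels:
    app: grpc-client
---
# service: select only the server
spec:
  selector:
    app: grpc-server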
One thing that I feel is going wrong is that the service is not yet ready to accept connections while your client is already trying to access it, hence the connection refused. I faced a similar problem a few days back. What I did was add readiness and liveness probes to the YAML config file. Kubernetes provides liveness and readiness probes that are used to check the health of your containers. These probes can check certain files in your containers, check a TCP socket, or make HTTP requests.
A sample looks like this:
spec:
  containers:
  - name: imagename
    image: image
    ports:
    - containerPort: 19500
      name: http
    readinessProbe:
      httpGet:
        path: /health
        port: http
      initialDelaySeconds: 120
      periodSeconds: 5
    livenessProbe:
      httpGet:
        path: /health
        port: http
        scheme: HTTP
      initialDelaySeconds: 120
      timeoutSeconds: 5
So it will check whether your application is ready to accept connections before routing traffic to it.
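The sample above uses an HTTP check. As a variant (a sketch, assuming the server only speaks gRPC on 8090 and exposes no HTTP health endpoint), a plain TCP socket probe avoids needing an HTTP path:

readinessProbe:
  tcpSocket:
    port: 8090
  initialDelaySeconds: 10
  periodSeconds: 5

A TCP probe only verifies that the port accepts connections, not that the application is fully healthy, so an HTTP (or gRPC) health endpoint is preferable when available.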
Related
I'm trying to launch an application on GKE and the health checks made by the Ingress always fail.
Here's my full k8s yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tripvector
  labels:
    app: tripvector
spec:
  replicas: 1
  minReadySeconds: 60
  selector:
    matchLabels:
      app: tripvector
  template:
    metadata:
      labels:
        app: tripvector
    spec:
      containers:
      - name: tripvector
        readinessProbe:
          httpGet:
            port: 3000
            path: /healthz
          initialDelaySeconds: 30
          timeoutSeconds: 10
          periodSeconds: 11
        image: us-west1-docker.pkg.dev/triptastic-1542412229773/tripvector/tripvector:healthz2
        env:
        - name: ROOT_URL
          value: https://paymahn.tripvector.io/
        - name: MAIL_URL
          valueFrom:
            secretKeyRef:
              key: MAIL_URL
              name: startup
        - name: METEOR_SETTINGS
          valueFrom:
            secretKeyRef:
              key: METEOR_SETTINGS
              name: startup
        - name: MONGO_URL
          valueFrom:
            secretKeyRef:
              key: MONGO_URL
              name: startup
        ports:
        - containerPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tripvector
spec:
  defaultBackend:
    service:
      name: tripvector-np
      port:
        number: 60000
---
apiVersion: v1
kind: Service
metadata:
  name: tripvector-np
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: tripvector
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 3000
This YAML should do the following:
- make a deployment with my healthz2 image along with a readiness check at /healthz on port 3000, which is exposed by the image
- launch a ClusterIP service
- launch an Ingress
When I check the status of the service I see it's unhealthy:
❯❯❯ gcloud compute backend-services get-health k8s1-07274a01-default-tripvector-np-60000-a912870e --global
---
backend: https://www.googleapis.com/compute/v1/projects/triptastic-1542412229773/zones/us-central1-a/networkEndpointGroups/k8s1-07274a01-default-tripvector-np-60000-a912870e
status:
  healthStatus:
  - healthState: UNHEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/triptastic-1542412229773/zones/us-central1-a/instances/gke-tripvector2-default-pool-78cf58d9-5dgs
    ipAddress: 10.12.0.29
    port: 3000
kind: compute#backendServiceGroupHealth
It seems that the health check is hitting the right port, but this output doesn't confirm whether it's hitting the right path. Looking up the health check object in the console confirms the GKE health check is hitting the /healthz path (screenshot omitted).
I've verified in the following ways that the health check endpoint I'm using for the readiness probe works but something still isn't working properly:
- exec into the pod and run wget
- port forward the pod and check /healthz in my browser
- port forward the service and check /healthz in my browser
In all three instances above, I can see the /healthz endpoint working. I'll outline each one below.
Here's the result of running wget from within the pod:
❯❯❯ k exec -it tripvector-65ff4c4dbb-vwvtr /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/tripvector # ls
bundle
/tripvector # wget localhost:3000/healthz
Connecting to localhost:3000 (127.0.0.1:3000)
saving to 'healthz'
healthz 100% |************************************************************************************************************************************************************| 25 0:00:00 ETA
'healthz' saved
/tripvector # cat healthz
[200] Healthcheck passed./tripvector #
Here's what happens when I perform a port forward from the pod to my local machine:
❯❯❯ k port-forward tripvector-65ff4c4dbb-vwvtr 8081:3000
Forwarding from 127.0.0.1:8081 -> 3000
Forwarding from [::1]:8081 -> 3000
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
And here's what happens when I port forward from the Service object:
❯❯❯ k port-forward svc/tripvector-np 8082:60000
Forwarding from 127.0.0.1:8082 -> 3000
Forwarding from [::1]:8082 -> 3000
Handling connection for 8082
How can I get the healthcheck for the ingress and network endpoint group to succeed so that I can access my pod from the internet?
My goal is to expose all my ports under one service.
My pod contains a containerised app that runs on port 80.
This is my attempt to create the deployment:
apiVersion: apps/v1
kind: Deployment
metadata: name: my-deployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: my-pod
      labels:
        app: myapp
    spec:
      containers:
      - name: httd
        image: httpd
        imagePullPolicy: Always
        ports:
        - containerPort: 80
However, I am getting error:
error: error parsing deployment.yaml: error converting YAML to JSON: yaml: line 3: mapping values are not allowed in this context
Notes:
- If I remove the ports section, the deployment is created successfully, but then the service (which I have in another file and can share if needed) wouldn't be able to link a port on the node to a port in the pod, because the pod doesn't expose any port (it's just the container running on a port).
- I went through this page, and it does say to use containerPort, so I don't know what I missed.
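For reference, the parser error points at line 3 of the file, where metadata: name: my-deployment puts a nested key on the same line as its parent. In valid YAML, name must sit on its own indented line:

metadata:
  name: my-deployment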
Update
The error was in my deployment file; after fixing it, I could create both the deployment and the service, but the service is still not exposed on the node. Here is my service definition:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: front-end
  ports:
  - port: 77
    targetPort: 80
    nodePort: 32766
  type: NodePort
As you can see, I am mapping port 80 in the pod to port 32766 on the node, and when calling localhost:32766 it returns a 404.
What did I miss?
Update
This is what the browser shows (screenshot of the 404 page omitted):
when calling: localhost:32766 it returns 404
This means that the app is actually responding to the request, but you sent the request to a URL that the app has not implemented. 404 Not Found is the HTTP status code web servers return when they don't have a handler for the requested path.
I guess the problem is in the selector section of your service.yaml file. Try this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: myapp
  ports:
  - port: 77
    targetPort: 80
    nodePort: 32766
  type: NodePort
In addition to what @shubam_asati posted: your service YAML has port: 77 and targetPort: 80, but the container in your deployment listens on port 80. Change the port value to match targetPort (i.e. 80) and you should be able to connect to the app at localhost:<nodePort>.
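Combining both suggestions, the service would look roughly like this (a sketch; the selector matches the deployment's app: myapp label and port is aligned with targetPort):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32766
  type: NodePort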
I've got this webserver config:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: hostvol
          mountPath: /usr/share/nginx/html
      volumes:
      - name: hostvol
        hostPath:
          path: /home/docker/vol
and this web service config:
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    run: web-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: webserver
I was expecting to be able to connect to the webserver via http://192.168.99.100:80 with this config, but Chrome gives me an ERR_CONNECTION_REFUSED.
I tried minikube service --url web-service, which gives http://192.168.99.100:30276, however this also results in ERR_CONNECTION_REFUSED.
Any further suggestions?
UPDATE
I updated the port / targetPort to 80.
However, I now get:
ERR_CONNECTION_REFUSED for http://192.168.99.100:80/
and
an nginx 403 for http://192.168.99.100:31540/
In your service, you can define a nodePort:
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    run: web-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32700
    protocol: TCP
  selector:
    app: webserver
Now, you will be able to access it on http://<node-ip>:32700.
Be careful with port 80. Ideally, you would have an nginx ingress controller running on port 80 and all traffic will be routed through it. Using port 80 as nodePort will mess up your deployment.
In your service, you did not specify a targetPort, so the service uses the port value as targetPort; however, your container is listening on 80. Add targetPort: 80 to the service.
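As a sketch, the change amounts to one line in the ports section:

ports:
- port: 80
  targetPort: 80   # added so traffic is forwarded to the container's port 80
  protocol: TCP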
The NodePort range is 30000-32767 by default. When you expose a service without specifying a nodePort, Kubernetes picks a random port from that range for you.
You can check the assigned port with the command below:
kubectl get svc
In your case, the application is exposed on node port 31540. Your issue seems to be the nginx configuration; check the nginx logs.
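For example (the pod name placeholder is illustrative):

kubectl get pods                 # find the webserver pod's name
kubectl logs <webserver-pod>     # inspect nginx access and error logs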
Please check the permissions of the mounted volume /home/docker/vol.
To fix this, make the mounted directory and its contents publicly readable:
chmod -R o+rX /home/docker/vol
I'm trying to run a simple load balancing server connecting to a Deployment pod.
I've installed Docker for Mac edge version.
The problem is that when I try to make a GET request to the exposed load balancer URL http://localhost:8081/api/v1/posts/health, the error is:
org.apache.http.NoHttpResponseException: localhost:8081 failed to respond
When doing:
k get services
the output (screenshot omitted) shows the service is up. Clearly, the service is running, but localhost:8081 fails to respond; I have no idea why, and I keep struggling with this.
My service resource:
---
apiVersion: v1
kind: Service
metadata:
  name: posts-api-svc
  # namespace: nginx-ingress
  labels:
    app: posts-api
    rel: beta
    env: dev
spec:
  type: LoadBalancer
  selector:
    app: posts-api
    rel: beta
    env: dev
  ports:
  - protocol: TCP
    port: 8081
My deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: posts-api-deployment
  # namespace: nginx-ingress
  labels:
    app: posts-api
    rel: beta
    env: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: posts-api
      env: dev
      rel: beta
  template:
    metadata:
      labels:
        app: posts-api
        env: dev
        rel: beta
    spec:
      containers:
      - name: posts-api
        image: kimgysen/posts-api:latest
        ports:
        - containerPort: 8083
        livenessProbe:
          httpGet:
            path: /api/v1/posts/health
            port: 8083
          initialDelaySeconds: 120
          timeoutSeconds: 1
Should be a basic setup!
My deployment pod does not show any restarts; everything looks good.
Any advice welcome!
Edit
When using port 31082, I get the error:
org.apache.http.conn.HttpHostConnectException: Connect to localhost:31082 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
There is no specific reason why I used port 8083; it's just that I tried NodePort first (with multiple services) and now LoadBalancer.
The next step will be Ingress, but it didn't really work out for me the first time, so I'm trying to go step by step.
I used port 8081 instead of port 80 because I read somewhere that on macOS port 80 is only to be used by the root user.
The Service port had to correspond to the Deployment's containerPort.
I can now access the API on localhost:8083.
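In other words, the fix was roughly this (a sketch of the relevant ports section; the rest of the service stays the same):

ports:
- protocol: TCP
  port: 8083        # now matches the containerPort
  targetPort: 8083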
I have a situation where I have zero endpoints available for a service. To test this, I crafted a YAML descriptor that uses a simple Node.js server to set and retrieve the ready/live status for a pod:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodejs-deployment
  labels:
    app: nodejs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs
    spec:
      containers:
      - name: nodejs
        image: nodejs_server
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /is_alive
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 3
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /is_ready
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 3
          periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: nodejs-service
  labels:
    app: nodejs
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: nodejs
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nodejs-ingress
spec:
  backend:
    serviceName: nodejs-service
    servicePort: 80
The node server has endpoints to set and retrieve the liveness and readiness status.
When the app starts, I can see that 3 replicas are created and their status is ready. Now I manually set their readiness to false [from outside the ingress]. One pod is correctly removed from the endpoints, so no traffic is routed to it [that's OK, as this is the expected behavior]. When I set the ready status to false for all pods, the endpoints list is empty [still the expected behavior].
At that point I cannot set ready=true from outside the ingress, since traffic is no longer routed to any pod. Is there a way here, for example, to trigger a restart of the pod when readiness is not achieved after n tries or n seconds? Or when the endpoints list is empty?
Well, that is perfectly normal and expected behaviour. What you can do, on the side, is forward traffic from localhost to a particular pod with kubectl port-forward. That way you can access the pod directly, without ingresses etc., and set its readiness back to OK. If you want a restart when the pod is not ready for too long, just use the same endpoint for the liveness probe, but let it fail after more tries.
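A sketch of both suggestions (the pod name and the toggle endpoint are illustrative, based on the node server described in the question):

# reach one pod directly, bypassing the service and ingress
kubectl port-forward nodejs-deployment-<pod-id> 8080:8080
# then flip readiness back via the server's own API, e.g.
curl localhost:8080/set_ready

# liveness probe reusing the readiness endpoint, so the kubelet restarts
# the container only after ~60s of failures (6 tries x 10s):
livenessProbe:
  httpGet:
    path: /is_ready
    port: 8080
  periodSeconds: 10
  failureThreshold: 6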