I'm trying to launch an application on GKE and the health checks made by the Ingress always fail.
Here's my full k8s yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tripvector
  labels:
    app: tripvector
spec:
  replicas: 1
  minReadySeconds: 60
  selector:
    matchLabels:
      app: tripvector
  template:
    metadata:
      labels:
        app: tripvector
    spec:
      containers:
        - name: tripvector
          readinessProbe:
            httpGet:
              port: 3000
              path: /healthz
            initialDelaySeconds: 30
            timeoutSeconds: 10
            periodSeconds: 11
          image: us-west1-docker.pkg.dev/triptastic-1542412229773/tripvector/tripvector:healthz2
          env:
            - name: ROOT_URL
              value: https://paymahn.tripvector.io/
            - name: MAIL_URL
              valueFrom:
                secretKeyRef:
                  key: MAIL_URL
                  name: startup
            - name: METEOR_SETTINGS
              valueFrom:
                secretKeyRef:
                  key: METEOR_SETTINGS
                  name: startup
            - name: MONGO_URL
              valueFrom:
                secretKeyRef:
                  key: MONGO_URL
                  name: startup
          ports:
            - containerPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tripvector
spec:
  defaultBackend:
    service:
      name: tripvector-np
      port:
        number: 60000
---
apiVersion: v1
kind: Service
metadata:
  name: tripvector-np
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: tripvector
  ports:
    - protocol: TCP
      port: 60000
      targetPort: 3000
This yaml should do the following:
make a deployment with my healthz2 image along with a readiness check at /healthz on port 3000 which is exposed by the image
launch a cluster IP service
launch an ingress
When I check the status of the backend service I see it's unhealthy:
❯❯❯ gcloud compute backend-services get-health k8s1-07274a01-default-tripvector-np-60000-a912870e --global
---
backend: https://www.googleapis.com/compute/v1/projects/triptastic-1542412229773/zones/us-central1-a/networkEndpointGroups/k8s1-07274a01-default-tripvector-np-60000-a912870e
status:
  healthStatus:
  - healthState: UNHEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/triptastic-1542412229773/zones/us-central1-a/instances/gke-tripvector2-default-pool-78cf58d9-5dgs
    ipAddress: 10.12.0.29
    port: 3000
  kind: compute#backendServiceGroupHealth
It seems the health check is hitting the right port, but this output doesn't confirm whether it's hitting the right path. Looking up the health check object in the Cloud Console confirms that the GKE health check is hitting the /healthz path.
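For reference, the same information can be pulled from the CLI instead of the console; the health check name below is a placeholder you would take from the list output:

gcloud compute health-checks list
gcloud compute health-checks describe HEALTH_CHECK_NAME --global
# the requestPath field in the output shows the path the probe hits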
I've verified in the following ways that the health check endpoint I'm using for the readiness probe works but something still isn't working properly:
exec into the pod and run wget
port forward the pod and check /healthz in my browser
port forward the service and check /healthz in my browser
In all three instances above, I can see the /healthz endpoint working. I'll outline each one below.
Here's the output of running wget from within the pod:
❯❯❯ k exec -it tripvector-65ff4c4dbb-vwvtr /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/tripvector # ls
bundle
/tripvector # wget localhost:3000/healthz
Connecting to localhost:3000 (127.0.0.1:3000)
saving to 'healthz'
healthz 100% |************************************************************************************************************************************************************| 25 0:00:00 ETA
'healthz' saved
/tripvector # cat healthz
[200] Healthcheck passed./tripvector #
Here's what happens when I perform a port forward from the pod to my local machine:
❯❯❯ k port-forward tripvector-65ff4c4dbb-vwvtr 8081:3000
Forwarding from 127.0.0.1:8081 -> 3000
Forwarding from [::1]:8081 -> 3000
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
And here's what happens when I port forward from the Service object:
2:53PM /Users/paymahn/code/tripvector/tripvector ✘ 1 docker ⬆ ✱
❯❯❯ k port-forward svc/tripvector-np 8082:60000
Forwarding from 127.0.0.1:8082 -> 3000
Forwarding from [::1]:8082 -> 3000
Handling connection for 8082
How can I get the healthcheck for the ingress and network endpoint group to succeed so that I can access my pod from the internet?
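One detail worth adding here (not stated in the question): GKE generally only copies readiness probe settings into the load balancer's health check when the NEG-backed backend is first created, so a probe added afterwards can be ignored. A commonly suggested way to pin the path explicitly is a BackendConfig attached to the Service; the sketch below is illustrative only, and the name tripvector-backendconfig is my own:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: tripvector-backendconfig
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz
    port: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: tripvector-np
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/backend-config: '{"default": "tripvector-backendconfig"}'
spec:
  type: ClusterIP
  selector:
    app: tripvector
  ports:
    - protocol: TCP
      port: 60000
      targetPort: 3000

After applying this (or otherwise recreating the Ingress so the health check is re-derived), gcloud compute backend-services get-health should report HEALTHY once /healthz answers with a 200.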
Related
I have a container running an OPC-server on port 4840. I am trying to configure my microk8s to allow my OPC-Client to connect to the port 4840. Here are examples of my deployment and service:
(No namespace is defined here because they are deployed through Azure Pipelines, which is where the namespace is set; the namespace for the deployment and service is "jawcrusher".)
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jawcrusher
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jawcrusher
  strategy: {}
  template:
    metadata:
      labels:
        app: jawcrusher
    spec:
      volumes:
        - name: jawcrusher-config
          configMap:
            name: jawcrusher-config
      containers:
        - image: XXXmicrok8scontainerregistry.azurecr.io/jawcrusher:#{Version}#
          name: jawcrusher
          ports:
            - containerPort: 4840
          volumeMounts:
            - name: jawcrusher-config
              mountPath: "/jawcrusher/config/config.yml"
              subPath: "config.yml"
      imagePullSecrets:
        - name: acrsecret
service.yml
apiVersion: v1
kind: Service
metadata:
  name: jawcrusher-service
spec:
  ports:
    - name: 7070-4840
      port: 7070
      protocol: TCP
      targetPort: 4840
  selector:
    app: jawcrusher
  type: ClusterIP
status:
  loadBalancer: {}
I am using a k8s client called Lens, and in this client there is functionality to forward local ports to the service. If I do this I can connect to the OPC server with my OPC client using the URL localhost:4840. To me that indicates that the service and deployment are set up correctly.
So now I want to tell microk8s to serve my OPC-Server from port 4840 "externally". So for example if my dns to the server is microk8s.xxxx.internal I would like to connect with my OPC-Client to microk8s.xxxx.internal:4840.
I have followed this tutorial as much as I can: https://minikube.sigs.k8s.io/docs/tutorials/nginx_tcp_udp_ingress/.
It says to update the tcp-configuration for the ingress, this is how it looks after I updated it:
nginx-ingress-tcp-microk8s-conf:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-tcp-microk8s-conf
  namespace: ingress
  uid: a32690ac-34d2-4441-a5da-a00ec52d308a
  resourceVersion: '7649705'
  creationTimestamp: '2023-01-12T14:12:07Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"name":"nginx-ingress-tcp-microk8s-conf","namespace":"ingress"}}
  managedFields:
    - manager: kubectl-client-side-apply
      operation: Update
      apiVersion: v1
      time: '2023-01-12T14:12:07Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
    - manager: kubectl-patch
      operation: Update
      apiVersion: v1
      time: '2023-02-14T07:50:30Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:data:
          .: {}
          f:4840: {}
  selfLink: /api/v1/namespaces/ingress/configmaps/nginx-ingress-tcp-microk8s-conf
data:
  '4840': jawcrusher/jawcrusher-service:7070
binaryData: {}
It also says to update a deployment called ingress-nginx-controller but in microk8s it seems to be a daemonset called nginx-ingress-microk8s-controller. This is what it looks like after adding a new port:
nginx-ingress-microk8s-controller:
spec:
  containers:
    - name: nginx-ingress-microk8s
      image: registry.k8s.io/ingress-nginx/controller:v1.2.0
      args:
        - /nginx-ingress-controller
        - '--configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf'
        - >-
          --tcp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-tcp-microk8s-conf
        - >-
          --udp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-udp-microk8s-conf
        - '--ingress-class=public'
        - ' '
        - '--publish-status-address=127.0.0.1'
      ports:
        - name: http
          hostPort: 80
          containerPort: 80
          protocol: TCP
        - name: https
          hostPort: 443
          containerPort: 443
          protocol: TCP
        - name: health
          hostPort: 10254
          containerPort: 10254
          protocol: TCP
        ####THIS IS WHAT I ADDED####
        - name: jawcrusher
          hostPort: 4840
          containerPort: 4840
          protocol: TCP
After I updated the daemonset it restarted all the pods. The port seems to be open; if I run this script it outputs:
Test-NetConnection -ComputerName microk8s.xxxx.internal -Port 4840
ComputerName : microk8s.xxxx.internal
RemoteAddress : 10.161.64.124
RemotePort : 4840
InterfaceAlias : Ethernet 2
SourceAddress : 10.53.226.55
TcpTestSucceeded : True
Before I did the changes it said TcpTestSucceeded: False.
But the OPC-Client cannot connect. It just says:
Could not connect to server: BadCommunicationError.
Does anyone see if I made a mistake somewhere, or know how to do this in microk8s?
Update 1:
I see an error message in the ingress-daemonset-pod logs when I try to connect to the server with my OPC-Client:
2023/02/15 09:57:32 [error] 999#999: *63002 connect() failed (111: Connection refused) while connecting to upstream, client: 10.53.225.232, server: 0.0.0.0:4840, upstream: "10.1.98.125:4840", bytes from/to client:0/0, bytes from/to upstream:0/0
10.53.225.232 is the client machine's IP address and 10.1.98.125 is the IP address of the pod running the OPC server.
So it seems like it has understood that external port 4840 should be proxied/forwarded to my service which in turn points to the OPC-server-pod. But why do I get an error...
Update 2:
Just to clarify: if I run the kubectl port-forward command and point it at my service it works, but not if I try to connect directly to port 4840. So for example this works:
kubectl port-forward service/jawcrusher-service 5000:4840 -n jawcrusher --address='0.0.0.0'
This allows me to connect with my OPC-client to the server on port 5000.
You can simply do a port forward from a localhost port x to your service/deployment/pod on port y with the kubectl command.
Let's say you have a NATS Streaming server in your k8s cluster using TCP over port 4222; your command in that case would be:
kubectl port-forward service/nats 4222:4222
In this case it will forward all traffic on localhost over port 4222 to the service named nats inside your cluster on port 4222. Instead of a service you could also forward to a specific pod or deployment...
Use kubectl port-forward -h to see all your options...
In case you are using k3d to set up a k3s cluster in Docker, or Rancher Desktop, you could add the port parameter to your k3d command:
k3d cluster create k3s --registry-create registry:5000 -p 8080:80#loadbalancer -p 4222:4222#server:0
The problem was never with microk8s or the ingress configuration. The problem was that my server was bound to the loopback address (127.0.0.1).
When I changed the configuration so the server listened to 0.0.0.0 instead of 127.0.0.1 it started working.
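A quick way to confirm which address a server is bound to, assuming the container image ships ss (or netstat), is to list the listening sockets inside the pod:

kubectl exec -n jawcrusher deploy/jawcrusher -- ss -tlnp
# 127.0.0.1:4840 means loopback only; 0.0.0.0:4840 (or *:4840) means all interfaces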
I have 2 pods, and my application is cluster-based, i.e. the application synchronizes with another pod to bring the cluster up. Let's say in my example I am using apppod1 and apppod2, and the synchronization port is 8080.
I want the service DNS to resolve for these pod hostnames, but I want to block traffic coming from outside apppod1 and apppod2.
I can use a readiness probe, but then the service doesn't have endpoints and I can't resolve the IP of the second pod. If I can't resolve the IP of the second pod from pod 1, then I can't complete the configuration of these pods.
E.g.
App Statefulset definition
app1_sts.yaml
===
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    cluster: appcluster
  name: app1
  namespace: app
spec:
  selector:
    matchLabels:
      cluster: appcluster
  serviceName: app1cluster
  template:
    metadata:
      labels:
        cluster: appcluster
    spec:
      containers:
        - name: app1-0
          image: localhost/linux:8
          imagePullPolicy: Always
          securityContext:
            privileged: false
          command: [/usr/sbin/init]
          ports:
            - containerPort: 8080
              name: appport
          readinessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 120
            periodSeconds: 30
            failureThreshold: 20
          env:
            - name: container
              value: "true"
            - name: applist
              value: "app2-0"
app2_sts.yaml
====
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    cluster: appcluster
  name: app2
  namespace: app
spec:
  selector:
    matchLabels:
      cluster: appcluster
  serviceName: app2cluster
  template:
    metadata:
      labels:
        cluster: appcluster
    spec:
      containers:
        - name: app2-0
          image: localhost/linux:8
          imagePullPolicy: Always
          securityContext:
            privileged: false
          command: [/usr/sbin/init]
          ports:
            - containerPort: 8080
              name: appport
          readinessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 120
            periodSeconds: 30
            failureThreshold: 20
          env:
            - name: container
              value: "true"
            - name: applist
              value: "app1-0"
Create Statefulsets and check name resolution
[root@oper01 onprem]# kubectl get all -n app
NAME READY STATUS RESTARTS AGE
pod/app1-0 0/1 Running 0 8s
pod/app2-0 0/1 Running 0 22s
NAME READY AGE
statefulset.apps/app1 0/1 49s
statefulset.apps/app2 0/1 22s
kubectl exec -i -t app1-0 /bin/bash -n app
[root@app1-0 ~]# nslookup app2-0
Server: 10.96.0.10
Address: 10.96.0.10#53
** server can't find app2-0: NXDOMAIN
[root@app1-0 ~]# nslookup app1-0
Server: 10.96.0.10
Address: 10.96.0.10#53
** server can't find app1-0: NXDOMAIN
[root@app1-0 ~]#
I understand the behavior of the readiness probe, and I am using it because it ensures the service does not resolve to the app pods while port 8080 is down. However, I cannot work out how to complete the configuration, since the app pods need to resolve each other: they need each other's hostnames and IPs to configure themselves, and DNS resolution only happens once the service has endpoints. Is there a better way to handle this situation?
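One pattern often used for this kind of StatefulSet peer discovery (a sketch under my own assumptions, not an answer given in this thread) is to give each StatefulSet a headless governing Service with publishNotReadyAddresses: true, so the per-pod DNS records exist before the readiness probe passes, while any separate client-facing Service keeps respecting readiness. For app1 that could look like:

apiVersion: v1
kind: Service
metadata:
  name: app1cluster                 # must match serviceName in the app1 StatefulSet
  namespace: app
spec:
  clusterIP: None                   # headless: creates per-pod DNS records
  publishNotReadyAddresses: true    # resolve pods even while they are not Ready
  selector:
    cluster: appcluster
  ports:
    - port: 8080
      name: appport

With a matching app2cluster Service, app1-0 should be able to resolve app2-0.app2cluster.app.svc.cluster.local (and vice versa) before either pod becomes Ready.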
I have two pods, each created with a deployment and a service. My problem is as follows: the pod "my-gateway" accesses the URL "adm-contact" at "http://127.0.0.1:3000/adm-contact", which should reach another pod called "my-adm-contact". How can I make this work? I tried the following command: kubectl port-forward my-gateway-5b85498f7d-5rwnn 3000:3000 8879:8879 but it gives this error:
E0526 21:56:34.024296 12428 portforward.go:400] an error occurred forwarding 3000 -> 3000: error forwarding port 3000 to pod 2d5811c20c3762c6c249a991babb71a107c5dd6b080c3c6d61b4a275b5747815, uid : exit status 1: 2022/05/27 00:56:35 socat[2494] E connect(16, AF=2 127.0.0.1:3000, 16): Connection refused
Note that the images were built with EXPOSE 3000 8879 in their Dockerfiles.
Here are my YAMLs:
Deployment my-adm-contact:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  selector:
    matchLabels:
      run: my-adm-contact
  template:
    metadata:
      labels:
        run: my-adm-contact
    spec:
      containers:
        - name: my-adm-contact
          image: my-contact-adm
          imagePullPolicy: Never
          ports:
            - containerPort: 8879
              hostPort: 8879
              name: admcontact8879
          readinessProbe:
            httpGet:
              path: /adm-contact
              port: 8879
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 6
Service my-adm-contact:
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact
  labels:
    run: my-adm-contact
spec:
  selector:
    app: my-adm-contact
  ports:
    - name: 8879-my-adm-contact
      port: 8879
      protocol: TCP
      targetPort: 8879
  type: LoadBalancer
status:
  loadBalancer: {}
Deployment my-gateway:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  selector:
    matchLabels:
      run: my-gateway
  template:
    metadata:
      labels:
        run: my-gateway
    spec:
      containers:
        - name: my-gateway
          image: api-gateway
          imagePullPolicy: Never
          ports:
            - containerPort: 3000
              hostPort: 3000
              name: home
            #- containerPort: 8879
            #  hostPort: 8879
            #  name: adm
          readinessProbe:
            httpGet:
              path: /adm-contact
              port: 8879
              path: /
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 6
Service my-gateway:
apiVersion: v1
kind: Service
metadata:
  name: my-gateway
  labels:
    run: my-gateway
spec:
  selector:
    app: my-gateway
  ports:
    - name: 3000-my-gateway
      port: 3000
      protocol: TCP
      targetPort: 3000
    - name: 8879-my-gateway
      port: 8879
      protocol: TCP
      targetPort: 8879
  type: LoadBalancer
status:
  loadBalancer: {}
What k8s cluster environment are you running this in? I ask because a Service of type LoadBalancer is a special kind: at pod initialisation your cloud provider's admission controller will spot this and add in a load balancer config. See https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
If you're not deploying this in a suitable cloud environment, your services won't do anything.
I had a quick look at your SO profile and - sorry if this is presumptuous, I don't mean to be - it looks like you're relatively new to k8s. You shouldn't need to do any port-forwarding/kubectl proxying, and this should be a lot simpler than you might think.
When you create a service k8s will 'create' a DNS entry for you which points to the pod(s) specified by your selector.
I think you're trying to reach a setup where code running in my-gateway pod can connect to http://adm-contact on port 3000 and reach a listening service on the adm-contact pod. Is that correct?
If so, the outline solution is to expose tcp/3000 in the adm-contact pod, and create a service called adm-contact that has a selector for adm-contact pod.
This is a sample manifest I've just created which runs nginx and then creates a service for it, allowing any pod on the cluster to connect to it e.g. curl http://nginx-service.default.svc In this example I'm exposing port 80 because I didn't want to have to modify the nginx config, but the principle is the same.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  containers:
    - image: nginx
      imagePullPolicy: Always
      name: nginx
      ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
The k8s docs on Services are pretty helpful if you want more https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
A service can be reached by its own name from pods in its namespace:
so a service foo in namespace bar can be reached at http://foo from a pod in namespace bar.
From other namespaces that service is reachable at http://foo.bar.svc.cluster.local. Change out the service name and namespace for your use case.
k8s dns is explained here in the docs:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
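As a quick check, assuming a service named foo in namespace bar, you can verify the resolution from a throwaway pod (the busybox image ships nslookup):

kubectl run dnscheck --rm -it --image=busybox --restart=Never -- nslookup foo.bar.svc.cluster.local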
I have taken the YAML you provided and assembled it here.
From another comment I see the URL you're trying to connect to is: http://gateway-service.default.svc.cluster.local:3000/my-adm-contact-service
The ability to resolve service names to pods only functions inside the cluster: coredns (a k8s pod) is the part which recognises when a service has been created and what IP(s) it's available at.
So another pod in the cluster, e.g. one created by kubectl run bb --image=busybox -it -- sh, would be able to resolve and ping gateway-service, but pinging gateway-service from your desktop will fail because they're not both using the same DNS.
The api-gateway container will be able to connect to my-adm-contact-service on ports 3000 or 8879, and the my-adm-contact container will equally be able to connect to gateway-service on port 3000 - but only when those containers are running inside the cluster.
I think you're trying to access this from outside the cluster, so now that the port/service types are correct you could retry kubectl port-forward svc/gateway-service 3000:3000. This will let you connect to 127.0.0.1:3000, and the traffic will be routed to port 3000 on the api-gateway container.
If you need to proxy to the other my-adm-contact-service then you'll have to issue similar kubectl commands in other shells, one per service:port combination. For completeness, if you wanted to route traffic from your local machine to all three container/port sets, you'd run:
# format kubectl port-forward svc/name src:dest (both TCP)
kubectl port-forward svc/gateway-service 3000:3000
kubectl port-forward svc/my-adm-contact-service 8879:8879
kubectl port-forward svc/my-adm-contact-service 3001:3000 #NOTE the changed local port, because localhost:3000 is already used
You will need a new shell for each kubectl, or run it as a background job.
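For example, to run one of the forwards as a background job in the same shell (the log path is just an arbitrary choice):

kubectl port-forward svc/gateway-service 3000:3000 >/tmp/pf-gateway.log 2>&1 &
# bring it back with fg, or stop it with kill %1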
apiVersion: v1
kind: Pod
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  containers:
    - image: my-contact-adm
      imagePullPolicy: Never
      name: my-adm-contact
      ports:
        - containerPort: 8879
          protocol: TCP
        - containerPort: 3000
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact-service
spec:
  ports:
    - port: 8879
      protocol: TCP
      targetPort: 8879
      name: adm8879
    - port: 3000
      protocol: TCP
      targetPort: 3000
      name: adm3000
  selector:
    app: my-adm-contact
  type: ClusterIP
---
apiVersion: v1
kind: Pod
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  containers:
    - image: api-gateway
      imagePullPolicy: Never
      name: my-gateway
      ports:
        - containerPort: 3000
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: gateway-service
spec:
  ports:
    - port: 3000
      protocol: TCP
      targetPort: 3000
  selector:
    app: my-gateway
  type: ClusterIP
I've got this webserver config:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: webserver
          image: nginx:alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - name: hostvol
              mountPath: /usr/share/nginx/html
      volumes:
        - name: hostvol
          hostPath:
            path: /home/docker/vol
and this web service config:
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    run: web-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: webserver
I was expecting to be able to connect to the webserver via http://192.168.99.100:80 with this config but Chrome gives me a ERR_CONNECTION_REFUSED.
I tried minikube service --url web-service which gives http://192.168.99.100:30276 however this also has a ERR_CONNECTION_REFUSED.
Any further suggestions?
UPDATE
I updated the port / targetPort to 80.
However, I now get:
ERR_CONNECTION_REFUSED for http://192.168.99.100:80/
and
an nginx 403 for http://192.168.99.100:31540/
In your service, you can define a nodePort
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    run: web-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 32700
      protocol: TCP
  selector:
    app: webserver
Now, you will be able to access it on http://<node-ip>:32700
Be careful with port 80. Ideally, you would have an nginx ingress controller running on port 80 and all traffic will be routed through it. Using port 80 as nodePort will mess up your deployment.
In your service, you did not specify a targetPort, so the service is using the port value as targetPort, however your container is listening on 80. Add a targetPort: 80 to the service.
The NodePort range is 30000-32767 by default. When you expose a service without specifying a nodePort, Kubernetes picks a random port from that range for you.
You can check the port by typing the below command
kubectl get svc
In your case the application is exposed on nodePort 31540. Your issue seems to be the nginx configuration; check the nginx logs.
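For example, using the deployment name from the question:

kubectl logs deployment/webserver
# or follow the logs of whatever pod carries the app=webserver label:
kubectl logs -l app=webserver -f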
Please check the permissions of the mounted volume /home/docker/vol.
To fix this you have to make the mounted directory and its contents publicly readable:
chmod -R o+rX /home/docker/vol
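You can check the permissions as the container sees them with something like this (deployment name taken from the question):

kubectl exec deploy/webserver -- ls -la /usr/share/nginx/html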
I have 2 pods: a server pod and a client pod (basically the client hits port 8090 to interact with the server). I have created a service (which in turn creates an endpoint) but the client pod cannot reach that endpoint and therefore it crashes:
Error :Error in client :rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp :8090: connect: connection refused")
The client pod tries to access port 8090 in its host network. What I am hoping to do is that whenever the client hits 8090 through the service it connects to the server.
I just cannot understand how I would connect these 2 pods and therefore require help.
server pod:
apiVersion: v1
kind: Pod
metadata:
  name: server-pod
  labels:
    app: grpc-app
spec:
  containers:
    - name: server-pod
      image: image
      ports:
        - containerPort: 8090
client pod :
apiVersion: v1
kind: Pod
metadata:
  name: client-pod
  labels:
    app: grpc-app
spec:
  hostNetwork: true
  containers:
    - name: client-pod
      image: image
Service:
apiVersion: v1
kind: Service
metadata:
  name: server
  labels:
    app: grpc-app
spec:
  type: ClusterIP
  ports:
    - port: 8090
      targetPort: 8090
      protocol: TCP
  selector:
    app: grpc-app
Your service is selecting both the client and the server. You should change the labels so that the server should have something like app: grpc-server and the client should have app: grpc-client. The service selector should be app: grpc-server to expose the server pod. Then in your client app, connect to server:8090. You should remove hostNetwork: true.
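A minimal sketch of that relabelling (the grpc-server label is the suggestion from this answer, not something the asker already has):

apiVersion: v1
kind: Pod
metadata:
  name: server-pod
  labels:
    app: grpc-server
spec:
  containers:
    - name: server-pod
      image: image
      ports:
        - containerPort: 8090
---
apiVersion: v1
kind: Service
metadata:
  name: server
spec:
  type: ClusterIP
  selector:
    app: grpc-server
  ports:
    - port: 8090
      targetPort: 8090
      protocol: TCP

The client pod keeps its own label (e.g. app: grpc-client), drops hostNetwork: true, and dials server:8090 (or server.default.svc.cluster.local:8090 from another namespace).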
One thing that I feel is going wrong is that the service is not ready to accept connections while your client is already trying to access it, so you get a connection refused. I faced a similar problem a few days back. What I did was add readiness and liveness probes to the YAML config file. Kubernetes provides liveness and readiness probes that are used to check the health of your containers. These probes can check certain files in your containers, check a TCP socket, or make HTTP requests.
A sample like this
spec:
  containers:
    - name: imagename
      image: image
      ports:
        - containerPort: 19500
          name: http
      readinessProbe:
        httpGet:
          path: /health
          port: http
        initialDelaySeconds: 120
        periodSeconds: 5
      livenessProbe:
        httpGet:
          path: /health
          port: http
          scheme: HTTP
        initialDelaySeconds: 120
        timeoutSeconds: 5
This way it will check whether your application is ready to accept connections before routing traffic to it.