I have deployed a gRPC service running on OpenShift Origin. It is backed by an OpenShift service, and the service is exposed with an OpenShift route. I am trying to make this pod available via a service and route that maps the container port (50051) to the outside world on port 8080.
The image that the service is trying to expose has, in its Dockerfile:
EXPOSE 50051
The route has the following:
Service Port: 8080/TCP
Target Port: 50051
In the DeploymentConfig I specify the port with:
ports:
- containerPort: 50051
  protocol: TCP
However, when I try to access the application via the route and port, I get (from Java)
java.net.NoRouteToHostException: No route to host
And when I try to telnet the service IP:
telnet 172.30.197.247 8080
I am able to connect.
However, when I try to connect via the route it doesn't work:
telnet my.route.com 8080
Trying ...
telnet: connect to address : Connection refused
When I use:
curl -kv my-svc.myproject.svc.cluster.local:8080
I can connect.
So it seems the service is working but the route is not.
I have been going through the troubleshooting guide on https://docs.openshift.org/3.6/admin_guide/sdn_troubleshooting.html#debugging-the-router
The router setups in OpenShift handle HTTP, HTTPS (SNI), and TLS (SNI) traffic. However, it appears that you can use an externalIP to expose non-web application ports from the cluster. Because gRPC runs over HTTP/2 rather than plain HTTP, you might need to go down this path.
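As a rough sketch of that externalIP approach (the service name, label selector, and IP below are placeholders, and your cluster admin must allow the external IP range):
apiVersion: v1
kind: Service
metadata:
  name: grpc-external
spec:
  selector:
    app: my-grpc-app          # placeholder pod label
  ports:
  - port: 8080                # port exposed on the external IP
    targetPort: 50051         # gRPC port inside the container
    protocol: TCP
  externalIPs:
  - 192.168.120.10            # placeholder; must route to a cluster node
A gRPC client would then dial 192.168.120.10:8080 directly, bypassing the HTTP router.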
There are multiple things to check:
Does your route point to your service? Here is an example:
apiVersion: v1
kind: Route
spec:
  host: my.route.com
  to:
    kind: Service
    name: yourservice
    weight: 100
If it's not the case, the route and the service are not connected.
You can check the router configuration. Connect to your router with oc rsh and check whether your route name appears in /var/lib/haproxy/conf/haproxy.config (the backend name format should be backend be_http_NAMESPACE_ROUTENAME). The server entries below that backend should contain the IP of your pod (you can obtain your pod IP with the oc get pods -o wide command).
If that's not the case, the route is not registered in the router config. You can try to restart the router and recheck the haproxy.config file (see the commands sketched after this checklist).
Can you connect to the pod IP from the router container?
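A rough sketch of those two checks, assuming the router pod is named router-1-abcde in the default namespace and the pod IP is 10.128.2.15 (substitute your own names and addresses):
oc rsh -n default router-1-abcde
grep be_http_myproject_myroute /var/lib/haproxy/conf/haproxy.config   # inside the router container; backend name is a placeholder
curl -v http://10.128.2.15:50051   # a successful "Connected to ..." line is what matters; the HTTP response itself can be ignored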
Related
I am trying out a possible Kubernetes scenario on my local machine with a minikube cluster: accessing an internal service that is exposed with an ingress in one cluster from another cluster, using an ExternalName service. I understand that with an ingress the service is already accessible within the cluster. As I am trying this out locally using minikube, I cannot run two clusters simultaneously, so I just wanted to verify whether it is possible to access an ingress-exposed service using an ExternalName service.
I started the minikube tunnel using minikube tunnel.
I can access the service using http://k8s-yaml-hello.info.
But when I try curl k8s-yaml-hello-internal from within a running pod, the error that I get is curl: (7) Failed to connect to k8s-yaml-hello-internal port 80 after 1161 ms: Connection refused
Can anyone point out the issue here? Thanks in advance.
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-yaml-hello
spec:
  selector:
    app: k8s-yaml-hello
  ports:
  - port: 3000
    targetPort: 3000
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-yaml-hello-ingress
  labels:
    name: k8s-yaml-hello-ingress
spec:
  rules:
  - host: k8s-yaml-hello.info
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: k8s-yaml-hello
            port:
              number: 3000
externalName.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-yaml-hello-internal
spec:
  ports:
  - name: ''
    appProtocol: http
    protocol: TCP
    port: 3000
  type: ExternalName
  externalName: k8s-yaml-hello.info
/etc/hosts
127.0.0.1 k8s-yaml-hello.info
As you are getting the error curl: (7) Failed to connect:
That error message means that no web server is listening on the specified IP and the specified (or implied) port.
Check using nano /etc/hosts whether the domain is pointing to the correct IP or not. If it's not, provide the correct IP.
Refer to this SO for more information.
In ingress.yaml use port 80, and in service.yaml the port should also be 80. The service port and target port should be different; as per your YAML they are the same. Change it to 80 and try again. If you get any errors, post them here.
The problem is that minikube tunnel by default binds to the localhost address 127.0.0.1. Every node, machine, VM, container, etc. has its own localhost address, and it is always the same address: it exists so that local services can be reached without knowing the IP address of a network interface (the service is running on "myself"). So when k8s-yaml-hello.info resolves to 127.0.0.1, it points to a different service depending on which container you are in (it just points to "myself").
To make it work the way you want, you first have to find out the IP address of your host's network interface, e.g. with ifconfig. Its name is something like eth0 or en0, depending on your system.
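For example (interface names vary by system; eth0 and en0 are just common defaults):
ip -4 addr show eth0        # Linux
ifconfig en0 | grep inet    # macOS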
Then you can use the bind-address option of minikube tunnel to bind to that address instead:
minikube tunnel --bind-address=192.168.1.10
With this your service should be reachable from within the container. Please check first with the ip address:
curl http://192.168.1.10
Then make sure name resolution with /etc/hosts works in your container with dig, nslookup, getent hosts or something similar that is available in your container.
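A quick verification sketch from inside a pod (the pod name my-test-pod and the address 192.168.1.10 are placeholders for your own pod and host interface IP):
kubectl exec -it my-test-pod -- sh
getent hosts k8s-yaml-hello.info        # should resolve to the host interface IP, not 127.0.0.1
curl -v http://192.168.1.10             # ingress via the tunnel's bind address
curl -v http://k8s-yaml-hello-internal  # ExternalName service; depends on the resolution checked above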
Let's say we have a Kubernetes service which serves both a RESTful HTTP API and a gRPC API:
apiVersion: v1
kind: Service
metadata:
  namespace: mynamespace
  name: myservice
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: grpc
We want to be able to reach those service endpoints externally, for example from another Kubernetes cluster.
This could be achieved by changing the service type from ClusterIP to LoadBalancer. However, let's assume that this is not desirable, for example because it requires additional public IP addresses.
An alternative approach would be to use the apiserver proxy which
connects a user outside of the cluster to cluster IPs which otherwise might not be reachable
This works with the http endpoint. For example, if the http API exposes an endpoint /api/foo, it can be reached like this:
http://myapiserver/api/v1/namespaces/mynamespace/services/myservice:http/proxy/api/foo
Is it somehow possible to also reach the gRPC service via the apiserver proxy? It would seem that, since gRPC uses HTTP/2, the apiserver proxy won't support it out of the box, e.g. doing something like this on the client side...
grpc.Dial("myapiserver/api/v1/namespaces/mynamespace/services/myservice:grpc/proxy")
... won't work.
Is there a way to connect to a gRPC service via the apiserver proxy?
If not, is there a different way to connect to the gRPC service from external, without using a LoadBalancer service?
You can use a NodePort service. Each of your k8s workers will start listening on some high port. You can connect to any of the workers and your traffic will be routed to the target service.
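A minimal sketch of such a NodePort service for the gRPC port (the service name and the nodePort value 32051 are arbitrary choices; nodePort must fall in the cluster's NodePort range, 30000-32767 by default):
apiVersion: v1
kind: Service
metadata:
  namespace: mynamespace
  name: myservice-grpc-nodeport     # placeholder name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - name: grpc
    port: 8080
    targetPort: 8080
    nodePort: 32051                 # arbitrary port in the default 30000-32767 range
    protocol: TCP
A gRPC client could then dial <any-worker-ip>:32051.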
The apiserver-proxy solution looks like a workaround to me and is far from a production-grade solution. You shouldn't route traffic to your services through the k8s API servers (even though it's technically possible). The control plane should be doing only control plane things, not data plane work (traffic routing, running workloads, ...).
A LoadBalancer service can typically be configured to create an internal LB (with an internal IP from your VPC) instead of an external LB. This is frankly the only 'correct' solution.
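The exact mechanism is provider-specific and usually driven by an annotation; as a sketch, on GKE it could look roughly like this (other clouds use different annotations):
apiVersion: v1
kind: Service
metadata:
  namespace: mynamespace
  name: myservice-internal-lb       # placeholder name
  annotations:
    networking.gke.io/load-balancer-type: "Internal"   # GKE-specific annotation
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - name: grpc
    port: 8080
    targetPort: 8080
    protocol: TCP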
...not to require an additional public IP
A NodePort is not bound to a public IP. That is, your worker node can sit in a private network and be reachable at the node's private IP:nodePort. In the meantime, you can use kubectl port-forward --namespace mynamespace service/myservice 8080:8080 and connect through localhost.
I have a host running HAProxy already. It's been up and running since before I did anything with Kubernetes. It works flawlessly as a reverse proxy and SSL terminator for various web things in various Docker containers on various other host machines.
Now I have a Kubernetes cluster up and running across some of those other machines. I've created the NodePort Service that exposes port 30080 on each worker node, as follows:
apiVersion: v1
kind: Service
metadata:
  name: snginx
  labels:
    app: nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local # Cluster or Local
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
    nodePort: 30080
From the machine running HAProxy (which is not part of the cluster), I can curl the NodePort successfully ( curl 10.0.0.20:30080 ), and I get "Welcome to nginx!..." However, if I set that NodePort as a backend in HAProxy, I get a 503 "No server is available", and HAProxy traffic log says:
localhost haproxy[18489]: [redactedIP]:49399 [30/Aug/2021:19:24:00.764] http-in~ load/load 66/0/-1/-1/67 503 212 - - SC-- 1/1/0/0/3 0/0 "GET / HTTP/1.1"
The haproxy admin log says:
Aug 30 20:07:13 localhost haproxy[18839]: Server load/load is DOWN, reason: Layer4 connection problem, info: "General socket error (Permission denied)"
However, I've disabled the firewall with
sudo systemctl disable --now firewalld
and verified the status is not running. Also, SELinux was disabled when I installed the cluster. Also, I can ping 10.0.0.20 just fine.
"load" is the hostname I'm using for testing load balancing (i.e. load.mydomain.com).
Also, if I use PAT on my physical router to route directly to that NodePort, from outside the building, it works as expected.
What gives? What's the difference between the proxied request and curl?
Thank you.
SELinux is the difference. That is, SELinux on the HAProxy host (not a cluster node):
"SELinux only allows the web server to make outbound connections to a limited set of ports"
That is, you can't make an outbound HTTP request to any port in the NodePort range (30000-32767) without opening that port on the "client", which is the HAProxy server in this case.
sudo semanage port --add --type http_port_t --proto tcp 30080
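To confirm the change took effect (assuming semanage is available on the HAProxy host):
sudo semanage port -l | grep http_port_t    # 30080 should now appear in the allowed list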
I have a CockroachDB instance running in a Kubernetes cluster on Google Kubernetes Engine. I am trying to expose port 26257 so I can connect to it from my local machine.
As stated in this answer, port forwarding to the pod will not work.
I have an nginx-ingress controller which is used to map from my domain name paths to services, so I tried to use that:
I changed my db-cockroachdb-public service from ClusterIP to NodePort:
type: NodePort
I added these lines to my nginx-controller YAML:
- name: postgresql
  nodePort: 30472
  port: 26257
  protocol: TCP
  targetPort: 26257
and these lines to my ingress YAML:
- host: db.mydomain.com
http:
paths:
- path: /
backend:
serviceName: db-cockroachdb-public
servicePort: 26257
However, I'm unable to connect to the database - connection gets refused. I also tried to disable SSL redirects in the nginx controller, but it still doesn't work.
I also tried a ConfigMap but it didn't do anything:
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md
There are a few ways to fix this. Most are related to changing your ingress configuration or how you're connecting to the service, which I'm not going to go into. Another option is to make port forwarding work to eliminate the need for the ingress machinery.
You can make port forwarding work by modifying the CockroachDB config file slightly. Change the name of the --host flag in the invocation of the Cockroach binary to --advertise-host instead. That way, the process will listen on localhost in addition to its hostname, which will make port forwarding work.
edit: To follow up on this, I've switched the default configuration in the CockroachDB repo to use --advertise-host instead of --host, so port forwarding works by default now.
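With that change, a port-forward session could look roughly like this (the pod name cockroachdb-0 is an assumption based on the usual StatefulSet naming, and the CLI flags vary slightly between CockroachDB versions):
kubectl port-forward cockroachdb-0 26257:26257      # forwards localhost:26257 to the pod
cockroach sql --insecure --host=localhost:26257     # or --host=localhost --port=26257 on older CLIs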
I don't know if it technically should work to proxy CockroachDB through an nginx instance, but your setup fails for another reason: the servicePort in the rules section tells Kubernetes which port of the service the ingress should forward to. The ingress itself still listens on the default ports 80/443, not on your desired port. So you should try connecting on port 80 in your case.
I have the following service configuration:
kind: Service
apiVersion: v1
metadata:
  name: web-srv
spec:
  type: NodePort
  selector:
    app: userapp
    tier: web
  ports:
  - protocol: TCP
    port: 8090
    targetPort: 80
    nodePort: 31000
and an nginx container is behind this service. Although I can access the service via the nodePort, the service is not accessible via the port field. I'm able to see the configs with kubectl and the Kubernetes dashboard, but curling that port (e.g. curl http://192.168.0.100:8090) raises a Connection Refused error.
I'm not sure what the problem is here. Do I need to make sure some proxy service is running inside the node or container?
Get the IP of the kubernetes service and then hit 8090; it will work.
nodePort implies that the service is bound to the node at port 31000.
These are the 3 things that will work:
curl <node-ip>:<node-port> # curl <node-ip>:31000
curl <service-ip>:<service-port> # curl <svc-ip>:8090
curl <pod-ip>:<target-port> # curl <pod-ip>:80
So now, let's look at 3 situations:
1. You are inside the kubernetes cluster (you are a pod)
<service-ip> and <pod-ip> and <node-ip> will work.
2. You are on the node
<service-ip> and <pod-ip> and <node-ip> will work.
3. You are outside the node
Only <node-ip> will work assuming that <node-ip> is reachable.
The behavior is as expected, since I assume you are trying to access the service from outside the cluster. That means only the nodePort exposes the service to the world outside the cluster. The port field is the service's cluster-internal port, which maps to the targetPort exposed by the container inside the pod. This is generally the desired behavior, since clusters of services are typically fronted by a load balancer: the load balancer exposes the port you want for your service (e.g. load-balancer:80) and forwards to the nodePort on all nodes to distribute the load.
If you are accessing the service from inside the cluster, you should be able to reach it via service-name:service-port thanks to the built-in DNS.
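For example, from any pod in the cluster (the default namespace is an assumption):
curl http://web-srv:8090                              # same namespace
curl http://web-srv.default.svc.cluster.local:8090    # fully qualified form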
More detailed information can be found at the docs.