In a Kubernetes cluster I have a node-specific HTTP service deployed on every node (using a DaemonSet). This service returns node-specific data which is otherwise not available via the cluster/remote API. I cannot make use of a plain Kubernetes Service, as that would result in a kind of service roulette: the client cannot control the exact node to which the HTTP request is forwarded. Since the service needs to return node-specific data, this would cause data to be returned for a random node, but not for the node the client wants.
My suspicion is that I need a reverse proxy that uses part of its own URL path to relay an incoming HTTP request deterministically to exactly the node the client indicates. This proxy could then in turn be accessed by clients through the cluster/remote API's service proxy functionality.
http://myservice/node1/... --> http://node1:myservice/...
http://myservice/node2/... --> http://node2:myservice/...
...
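(For context, by "API service proxy functionality" I mean access patterns like the following, where the API server proxies the request on to a service; all names here are illustrative:)
http://<apiserver>/api/v1/namespaces/<namespace>/services/myproxy:80/proxy/node1/...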
Is there a ready-made pod (or Helm chart) available that maps a service running on all cluster nodes to a single proxy URL, with a path component specifying the node whose service instance to relay to? Is there some way to restrict the reverse proxy to relay only to those nodes specified in the DaemonSet pod spec that defines my per-node service?
Additionally, is there some ready-made "hub page" available for the reverse proxy, listing/linking only those nodes where my service is currently running?
Or is this something where I need to create my own reverse proxy setup? Is there some integration between, e.g., nginx and Kubernetes?
This is almost impossible with a DaemonSet, because you can't add a unique label to each pod of a DaemonSet. If you need to distribute one pod per node, you can instead use podAntiAffinity with a StatefulSet or Deployment.
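For illustration, a minimal sketch of such an anti-affinity rule in a Deployment (the app label and image are assumptions; the topologyKey kubernetes.io/hostname keeps replicas on distinct nodes):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-node-service
spec:
  replicas: 3                      # roughly one per node, kept apart by the rule below
  selector:
    matchLabels:
      app: my-node-service
  template:
    metadata:
      labels:
        app: my-node-service
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: my-node-service
            topologyKey: kubernetes.io/hostname   # at most one such pod per node
      containers:
      - name: my-node-service
        image: my-node-service:latest            # assumed image
        ports:
        - containerPort: 9376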
Then create a service for each node:
kind: Service
apiVersion: v1
metadata:
  name: svc-for-node1
spec:
  selector:
    nodename: unique-label-for-pod-on-node
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
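Note that if you use a StatefulSet, Kubernetes automatically labels each pod with statefulset.kubernetes.io/pod-name, so a per-node Service can select a single pod without any custom labelling. A sketch, assuming the StatefulSet is named my-node-service:
kind: Service
apiVersion: v1
metadata:
  name: svc-for-node1
spec:
  selector:
    statefulset.kubernetes.io/pod-name: my-node-service-0   # first pod of the StatefulSet
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376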
And finally, set up an Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /svc-for-node1
        backend:
          serviceName: svc-for-node1
          servicePort: 80
      - path: /svc-for-node2
        backend:
          serviceName: svc-for-node2
          servicePort: 80
Related
Is it possible to expose a service through an Ingress when the service is of type ClusterIP?
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service
  ports:
  - name: my-service-port
    port: 4001
    targetPort: 4001
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.example.com
    http:
      paths:
      - path: /my-service
        backend:
          serviceName: my-service
          servicePort: 4001
I know the service can be exposed with type NodePort, but that may cost one more NAT hop. Could someone show me the fastest way to reach an internal service from the internet in the cloud?
No, a ClusterIP is only reachable from within the cluster. An Ingress is essentially just a set of layer-7 forwarding rules; it does not handle the layer-4 requirements of exposing the internals of your cluster to the outside world. At least one NAT step is required.
For Ingress to work, though, you need to have at least one Service involved that exposes your workload externally, i.e. NodePort or LoadBalancer. Your ingress controller and the infrastructure of your cluster will determine which of the two you need to use.
In the case of the nginx ingress controller, you need a single LoadBalancer Service which the ingress uses to bridge traffic from outside the cluster to the inside. After that, you can use ClusterIP services for each of your workloads.
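For illustration, a minimal sketch of that single LoadBalancer Service in front of the controller (the namespace and selector labels are assumptions and depend on how the controller was installed):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # must match your controller pods' labels
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443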
In your example above, as long as the nginx ingress controller is correctly configured (with a LoadBalancer), the config you are using should work fine.
In short: YES.
Now for the elaborate answer...
First things first, let's have a look at what the official documentation says:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
[...]
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer...
What's confusing here is the term "load balancer". In the definition above we are talking about the classic, well-known load balancer of the web world.
That one has nothing to do with Kubernetes!
So back to the definition: to use an Ingress and make it work, we need a Kubernetes resource called an IngressController. And this resource happens to be a load balancer! That's it.
However, you have to keep in mind that there is a difference between a load balancer in the outside world and a Kubernetes Service of type: LoadBalancer.
So in summary (and in order to redirect traffic from the outside world to your Kubernetes ClusterIP service):
Do you need a load balancer to make your kind: Ingress work? Yes, this is the IngressController Kubernetes resource.
Do you need a Kubernetes Service of type: LoadBalancer or type: NodePort to make your kind: Ingress work? Definitely not! A type: ClusterIP service works just fine!
I have a bare-metal Kubernetes pod running a Tomcat application on port 8085. If it were a regular server, the app would be accessible via http://<server-ip>:8085/app. My goal is to expose Tomcat on the Kubernetes node's address and the same port as used by Tomcat.
I am able to expose and access the app using a NodePort service, but it is inconvenient that the port is always different.
I tried to set up a Traefik ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-tag2
spec:
  rules:
  - host: kubernetes.example.com # in my conf I use the node's domain name
    http:
      paths:
      - path: /test
        backend:
          serviceName: test-tag2
          servicePort: 8085
And I can see the result in Traefik's dashboard, but if I navigate to http://kubernetes.example.com/test/app I still get nothing.
I've tried a bunch of ways to configure that and still no luck.
Is it actually possible to expose my pod in this way?
Did you try specifying a nodePort value in the Service YAML? If specified, Kubernetes will create the Service on the specified NodePort. If the nodePort is not available, Kubernetes doesn't create the Service.
Refer to this answer for more details:
https://stackoverflow.com/a/43944385/1237402
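For example, a sketch of such a Service with a fixed nodePort (the app label is an assumption; note that by default the node port must fall within 30000-32767 unless the API server's --service-node-port-range has been changed, so exposing exactly 8085 this way requires that change):
apiVersion: v1
kind: Service
metadata:
  name: test-tag2
spec:
  type: NodePort
  selector:
    app: test-tag2          # assumed pod label
  ports:
  - port: 8085
    targetPort: 8085
    nodePort: 30085         # fixed instead of randomly assigned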
I have a React frontend and a Node.js backend. Each is in a separate container within separate Pods within the same Kubernetes cluster.
I want to send data between them without having to use IP addresses. I know Kubernetes has a feature that lets pods inside the same cluster talk to each other, and I think it's related to the selector labels defined in the Service files.
I have created a ClusterIP service for my React app and another for my server. I have created an ingress file for my application. I know my ingress works, as I can access my UI and I can hit my server's health-check endpoint, so I know they are exposed to the outside world correctly. My problem is how to communicate internally within Kubernetes.
Within my React app I have tried to write:
axios.post("/api/test", {
value: "TestValue"
});
But the api/test endpoint on my server never gets hit by this.
Backend Server Cluster IP - - - -
apiVersion: v1
kind: Service
metadata:
  name: server-model-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: server-model
  ports:
  - port: 8050
    targetPort: 8050
React UI Cluster IP - - - -
apiVersion: v1
kind: Service
metadata:
  name: react-ui-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: react-ui
  ports:
  - port: 3000
    targetPort: 3000
Ingress File - - - - -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /api/?(.*)
        backend:
          serviceName: react-ui-cluster-ip-service
          servicePort: 8050
      - path: /server/?(.*)
        backend:
          serviceName: server-model-cluster-ip-service
          servicePort: 8050
I understand the label selector is what maps my React ClusterIP service to the Deployment for my UI, and similarly my server ClusterIP service to my Deployment for the server. I think I am right in saying I can use the selector somehow to send axios/HTTP requests to other pods, like...
axios.post("/PODNAME/api/test", {
value: "TestValue"
});
Could anyone tell me if I am completely wrong or missing something obvious, please? :)
In this part of the ingress the service name react-ui-cluster-ip-service is used, and that service runs on port 3000, as you mention in the service spec file.
You are sending traffic to the proper service name, but the port is the wrong one:
- path: /api/?(.*)
  backend:
    serviceName: react-ui-cluster-ip-service
    servicePort: 8050
I think this is why you are not able to send requests to /api/?(.*).
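Following that reasoning, a sketch of the rule with the port matching the react-ui service:
- path: /api/?(.*)
  backend:
    serviceName: react-ui-cluster-ip-service
    servicePort: 3000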
From your service spec file you can also remove type: ClusterIP (it is the default), and you can use just the service name to resolve services inside the Kubernetes cluster.
To answer your question title: containers within a pod can talk over localhost, while containers in separate pods can talk via the service name; there is no need to add a service type of ClusterIP or NodePort for that.
I have a few concerns here.
As @Harsh Manvar mentioned in his answer, Kubernetes provides a mechanism for discovering internal services which guarantees intercommunication between Pods within the same cluster, either by IP address or by the relevant DNS name of the service (from inside the cluster, e.g. http://server-model-cluster-ip-service:8050/api/test). Therefore you should be able to reach your backend server from a particular frontend Pod without involving Ingress at all, as the Ingress controller acts as an edge router and exposes HTTP and HTTPS network traffic from outside the cluster to the corresponding Kubernetes services.
You also used a rewrite expression in your specific path-based routing rules within the Ingress object. In that scenario the rewrite resolves as follows: server-model-cluster-ip-service/api/test is rewritten to server-model-cluster-ip-service/test, and this is the final path your backend service sees. Given that, the axios.post("server-model-cluster-ip-service/api/test", { value: "TestValue" }) request invoked from the React UI Pod might not hit the target backend service.
These are just some points to consider for further troubleshooting; at the very least you can log into the frontend Pod and check connectivity to the target backend service.
The environment: I have a Kubernetes cluster set up with namespaces for "dev", "sit" and "prod". In each of these namespaces I have multiple Services of type: LoadBalancer, each targeting a specific deployment of a dockerised application (I have multiple applications), so I can access each of them just by using the exposed IP address of the Service in whichever namespace I want. An example Service looks like this and is very simple:
apiVersion: v1
kind: Service
metadata:
  name: application1
spec:
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
    name: http
  type: LoadBalancer
  selector:
    app: application1
The problem: I now want to be able to support multiple versions of all applications (ip:/v1/, ip:/v2/ etc.) to allow users to migrate to the new version when they are ready, and I've been trying to implement path-based routing following this guide. I have managed to restructure my architecture so that I have ReplicationControllers and an Ingress which looks at the path rules to route to the correct service.
This seems to work only if I have a single exposed service and a single namespace, because I only have DNS host names for the production environment, I want to use the individual IP address of a service for the other environments, and I can't figure out how to specify ingress rules for a service which doesn't have a hostname.
I could just have a load balancer for every environment and use path-based routing to reach the different services for dev and sit, but that is not ideal, because to access any service we'd now have to use something like ip/application1 and ip/application2 instead of using the service IP address of each application directly. But my biggest problem is that when I followed the guide and created the Ingress, ReplicationController and Service in my SIT namespace, it started affecting the LoadBalancer services in my other two environments (as I understand it, Kubernetes would sometimes try to use the nginx controller from the SIT environment for my DEV services and therefore fail; other times it would use the GCE default configuration and work).
I tried adding the arg "- --watch-namespace=sit" to limit the scope of the ingress controller to sit only, but it does not seem to work.
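For reference, this is roughly where I added the flag in the controller's pod spec (abridged; the container name and binary path may differ in your setup):
spec:
  containers:
  - name: nginx-ingress-controller
    args:
    - /nginx-ingress-controller
    - --watch-namespace=sit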
I now want to be able to support multiple versions of all applications (ip:/v1/, ip:/v2/ etc.)
That is exactly what Ingress can do, but the problem is that you want to use IP addresses for routing, while Ingress uses DNS names for that.
I think the best way to implement this is to use an Ingress which will handle the requests. On GCE, Ingress uses the HTTP(S) load balancer. Yes, you will need a DNS name for that, but it will help you create the routing you need.
Also, I highly recommend using TLS encryption for connections.
You can check LetsEncrypt to get a free SSL certificate.
So, the solution should look like this:
1. Deploy your Services with type "ClusterIP" instead of "LoadBalancer". You can have more than one Service object per application, so you can do this in parallel with your current configuration.
2. Select any namespace (even a dedicated one), for instance "ingress-ns". We need to create Service objects there which point to your services in the other namespaces. Here is an example of such a service (let the new DNS name be "my.shiny.new.domain"):
kind: Service
apiVersion: v1
metadata:
  name: service-v1
  namespace: ingress-ns
spec:
  type: ExternalName
  externalName: <service>.<namespace>.svc.cluster.local # the service name and namespace of your service with version v1
  ports:
  - port: 80
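Similarly, a second service for v2 might look like this (a sketch; the externalName target is a placeholder for your v2 service and namespace):
kind: Service
apiVersion: v1
metadata:
  name: service-v2
  namespace: ingress-ns
spec:
  type: ExternalName
  externalName: <service>.<namespace>.svc.cluster.local # the service name and namespace of your service with version v2
  ports:
  - port: 80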
3. Now we have a namespace with several services pointing to different versions of your application in different namespaces, and we can create an Ingress object which will create an HTTP(S) load balancer on GCE with path-based routing:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  namespace: ingress-ns
spec:
  rules:
  - host: my.shiny.new.domain
    http:
      paths:
      - path: /v1
        backend:
          serviceName: service-v1
          servicePort: 80
      - path: /v2
        backend:
          serviceName: service-v2
          servicePort: 80
Kubernetes will create a new HTTP(S) balancer with the rules you set up in the Ingress object, and you will have a single entry point with cross-namespace path-based routing, without having to use multiple IP addresses.
Actually, you can also manage the primary version of your application with that Ingress and use your primary domain with the "/" path to handle requests to your production version.
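For example, an additional rule for the production version could be sketched as follows (assuming a service-prod ExternalName service exists in ingress-ns, analogous to the ones above):
      - path: /
        backend:
          serviceName: service-prod
          servicePort: 80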
I have an application deployed on OpenShift Container Platform v3.6. It consists of multiple interconnected services.
The frontend service calls a time-consuming function of the backend service (through a REST call), but after 30 seconds it receives a "504 Gateway Timeout" message. The frontend runs behind nginx, but I've already configured it with long proxy send/read timeouts, so the 504 message doesn't come from there. I think it comes from the Service Proxy component of the OpenShift platform, but I can't find out where and how to configure a service-proxy timeout. I know about the HAProxy timeout for external routes, but my services live in the same cluster and communicate with each other via OpenShift Container Platform DNS.
Could it be a Service Proxy timeout issue? How can it be configured?
Thanks!
Your route timeout is the culprit. The HAProxy ingress router is terminating the request. You can configure the timeout by following the docs below:
https://docs.openshift.com/container-platform/3.5/install_config/configuring_routing.html
For example:
# Set the timeout on 'longrunningroute' to five minutes.
oc annotate route longrunningroute --overwrite haproxy.router.openshift.io/timeout=5m
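If you manage the Route object directly, the same annotation can also be set in its manifest (a sketch; the route, host and service names are placeholders):
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: longrunningroute
  annotations:
    haproxy.router.openshift.io/timeout: 5m   # five-minute timeout, as in the CLI example above
spec:
  to:
    kind: Service
    name: backend-service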
In my case I didn't annotate the Route myself but instead added the annotation to the Ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: my-namespace
  annotations:
    haproxy.router.openshift.io/timeout: 600s
spec:
  tls:
  - hosts:
    - example.com
    secretName: https-tls-secret
  rules:
  - host: example.com
    http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
The Routes are managed by the Ingress and therefore inherit its annotations.