How can k8s be aware of a service running on the host? - kubernetes

I am running Cockpit on my host; it runs as a systemd service. At the same time I am using microk8s (single-node k8s).
I don't need a reverse proxy to the Cockpit web UI, but I need something like an external Service/Endpoints object mapping, so that k8s is aware of that service.
k8s should not allow creating another NodePort service with the same port (at the k8s layer, so it does not collide at the OS layer).
k8s should handle the iptables rules for that service.
Btw: Cockpit must run as a standalone application directly on the host.
Any ideas?
Thanks
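A common way to get this kind of mapping is a Service without a selector, backed by a manually managed Endpoints object that points at the host. A minimal sketch, assuming Cockpit listens on its default port 9090 and the host IP is 192.168.1.10 (both placeholders):

```yaml
# Service with no selector, so k8s does not try to match pods.
apiVersion: v1
kind: Service
metadata:
  name: cockpit          # hypothetical name
spec:
  ports:
  - port: 9090           # Cockpit's default web UI port (assumed)
    targetPort: 9090
---
# Manually managed Endpoints; the name must match the Service.
apiVersion: v1
kind: Endpoints
metadata:
  name: cockpit
subsets:
- addresses:
  - ip: 192.168.1.10     # the host's IP (placeholder)
  ports:
  - port: 9090
```

kube-proxy then programs iptables rules for the ClusterIP as for any other Service. Note, however, that this does not reserve the host port itself, so a NodePort collision on 9090 would still have to be avoided separately.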

Related

Access Kubernetes applications via localhost from the host system

Is there any way other than port-forwarding to access the apps running inside my K8s cluster via http://localhost:port from my host operating system?
For example:
I am running a minikube setup to practise K8s, and I deployed three pods along with their services, choosing three different service types: ClusterIP, NodePort and LoadBalancer.
For ClusterIP, I can use the port-forward option to access my app via localhost:port, but the problem is that I have to leave that command running, and if for some reason it is interrupted, the connection will be dropped. Is there an alternate solution here?
For NodePort, I can only access the service via the minikube node IP, not via localhost; therefore, if I have to access it remotely, I won't have a route to that node IP address.
For LoadBalancer, this is not a valid option, as I am running minikube on my local system, not in the cloud.
Please let me know if there is any other solution to this problem. The reason I am asking is that when I deploy the same application via docker compose, I can access all these services via localhost:port, and I can even call them via VM_IP:port from other systems.
Thanks,
-Rafi
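For comparison, a NodePort Service pins a stable port on the node instead of relying on a long-running port-forward process. A minimal sketch, where all names and ports are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web              # placeholder
spec:
  type: NodePort
  selector:
    app: web             # must match the pod labels
  ports:
  - port: 80             # cluster-internal port
    targetPort: 8080     # container port (assumed)
    nodePort: 30080      # must fall in the default 30000-32767 range
```

With some minikube drivers the node IP is not routable from other machines; `minikube service web --url` or a port-forward bound to all interfaces (`kubectl port-forward --address 0.0.0.0 svc/web 8080:80`) are common workarounds.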

How to discover services deployed on kubernetes from the outside?

The User Microservice is deployed on kubernetes.
The Order Microservice is not deployed on kubernetes, but registered with Eureka.
My questions:
How can the Order Microservice discover and access the User Microservice through Eureka?
First, let's take a look at the problem itself:
If you use an overlay network as the Kubernetes CNI (e.g. Flannel), the problem is that it creates an isolated network that is not reachable from the outside. If you have a network like that, one solution would be to move the Eureka server into Kubernetes, so that Eureka can reach both the service inside Kubernetes and the service outside of it.
Another solution would be to tell Eureka where it can find the service instead of relying on auto-discovery, but for that you also need to make the service externally available with a Service of type NodePort, HostPort or LoadBalancer, or with an Ingress. I'm not sure it's possible, but section 11.2 in the following doc could be worth a look: Eureka Client Discovery.
The third solution would be to use a CNI that does not use an overlay network, such as Romana, which makes services externally routable by default.
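For the second approach, Spring Cloud's Eureka client can be told to register an externally reachable address instead of the pod IP. A sketch of an application.yml, where all hostnames and ports are placeholders:

```yaml
# Registers this in-cluster service with an external Eureka
# under an address that is routable from outside the cluster.
eureka:
  instance:
    hostname: node1.example.com     # a node address reachable from outside
    non-secure-port: 30080          # the NodePort exposing this service
    prefer-ip-address: false        # use the hostname, not the pod IP
  client:
    service-url:
      defaultZone: http://eureka.example.com:8761/eureka/
```

This only works if the NodePort (or LoadBalancer) actually routes to the service, since Eureka hands that address out verbatim to other clients.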

load balance an external service in my kubernetes cluster

I have a Kubernetes cluster installed.
I want to use an external database (outside my cluster) for one of my microservices, but this external DB is set up as a cluster and does not have its own load balancer.
Is there a way to create an internal load-balancer service that will allow Kubernetes to always direct the microservices to a running instance?
I saw that you can set a service of type LoadBalancer; how can I use it?
I tried creating it, but I saw the LoadBalancer service was created with a NodePort. Can it be used without a NodePort?
Many thanks.
You cannot, as far as I'm aware, have a Kubernetes Service health-check an external service and provide load balancing to it.
A Service of type=LoadBalancer refers to cloud providers' load balancers, like ELB on AWS. It automates the process of attaching NodePort services to those cloud-provider load balancers.
You may be able to get the solution you require using a service mesh like Istio, but that's a relatively complex setup.
I would recommend putting something like HAProxy/keepalived or IPVS in front of your database.
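A minimal HAProxy sketch in TCP mode for such a setup, assuming a database listening on 5432; all addresses and ports are placeholders:

```
defaults
    mode tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

frontend db_in
    bind *:5432
    default_backend db_nodes

backend db_nodes
    option tcp-check                         # plain TCP health check
    server db1 10.0.0.11:5432 check
    server db2 10.0.0.12:5432 check backup   # fail over to db2
```

keepalived can then float a virtual IP across two such HAProxy instances, so the proxy itself is not a single point of failure.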

Communication among pods not working after exposing as service

I have set up a Kubernetes cluster manually. The cluster is healthy, the nodes are up, and the pods and services are created and running.
I have a web pod, which is a Python Flask application, and a db pod, which is Redis. I exposed Redis as a service so that it is accessible from Python, and exposed the web pod as an external service as well. The external service runs on port 31727.
When I access the web application through the browser, it reports that the Redis host is not accessible.
The application works well when deployed in a Kubernetes cluster created using kubeadm/kops.
This sounds like a kube-proxy or overlay-networking issue at first glance. Are you sure kube-proxy is launched on the nodes and you have a working overlay? Can you ping pods directly, pod-to-pod?
Update: since your pod-to-pod connectivity is down, you need to look into your flannel configuration and make sure it works fine, as well as make sure pods are started with flannel networking (i.e. via CNI) rather than on the local docker0 interface network.
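For reference, flannel's network config (net-conf.json, typically stored in the kube-flannel ConfigMap or in etcd) looks like the sketch below. The Network CIDR must match the pod CIDR the cluster was set up with; 10.244.0.0/16 is only the common default, not necessarily yours:

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```

It is also worth checking that each node has a flannel CNI config present under /etc/cni/net.d/, since without it the kubelet falls back to whatever other network config it finds.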

Hitting an endpoint of HeadlessService - Kubernetes

We wanted pod names to be resolved to IPs to configure the seed nodes in an Akka cluster. This was happening by using a headless service together with StatefulSets in Kubernetes. But how do I expose a headless service externally, to hit an endpoint from outside?
It is hard to expose a headless Kubernetes service to the outside, since this would require some complex TCP proxies. The reason is that a headless service is only a DNS record with an IP for each pod, and those IPs are only reachable from within the cluster.
One solution is to expose the pods via node ports, which means the ports are opened on the host itself. Unfortunately this makes service discovery harder, because you don't know which host has a pod scheduled on it.
You can set up node ports via:
the services: https://kubernetes.io/docs/user-guide/services/#type-nodeport
or directly in the pod by defining spec.containers[].ports[].hostPort
Another alternative is to use a LoadBalancer, if your cloud provider supports it. Unfortunately you cannot address each instance individually, since they share the same IP. This might not be suitable for your application.
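As an illustration of the hostPort variant, a pod sketch where the image name is a placeholder and 2552 is the classic Akka remoting default:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: akka-seed-0
spec:
  containers:
  - name: akka
    image: example/akka-app:latest   # placeholder image
    ports:
    - containerPort: 2552            # Akka remoting port (assumed default)
      hostPort: 2552                 # opened directly on the node
```

The pod is then reachable at node-IP:2552, but only from networks that can route to that node, and only while the pod is scheduled there.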