Has anyone had any luck running the Ignite visor in a Kubernetes environment? Should it be run from its own pod? Would I need to open extra ports or configure the ignite service differently? So far I have had no luck, but my experience with Ignite is fairly shallow.
To run Ignite Visor in Kubernetes, you need to configure it exactly the same way as regular Ignite nodes, which means configuring DiscoverySpi and CommunicationSpi.
Here is a link to the documentation on configuring Ignite in a Kubernetes environment: https://apacheignite.readme.io/docs/kubernetes-deployment
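For illustration, a node configuration along these lines should work both for the server nodes and for Visor; you would mount the same file into every pod, e.g. from a ConfigMap. This is only a sketch: the namespace, service name, and file name are placeholders, and the namespace/serviceName setters shown here match older 2.x releases of the ignite-kubernetes module, so check the exact property names for your Ignite version.

cat > node-config.xml <<'EOF'
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <!-- DiscoverySpi: resolve peer addresses through the Kubernetes API -->
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <!-- one fixed discovery port for every instance, instead of a port range -->
                <property name="localPort" value="47500"/>
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
                        <property name="namespace" value="ignite"/>
                        <property name="serviceName" value="ignite-service"/>
                    </bean>
                </property>
            </bean>
        </property>
        <!-- CommunicationSpi: one fixed communication port as well -->
        <property name="communicationSpi">
            <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
                <property name="localPort" value="47100"/>
            </bean>
        </property>
    </bean>
</beans>
EOF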
In Kubernetes you have to use one single, identical network port for all Apache Ignite instances, including Visor, instead of a port range, for discovery and communication between instances [1]. This is because you cannot expose a port range for a pod in k8s. Moreover, you have to be sure that the instances in the cluster can see each other, so you have to use a special discovery SPI. By default, if you start Visor in a pod where one instance is already running, Visor cannot obtain the same port and takes another one from the range; as a result it doesn't see the other nodes in the cluster, or sees only the one node in the pod where it was started.
If this is the case, then I'd recommend starting a separate pod with the same config but with another CMD, one that doesn't start a server node but runs a sleep loop instead, so that k8s won't kill the pod. Then you can kubectl exec -ti pod-id -- bash and start Visor/Sqlline/Control with the same config you've provided for the other instances.
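As a rough sketch of that idea (the image, namespace, service account, ConfigMap name and paths below are placeholders; in practice copy the spec of your server pods and only override the command):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: ignite-visor
  namespace: ignite
spec:
  serviceAccountName: ignite              # same service account as the server pods
  containers:
    - name: visor
      image: apacheignite/ignite:2.8.1    # use the same image/version as your server nodes
      command: ["/bin/sh", "-c", "while true; do sleep 3600; done"]   # no server node, just keep the pod alive
      volumeMounts:
        - name: config
          mountPath: /opt/ignite/config   # the same node configuration the servers use
  volumes:
    - name: config
      configMap:
        name: ignite-config
EOF

kubectl -n ignite exec -ti ignite-visor -- bash
# inside the pod, for example:
#   $IGNITE_HOME/bin/ignitevisorcmd.sh
#   visor> open -cpath=/opt/ignite/config/node-config.xml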
[1] https://apacheignite.readme.io/docs/network-config
Hope this helps.
Related
My application is running in a pod container in a Kubernetes cluster. Every time it starts in the container it allocates a random port. I would like to access my application from outside (from another pod or node, for example), but since it allocates a random port I cannot create a service (NodePort or LoadBalancer) to map the application port to a specific port so it can be accessed.
What are the options to handle this case in a kubernetes cluster?
Not supported; check out the issue here. Even with Docker, if your port range is overly broad, you can hit issues as well.
One option to solve this would be to use a dynamically configured sidecar proxy that takes care of routing the pods traffic from a fixed port to the dynamic port of the application.
This has the upside that the service can be scaled even if all instances have different ports.
But this also has the potentially big downside that every request incurs added overhead due to the extra network hop, and even if it is just a few milliseconds this can have quite an impact depending on how the application works.
Regarding the dynamic configuration of the proxy you could e.g. use shared pod volumes.
Edit:
Containers within a pod are a little bit special. They are able to share predefined volumes and their network namespace.
For shared volumes this means you can define a volume, e.g. of type emptyDir, and mount it in both containers. The result is that you can share information between the two containers by writing it into that volume in the first container and reading it in the second.
For networking this makes communication between containers of one pod easier, because you can use the loopback interface to communicate between your containers. In your case this means the sidecar proxy container can call your service via localhost:<dynamic-port>.
For further information take a look at the official docs.
So how is this going to help you?
You can use a proxy like Envoy and configure it to watch for dynamic configuration changes. The source of the dynamic configuration should be a volume shared between your application container and the sidecar proxy container.
After your application has started and allocated its dynamic port, you can automatically generate the configuration for the Envoy proxy and save it in the shared volume, the same source Envoy is watching thanks to the aforementioned configuration.
Envoy itself acts as a reverse proxy: it can listen statically on port 8080 (or any other port you like) and route the incoming network traffic to your application's dynamic port by calling localhost:<dynamic-port>.
Unfortunately I don't have any predefined configuration for this use case but if you need some inspiration you can take a look at istio - they use something quite similar.
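To make the layout a bit more concrete, the pod could look roughly like this. The image names and paths are made up, and the Envoy configuration that actually watches the shared file is left out, as mentioned above:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: app-with-proxy
spec:
  volumes:
    - name: proxy-config
      emptyDir: {}                        # shared between both containers
  containers:
    - name: app
      image: my-registry/my-app:latest    # your application; writes the generated Envoy config to /shared
      volumeMounts:
        - name: proxy-config
          mountPath: /shared
    - name: envoy
      image: envoyproxy/envoy:v1.22.0     # any recent Envoy image
      ports:
        - containerPort: 8080             # the fixed port your Service points at
      volumeMounts:
        - name: proxy-config
          mountPath: /etc/envoy           # Envoy reads/watches its configuration here
EOF

The Service then only ever targets port 8080, no matter which port the application picked.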
As gohm'c mentioned, there is no built-in way to do this.
As a workaround you can run a script to adjust the service:
Start the application and retrieve the chosen random port from the application
Modify the Kubernetes service, load balancer, etc. with the new port
The downside of this is that if the port changes, there will be a short delay before the service is updated afterwards.
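A rough sketch of such a script, assuming the pod and service names below (my-app-pod, my-app-svc) and that netstat is available inside the container image:

# 1. read the port the application picked
APP_PORT=$(kubectl exec my-app-pod -- netstat -tlnp 2>/dev/null \
  | grep my-app | awk '{print $4}' | awk -F: '{print $NF}')

# 2. point the Service's targetPort at it (JSON patch on the first port entry)
kubectl patch service my-app-svc --type=json \
  -p "[{\"op\": \"replace\", \"path\": \"/spec/ports/0/targetPort\", \"value\": ${APP_PORT}}]"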
How about this:
Deploy and start your application in a Kubernetes cluster.
Run kubectl exec <pod name here> -- netstat -tulpn | grep "application name" to identify the port number associated with your application.
Run kubectl port-forward <pod name here> <port you like>:<random application port> to access the pod of your application.
Would this work for you?
Edit: This is very similar to Lukas Eichler's response.
I am working on a Kubernetes integration of the database Apache IoTDB, which supports a cluster mode. Currently, to start a cluster, each node needs to know the IP addresses of all other nodes in its "ensemble" upfront, before starting.
I think the default approach to this in Kubernetes would generally be to use a StatefulSet and a headless Service (a rough manifest sketch follows the list below). And the startup sequence I sketched out would be something like this:
start each pod with a container which contains a "pre-start" script
In the pre-start script, wait until all other pods of the set are started and get all their IP addresses from the headless service
update the configs / env variables with all cluster IP addresses
start the iotdb instance(s)
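For reference, the rough shape I have in mind is something like this (names, image and port are placeholders):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: iotdb                 # headless service: no cluster IP, just DNS records for the pods
spec:
  clusterIP: None
  selector:
    app: iotdb
  ports:
    - name: rpc
      port: 6667
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: iotdb
spec:
  serviceName: iotdb
  replicas: 3
  selector:
    matchLabels:
      app: iotdb
  template:
    metadata:
      labels:
        app: iotdb
    spec:
      containers:
        - name: iotdb
          image: my-registry/iotdb-cluster:latest   # image whose entrypoint runs the pre-start script first
          ports:
            - containerPort: 6667
EOF
# the pods are then reachable as iotdb-0.iotdb.<namespace>.svc.cluster.local, iotdb-1.iotdb..., etc.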
So the only question I have is for step 2: When do I know that all pods are started?
Is the update of the DNS / A records of the headless Service atomic in the sense that I see all pods or no pod?
Or do I have to query the API server separately to see the number of replicas and then wait until I have received all their records from the headless service?
Or is there a more kubernetes-like way to achieve that?
Thanks in advance!
When the DNS name of a headless service is resolved, it returns the list of pod IPs from the underlying Endpoints object. The Endpoints object always holds the list of Ready pods.
So, on resolving the headless service's DNS name, you will get the list of pods that are Ready at that moment.
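So a pre-start script could simply poll DNS until it sees as many records as you expect. This is only a sketch: the service name is an example, the expected count is whatever you choose to pass in, and it assumes getent is available in the image (swap in nslookup otherwise).

EXPECTED=${EXPECTED_REPLICAS:-3}
SVC=iotdb.default.svc.cluster.local

# wait until the headless service resolves to the expected number of A records
while true; do
  COUNT=$(getent ahostsv4 "$SVC" | awk '{print $1}' | sort -u | wc -l)   # one IP per Ready pod
  [ "$COUNT" -ge "$EXPECTED" ] && break
  echo "waiting: $COUNT/$EXPECTED pods resolvable"
  sleep 2
done

# collect all peer IPs to put into the configs / env variables
PEERS=$(getent ahostsv4 "$SVC" | awk '{print $1}' | sort -u | paste -sd, -)
echo "cluster peers: $PEERS"

Keep in mind that, because only Ready pods are listed, the containers have to report Ready before the peers are fully configured; if that is a problem for your bootstrap, publishNotReadyAddresses on the headless service is worth looking at.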
I installed and configured a 3-node K8s cluster. The worker nodes are Windows nodes. We have one .NET application. We want to containerize this application. This application internally uses Apache Ignite for the distributed cache.
We built a Docker image for this application, wrote a deployment file, and deployed it to the K8s cluster. The deployment also creates a service of type "LoadBalancer". Using this service we connect to the application from the outside world. All is good so far.
Coming to the issue: as we are using Apache Ignite for the distributed cache, one of the pods will be the master. We want to always forward the traffic to the pod that is acting as the master node in the Apache Ignite cluster. The Apache Ignite master node identification must be dynamic.
I had gone through the link below. There the pod configuration is static. We want to dynamically identify the master pod and forward the traffic to it. What do we have to do on the service side?
https://appscode.com/products/voyager/7.4.0/guides/ingress/http/statefulset-pod/
Any help on how to forward the traffic to the POD is greatly appreciated.
Given that you have a leader/follower topology, the ask to direct traffic to a particular node (the master node) is flawed for a couple of reasons:
What happens when the current leader fails over and there is a new election to select a new leader?
The fact that pods are ephemeral means they should not have major roles to play in production; instead, work with deployments and their replicas. What you are trying to achieve is an anti-pattern.
In any case, if this is what you want, maybe you want to read about gateways in Istio, which can be found here.
I have Minikube running Kubernetes inside VirtualBox.
One of the Docker containers it runs is an Ignite server.
During development I try to access the Ignite server from an outside Java client, but discovery fails with all the configurations I have tried.
Is it possible at all?
If yes, can someone give an example?
To enable auto-discovery of Apache Ignite nodes in Kubernetes, you need to enable TcpDiscoveryKubernetesIpFinder in the IgniteConfiguration. Read more about this at https://apacheignite.readme.io/docs/kubernetes-deployment. Your Kubernetes service definition should specify the container's exposed ports; then Minikube should give you a service URL after a successful deployment.
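For the "exposed ports" part, a service definition along these lines would do. The names are examples and the ports listed are Ignite's defaults, so expose the ones your client actually uses:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: ignite
spec:
  type: NodePort                # reachable from outside the Minikube VM
  selector:
    app: ignite
  ports:
    - name: thin-client
      port: 10800
    - name: discovery
      port: 47500
    - name: communication
      port: 47100
EOF

minikube service ignite --url   # prints the externally reachable URL(s)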
I am trying to run software inside Kubernetes that opens more ports at runtime based on various operations. Is it possible to open more ports on the fly in a Kubernetes pod? It does not seem to be possible at the Docker level (Exposing a port on a live Docker container), which means Kubernetes can't do it either (I guess?)
Each Pod in Kubernetes gets its own IP address. So a container (application) in a Pod can use any port as long as that port is not used by any other container within the same Pod.
Now, if you want to expose those dynamic ports, it will require additional configuration. Ports are exposed using Services, and the Service configuration has to be updated to include those dynamic ports.
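For example, a Service like the following only forwards the ports it lists, so every port the application opens at runtime has to be appended to it, e.g. with kubectl patch or a small controller watching the application (names and ports below are placeholders):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - name: base
      port: 8080
      targetPort: 8080
    # a port opened later at runtime has to be added here before it becomes reachable
    - name: dynamic-1
      port: 9100
      targetPort: 9100
EOF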