I'd like to know if it's possible to get information from a pod as soon as it's created.
I'm developing a Kubernetes controller process that reacts whenever a pod is created in the cluster.
When a pod has just been created, the service has to be able to get some basic information from the pod, for example its IP and annotations.
I'd like to use a java service.
Any ideas?
You can use the Kubernetes API server to get information about Endpoints (for a service). Kubernetes exposes its API via REST, so you can use anything that speaks HTTP to communicate with it. You can also verify the results using the kubectl tool during development. For example, if you want to monitor the pods behind a service called myservice:
kubectl get endpoints myservice --watch
This will notify you of any activity on the pods behind myservice. IMO, in Java you would have to use a polling mechanism to mimic the --watch functionality.
Well, if you use a Kubernetes API client you can just watch for changes across all pods and then fetch their details (assuming you have granted the necessary RBAC permissions).
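As a sketch, using the official kubernetes-client/java library (the exact argument list of listNamespacedPodCall changes between client releases, so check it against the version you pin; the namespace is a placeholder):

import com.google.gson.reflect.TypeToken;
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.apis.CoreV1Api;
import io.kubernetes.client.openapi.models.V1Pod;
import io.kubernetes.client.util.Config;
import io.kubernetes.client.util.Watch;

public class PodWatcher {
  public static void main(String[] args) throws Exception {
    // defaultClient() picks up ~/.kube/config outside the cluster and the
    // mounted service-account credentials when running inside a pod.
    ApiClient client = Config.defaultClient();
    // A watch is a hanging GET, so the read timeout must be disabled.
    client.setReadTimeout(0);
    CoreV1Api api = new CoreV1Api(client);

    // watch=true turns the pod LIST into a stream of ADDED / MODIFIED /
    // DELETED events. The null-heavy argument list below matches one
    // generation of the client; other releases add or drop parameters.
    Watch<V1Pod> watch = Watch.createWatch(
        client,
        api.listNamespacedPodCall("default", null, null, null, null, null,
            null, null, null, null, Boolean.TRUE, null),
        new TypeToken<Watch.Response<V1Pod>>() {}.getType());

    for (Watch.Response<V1Pod> event : watch) {
      V1Pod pod = event.object;
      System.out.printf("%s %s ip=%s annotations=%s%n",
          event.type,
          pod.getMetadata().getName(),
          pod.getStatus() == null ? null : pod.getStatus().getPodIP(),
          pod.getMetadata().getAnnotations());
    }
  }
}

Note that on the initial ADDED event the pod IP is usually not assigned yet; it shows up in a subsequent MODIFIED event once the pod has been scheduled and started. For a long-running controller, the client's informer machinery (SharedInformerFactory) wraps this same watch with caching and automatic reconnection, which is usually what you want in production.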
Among a big stack of orchestrated k8s pods, I have the following two pods of interest:
An Elasticsearch pod attached to a PV
A Tomcat-based application pod that serves as an administrator for all other pods
I want to be able to query and display very minimal/basic disk availability and usage statistics of the PV (attached to pod #1) in the app running in pod #2.
Can this be achieved without having to run a web server inside my ES pod? Since ES might be under heavy load, I'd prefer not to add a web server to it.
The PV attached to the ES pod also holds the logs, so I want to avoid any log-extraction-based solution for getting this information over to pod #2.
You need to get the PV details from the Kubernetes cluster API, wherever you are running.
Accessing the Kubernetes cluster API from within a Pod
When accessing the API from within a Pod, locating and authenticating to the API server are slightly different from the external-client case described above.
The easiest way to use the Kubernetes API from a Pod is to use one of the official client libraries. These libraries can automatically discover the API server and authenticate.
Using Official Client Libraries
From within a Pod, the recommended ways to connect to the Kubernetes API are:
For a Go client, use the official Go client library. The rest.InClusterConfig() function handles API host discovery and authentication automatically.
For a Python client, use the official Python client library. The config.load_incluster_config() function handles API host discovery and authentication automatically.
There are a number of other libraries available; please refer to the Client Libraries page.
In each case, the service account credentials of the Pod are used to communicate securely with the API server.
Reference
https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#accessing-the-api-from-within-a-pod
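As a sketch with the official Java client, this is what reading the PV's declared details from inside pod #2 could look like (Config.fromCluster() does the in-cluster discovery and authentication described above; the PV name is hypothetical, and older client releases take extra arguments on readPersistentVolume):

import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.apis.CoreV1Api;
import io.kubernetes.client.openapi.models.V1PersistentVolume;
import io.kubernetes.client.util.Config;

public class PvDetails {
  public static void main(String[] args) throws Exception {
    // fromCluster() reads the service-account token and CA bundle that
    // Kubernetes mounts into every pod, so this runs inside pod #2 as-is.
    ApiClient client = Config.fromCluster();
    CoreV1Api api = new CoreV1Api(client);

    // "es-data-pv" is a placeholder; use the PV bound to the
    // Elasticsearch pod's claim.
    V1PersistentVolume pv = api.readPersistentVolume("es-data-pv", null);
    System.out.println("capacity: " + pv.getSpec().getCapacity().get("storage"));
    System.out.println("phase:    " + pv.getStatus().getPhase());
  }
}

One caveat: the PV object only carries the declared capacity and status, not live free-space numbers; actual usage statistics come from the kubelet (for example the kubelet_volume_stats_* metrics that Prometheus scrapes), so for true disk usage you would combine this with a metrics source.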
If a service, pod, etc. that I can query using a selector changes its IP address, is there a way to get notified?
For example, say my application needs to have a list of pod IP addresses, or the IP address of a service. Since a container can go down and get recreated by Kubernetes, is there a way to get notified when containers go down and get recreated, so I can then use the Kubernetes API to fetch the latest values for the IP addresses?
This would be required for things like primary and replica databases, etc.
Does Kubernetes have webhook-type functionality that can be used to notify my app?
You can use watch API operations.
To watch all Endpoints objects:
GET /api/v1/namespaces/{namespace}/endpoints?watch=true
To watch a specific Endpoints object:
GET /api/v1/namespaces/{namespace}/endpoints?watch=true&fieldSelector=metadata.name={name}
(The legacy /api/v1/watch/... path form also works but is deprecated; watch=true on the regular list path is the current way.)
This creates a hanging HTTP GET request and you get notified whenever any of the watched objects changes.
See the Kubernetes API reference.
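To see what the hanging GET looks like from code, here is a minimal Java sketch using only java.net.http, run against kubectl proxy on 127.0.0.1:8001 so that no token handling is needed (namespace and service name are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.stream.Stream;

public class EndpointsWatch {
  public static void main(String[] args) throws Exception {
    // Assumes `kubectl proxy` is forwarding the API server locally.
    String url = "http://127.0.0.1:8001/api/v1/namespaces/default/endpoints"
        + "?watch=true&fieldSelector=metadata.name=myservice";

    HttpClient http = HttpClient.newHttpClient();
    HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();

    // The response body never completes; each line is one JSON event,
    // e.g. {"type":"MODIFIED","object":{...Endpoints...}}.
    HttpResponse<Stream<String>> response =
        http.send(request, HttpResponse.BodyHandlers.ofLines());
    response.body().forEach(System.out::println);
  }
}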
There is nothing out of the box. You would have to write a controller which watches for and gets notified of changes to a resource in the Kubernetes etcd store. The endpoint controller within Kubernetes is an example of that: it updates the Endpoints object whenever the IP of a pod behind a service changes.
Another example is ingress controllers, which watch for any change in the Endpoints objects that hold the pod IPs behind a service.
The watch API in the standard kubernetes client libraries is pretty efficient and widely used.
Is there a reason why you would need the IPs of the pods rather than deploying a StatefulSet and working with stable SRV records?
A StatefulSet looks like a better approach to get stable identities: each pod keeps a persistent DNS name (e.g. mysql-0.mysql.default.svc.cluster.local) across restarts. Master-slave topologies are a typical use case for StatefulSets.
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
I'm kinda new to Kubernetes, and while I think I understand the basics of the whole system, most of what I have read covers how to use kubectl to start a service, a deployment, and so on.
But in my use case I have a web API (built with ASP.NET Core) that takes a request, does some processing, and, depending on the input data, has to start a secondary process.
A Kubernetes Job with restart policy OnFailure seemed to be the way to implement those secondary processes, but I can't find any resources on how the web server can start this Job.
You can use the Kubernetes API to create a Job (or any Kubernetes resource) from your application running inside the cluster. You can either install kubectl inside your application's container and call it from your application code, or use a Kubernetes client library (https://github.com/kubernetes-client/csharp) to talk to the Kubernetes API server.
See the following answer for more details:
Kubernetes - Finding out how many replicas there are in a service?
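For illustration, here is the equivalent call in Java with the official client (the linked C# library mirrors this shape); the image, command, and the trailing null arguments of createNamespacedJob are assumptions that vary by client release:

import java.util.List;
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.apis.BatchV1Api;
import io.kubernetes.client.openapi.models.V1Container;
import io.kubernetes.client.openapi.models.V1Job;
import io.kubernetes.client.openapi.models.V1JobSpec;
import io.kubernetes.client.openapi.models.V1ObjectMeta;
import io.kubernetes.client.openapi.models.V1PodSpec;
import io.kubernetes.client.openapi.models.V1PodTemplateSpec;
import io.kubernetes.client.util.Config;

public class SpawnJob {
  public static void main(String[] args) throws Exception {
    // In-cluster credentials; the web service's service account needs
    // RBAC permission to create Jobs in the target namespace.
    ApiClient client = Config.fromCluster();
    BatchV1Api batch = new BatchV1Api(client);

    // generateName gives each spawned Job a unique suffix; image and
    // command are placeholders for the real secondary process.
    V1Job job = new V1Job()
        .metadata(new V1ObjectMeta().generateName("secondary-"))
        .spec(new V1JobSpec()
            .template(new V1PodTemplateSpec()
                .spec(new V1PodSpec()
                    .restartPolicy("OnFailure")
                    .containers(List.of(new V1Container()
                        .name("worker")
                        .image("worker-image:latest")
                        .command(List.of("/app/process", "--input", "/data/file")))))));

    batch.createNamespacedJob("default", job, null, null, null);
  }
}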
I have a use case in which a front-end application sends a file to a back-end service for processing, and a back-end service pod can only process one request at a time. If multiple requests come in, the service should autoscale and send each request to a new pod.
So I am looking for a way to spawn a new pod for each request; after the back-end pod finishes processing, it returns the result to the front-end service and destroys itself.
That way each pod only ever processes a single request.
I explored HPA autoscaling but did not find a suitable way to do this.
I'm open to using a custom metrics server for this, and even to using Jobs if they can fulfill the above scenario.
So if someone has knowledge of, or has tackled, the same use case, please share so that I can try that solution.
Thanks in advance.
There's not really anything built-in for this that I can think of. You could create a service account for your app that has permissions to create pods, and then build the spawning behavior into your app code directly. If you can get metrics about which pods are available, you could use HPA with Prometheus to ensure there is always at least one unoccupied backend, but that depends on what kind of metrics your stuff exposes.
As already said, there is no built-in way of doing this; you need to find a custom way to achieve it.
One solution is to use a service account and an HTTP request to the API server to create the back-end pod as soon as the request is received by the front-end pod, check the status of the back-end pod, and, once it is up, forward the request to it.
A second way I can think of is to use some temporary storage (a DB or a hostPath volume) and write a cron job on your master that polls that storage and, depending on status, spawns a pod running the job container.
I have a service created as a headless service that is intended to map to a range of external IP addresses provided by a separate k8s endpoints object. If one of the external nodes were to die, is there any way for me to remove the specific endpoint from the service automatically?
You can use kubectl patch to edit whatever object you want.
Since it's an external IP and Kubernetes is therefore not aware of it, you will need to provide the mechanism to automate the deletion, like using a job you run periodically or some sort of callback.
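For example, assuming the Endpoints object is named my-external-service and the dead address is the second entry of the first subset (both hypothetical), the periodic job could run a JSON patch like:
kubectl patch endpoints my-external-service --type=json -p '[{"op":"remove","path":"/subsets/0/addresses/1"}]'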
I'm thinking of deploying simple HAProxy pods with their configuration taken either from a ConfigMap (the list of IPs) or directly from the other, external service, so that health checks can be added. The config change might also be automated by confd inside the HAProxy container. These HAProxy pods would then be exposed as a Service in Kubernetes to the other apps.