Spawning pods dynamically at runtime from another pod - kubernetes

Is it possible for a pod to act as a spawner? When someone calls the API service in the first pod, it should spawn a new pod. This seems like a very simple thing, but I can't really figure out where to look in the docs. Someone already mentioned using operators, but I don't really see how that would help me.
I'm currently migrating a project that uses a Docker container as a spawner to create other containers. I somehow need this principle to work with Kubernetes pods.
Kind regards

Have you looked into Kubespawner, part of JupyterHub?
I have been trying to find alternatives to Kubespawner, and Kubernetes Operators might be the answer: https://kubernetes.io/docs/concepts/extend-kubernetes/operator/
GL
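Whichever route you take (Kubespawner, an operator, or your own small API service), the underlying mechanism is the same: the first pod calls the Kubernetes API with credentials that allow it to create pods. A rough sketch of that plumbing, where the namespace, ServiceAccount name, pod name, and image are all placeholders:
# Grant a ServiceAccount permission to create pods (run once by an admin)
kubectl create serviceaccount spawner -n my-namespace
kubectl create role pod-creator -n my-namespace --verb=create,get,list --resource=pods
kubectl create rolebinding spawner-pod-creator -n my-namespace --role=pod-creator --serviceaccount=my-namespace:spawner
# From inside a pod running under that ServiceAccount, create another pod via the API
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -X POST https://kubernetes.default.svc/api/v1/namespaces/my-namespace/pods \
  -d '{"apiVersion":"v1","kind":"Pod","metadata":{"name":"spawned-worker"},"spec":{"containers":[{"name":"worker","image":"busybox","command":["sleep","3600"]}]}}'
The spawning pod needs serviceAccountName: spawner in its own spec so that the mounted token carries the pod-creation permission. Client libraries (Go, Java, Python) wrap this same API call if you would rather not craft the JSON by hand.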

Related

Can `oc create` behave in a "transactional"/"atomic" manner when asked to create _multiple_ objects on the cluster?

I have written a number of related OKD object definitions, each in its own YAML file. These together essentially make up an application deployment. I am doing something like the following to install my application on an OKD cluster, which works to my satisfaction when none of the objects already exist [on the cluster]:
oc create -f deploymentconfig.yaml,service.yaml,route.yaml,configmap.yaml,secret.yaml
However, if some of the objects oc create is asked to create already exist on the cluster, oc create refuses to re-create them (naturally), but it will still have created all the other ones that did not exist.
This isn't ideal when the objects I am creating on the cluster were made to behave "in tandem" and are parts of an application where they depend on one another -- the config map, for instance, is pretty much a hard requirement, as without it the container will fail to start properly (it lacks configuration data provided through a mounted volume).
I'd like to know: can oc create be made to behave transactionally, so that either all of the objects specified on the command line are installed, or none of them are if some already exist or if errors occur?
I am aware OKD has template faculties and other features that may greatly help with application deployment, so if I am putting too much (misplaced) faith in oc create here, I'll take an alternative solution if oc create by design does not do "transactions". This is just me trying what seems simple from where I currently stand -- not being much of an OKD expert.
Unfortunately, there is no such thing.
In Kubernetes (and therefore in OpenShift), manifests are declarative, but they are declarative per resource.
You can use oc apply or oc replace to create or modify a single resource atomically, but the same cannot be done across many resources, because Kubernetes does not treat them as a unit.
Even if you use a Template or a List, some resources may fail and you will end up with only part of the whole.
For this kind of thing, Helm is much more versatile and works the way you want with the --atomic flag.
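For illustration, a typical invocation (the release name, chart path, and timeout here are placeholders):
helm install my-release ./my-chart --atomic --timeout 5m
With --atomic, if any resource in the chart fails to install, Helm rolls the whole release back, so you either get everything or nothing.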

Show more logs in kubernetes dashboard

I am using the official Kubernetes dashboard, version kubernetesui/dashboard:v2.4.0, to manage my cluster, and I've noticed that when I select a pod and look at its logs, the amount of displayed log output is quite short -- something like 50 lines.
If an exception occurs, the logs are pretty much useless because the original cause is hidden by lots of other lines. I would have to download the logs or open a shell on the Kubernetes server and use kubectl logs to see what's going on.
Is there any way to configure the dashboard in a way so that more lines of logs get displayed?
AFAIK, it is not possible with kubernetesui/dashboard:v2.4.0. On the list of dashboard arguments that allow for customization, there is no option to change the number of log lines displayed.
As a workaround you can use a Prometheus + Grafana combination, or Kibana from the ELK stack, as separate dashboards for logs/metrics; however, depending on the size and scope of your k8s cluster, that might be overkill. There are also alternative open-source k8s dashboards such as Skooner (formerly known as k8dash), though I am not sure whether it offers better visibility into workload logs.
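In the meantime, pulling a longer tail directly with kubectl remains the practical workaround; for example (pod, deployment, and namespace names are placeholders):
kubectl logs my-pod -n my-namespace --tail=500
kubectl logs my-pod -n my-namespace --previous
kubectl logs deployment/my-app -n my-namespace --since=1h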
If anyone is interested: as the feature I was looking for does not exist yet, I have submitted a feature request on GitHub. You can see it here: https://github.com/kubernetes/dashboard/issues/6700

create kubernetes java client controller to watch pods

I want to use the Kubernetes Java client to create a controller (using a shared informer) to watch for create, update, and delete events for pods in a specific namespace. I've found some examples that watch deployments and list nodes, but cannot find examples for pods. Are there any examples available?
Maybe you can try the following link to understand what is necessary to write a custom controller, and then try to write your own in Java.
https://developers.redhat.com/blog/2019/10/07/write-a-simple-kubernetes-operator-in-java-using-the-fabric8-kubernetes-client#writing_a_simple_podset_operator_in_java
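If it helps as a starting point, here is a rough sketch of a namespaced pod informer using the fabric8 client that the linked post is based on; the namespace and resync interval are placeholders, and exact builder/class names may differ slightly between client versions:
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.informers.ResourceEventHandler;

public class PodInformerExample {
    public static void main(String[] args) throws InterruptedException {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // inform() creates and starts a shared informer scoped to one namespace
            client.pods().inNamespace("my-namespace").inform(new ResourceEventHandler<Pod>() {
                @Override
                public void onAdd(Pod pod) {
                    System.out.println("ADDED " + pod.getMetadata().getName());
                }
                @Override
                public void onUpdate(Pod oldPod, Pod newPod) {
                    System.out.println("UPDATED " + newPod.getMetadata().getName());
                }
                @Override
                public void onDelete(Pod pod, boolean deletedFinalStateUnknown) {
                    System.out.println("DELETED " + pod.getMetadata().getName());
                }
            }, 30 * 1000L); // resync every 30 seconds
            Thread.currentThread().join(); // keep the process alive so the informer keeps watching
        }
    }
}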

How to set MaxRevisionTimeoutSeconds in Knative?

I have deployed a service using Cloud Run on GKE, which uses Knative as an abstraction over k8s. The default MaxRevisionTimeoutSeconds is set to 600s in the Knative default config, but according to this PR it is customizable.
I couldn't find anything in the official Knative documentation; can anybody help me out here?
UPDATE:
After digging a bit more into the Knative source code and documentation, it looks like MaxRevisionTimeoutSeconds is defined in the ConfigMap config-defaults, so that is what has to be updated with a custom value.
From this it looks like we can use something called an operator to modify the ConfigMap, but that did not work, probably because GCP does not use the operator to install the Knative components. Anyway, I went ahead and installed the operator and then used the KnativeServing resource to overwrite config-defaults, but this also did not work when I tried re-deploying the service.
The next option is to directly edit config-defaults using kubectl edit. I tried this too but encountered weird behavior: after editing the YAML, when I used kubectl describe to check the changed value, it sometimes showed the modified value, sometimes the old value, and sometimes didn't show that particular key-value pair at all. It also doesn't work when re-deploying the service after this edit.
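For reference, the change I was attempting boils down to patching config-defaults in the knative-serving namespace, roughly like this (the timeout values are just examples; the key names come from Knative's upstream config-defaults):
kubectl patch configmap/config-defaults -n knative-serving --type merge \
  -p '{"data":{"max-revision-timeout-seconds":"1200","revision-timeout-seconds":"900"}}'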
If anyone can help me with this, it would be really great.
MaxRevisionTimeoutSeconds is a cluster-global setting which enforces the max value for TimeoutSeconds on each Revision. This value exists so that cluster administrators can set upper bounds on the amount of time a single HTTP request can be in the system. Knowing an upper bound can be useful when configuring graceful shutdown settings on the HTTP routing components to prevent dropped requests during upgrades.
It's possible that Cloud Run on GKE has overridden these configurations so that they can upgrade the underlying Istio and Knative components on a predictable schedule. (If you have a 10% upgrade budget and it takes 10m to drain a component, your minimum upgrade time is probably around 110m, taking into account additional scheduling / image fetch / startup time.)

Filter Kubernetes API by pod name

I have a Kubernetes cluster running in minikube, and I want to filter out all Logstash pods via the Kubernetes API. The API documentation is a bit confusing; I did some research and found that I can use something like the following, but I have been unsuccessful so far:
localhost:8000/api/v1/namespaces/default/pods?labelSelector=logstash
Any ideas how to retrieve this? Any help would be really appreciated.
Any ideas how to retrieve this?
Since labels are defined as <name>=<value> pairs, you need to supply both, as described in the documentation (see the API section).
As an example, supposing you have:
namespace: default
labels on pods you want to select:
role=ops
application=logstash
kubectl proxy runs on localhost:8000
Then your API call would look like this:
curl localhost:8000/api/v1/namespaces/default/pods?labelSelector=role%3Dops,application%3Dlogstash
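For comparison, the same selector through kubectl (assuming those same labels) would be:
kubectl get pods -n default -l role=ops,application=logstash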