I'm fairly new to Kubernetes. I think I understand the basics of the system, but most of what I've read covers how to use kubectl to start a Service or Deployment.
In my use case I have a web API (built in ASP.NET Core) that takes a request, does some processing, and, depending on the input data, has to start a secondary process.
A Kubernetes Job with restart policy OnFailure seemed like the way to implement those secondary processes, but I can't find any resources on how the web server can start such a Job.
You can use the Kubernetes API to create a Job (or any Kubernetes resource) from your application running inside the cluster. You can either install kubectl inside your application's container and call it from your application code, or use a Kubernetes client library (https://github.com/kubernetes-client/csharp) to talk to the Kubernetes API server.
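For illustration, here is a minimal sketch of that approach using the official Python client; the linked C# client follows the same resource model. The Job name, image, and arguments are placeholders, and it assumes the pod's service account is allowed to create Jobs:

    # Minimal sketch (not production code): create a Job from inside the
    # cluster with the official Python client. Name, image, and args are
    # placeholders.
    from kubernetes import client, config

    config.load_incluster_config()  # authenticate as the pod's service account

    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="secondary-process"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="OnFailure",
                    containers=[
                        client.V1Container(
                            name="worker",
                            image="myregistry/worker:latest",  # placeholder
                            args=["--input", "some-data"],     # placeholder
                        )
                    ],
                )
            )
        ),
    )

    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)

Note that the service account your API pod runs under needs RBAC permission to create Jobs in the target namespace.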
See the following answer for more details:
Kubernetes - Finding out how many replicas there are in a service?
I need to deploy a GPU-intensive task on GCP. I want to use a Node.js Docker image and, within that container, run a Node.js server that listens for HTTP requests and runs a Python image-processing script on demand (every time a new HTTP request containing the images to be processed is received). My understanding is that I need to deploy a load balancer with a static public IP address in front of the K8s cluster, which then builds/launches containers every time a new HTTP request comes in, and then destroys the container once processing is completed. Is container re-use not a concern? I have never worked with K8s before and I want to understand how it works; after reading the GKE documentation this is how I imagine the architecture. What am I missing here?
runs a Python image-processing script on demand (every time a new HTTP request containing the images to be processed is received)
This can be solved on Kubernetes, but it is not a very common kind of workload.
The project that supports your problem best is Knative, with its per-request autoscaler. Google Cloud Run is the easiest way to use it, but if you want to run this within your own GKE cluster, you can enable Knative there.
That said, you can also design your Node.js service to integrate with the Kubernetes API server to create Jobs, but it is not a good design to have a common workload talk to the API server directly. Knative or Google Cloud Run is a better fit here.
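If you do enable Knative Serving in your own cluster, the request-driven workload is described by a Knative Service object. A rough sketch, assuming Knative Serving is installed; the name and image below are placeholders:

    # Sketch: create a Knative Service (scales per request, down to zero)
    # through the Kubernetes CustomObjectsApi. Assumes Knative Serving is
    # installed; name and image are placeholders.
    from kubernetes import client, config

    config.load_kube_config()

    knative_service = {
        "apiVersion": "serving.knative.dev/v1",
        "kind": "Service",
        "metadata": {"name": "image-processor"},
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {"image": "gcr.io/my-project/image-processor:latest"}
                    ]
                }
            }
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="serving.knative.dev",
        version="v1",
        namespace="default",
        plural="services",
        body=knative_service,
    )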
I'm building a test automation tool that needs to launch a set of tests and collect logs and results. My plan is to build containers with the necessary dependencies for the test framework and launch them in Kubernetes.
Is there any application that abstracts the complexity of managing the pod lifecycle and provides a simple API for this use case? Basically, my test scheduler needs to deploy a container in Kubernetes, launch a test, and collect the log files at the end.
I already looked at Knative and Kubeless; they seem complex and may over-complicate what I'm trying to do here.
Based on the information you provided, all I can recommend is the Kubernetes API itself.
You can create a pod with it, wait for it to finish, and gather its logs. If that's all you need, you don't need any other fancy applications. Here is a list of k8s client libraries.
If you don't want to use a client library, you can always use the REST API directly.
If you are not sure how to use the REST API, run kubectl commands with the --v=10 flag: the debug output shows all requests between kubectl and the API server, which makes a handy reference.
Kubernetes also provides detailed documentation for the k8s REST API.
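To make that concrete, a rough sketch of the create/wait/collect-logs flow with the official Python client (pod name and image are placeholders):

    # Sketch: run a test pod to completion and collect its logs using the
    # official Python client. Pod name and image are placeholders.
    import time
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    pod = client.V1Pod(
        api_version="v1",
        kind="Pod",
        metadata=client.V1ObjectMeta(name="test-run-1"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[client.V1Container(name="tests", image="myrepo/tests:latest")],
        ),
    )
    core.create_namespaced_pod(namespace="default", body=pod)

    # Poll until the pod reaches a terminal phase.
    while True:
        phase = core.read_namespaced_pod(name="test-run-1", namespace="default").status.phase
        if phase in ("Succeeded", "Failed"):
            break
        time.sleep(5)

    logs = core.read_namespaced_pod_log(name="test-run-1", namespace="default")
    print(logs)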
Try looking at https://microk8s.io/; it was built for exactly these purposes.
And you can talk to the API server via the REST API, the same as in every k8s cluster.
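As an illustration of the raw REST route, a request from inside a pod can authenticate with the mounted service-account token (the paths below are the standard in-cluster defaults):

    # Sketch: list pods via the raw REST API from inside a pod, authenticating
    # with the mounted service-account token. Standard in-cluster paths assumed.
    import requests

    TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
    CA_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

    with open(TOKEN_PATH) as f:
        token = f.read()

    resp = requests.get(
        "https://kubernetes.default.svc/api/v1/namespaces/default/pods",
        headers={"Authorization": f"Bearer {token}"},
        verify=CA_PATH,
    )
    resp.raise_for_status()
    for item in resp.json()["items"]:
        print(item["metadata"]["name"])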
I have an application that has 14 different services. Some of the services depend on other services. I am trying to find a good way to deploy them in the right sequence without using thread sleeps.
Is there a way to give Kubernetes a service deployment tree, e.g. don't deploy service B or service C until service A is in a container and its status is Running?
Is there a good way to use kubectl to poll service A, so I can loop until I know it's up and running and then run the scripts to deploy services B and C?
This is not how Kubernetes works. You can kind of shim it with an initContainer that blocks until the dependencies are available (usually via kubectl in a while loop, but if you get fancy you can try using --wait).
But the expectation is that you design your applications to be "eventually consistent" when it comes to inter-service dependencies. In practical terms, this usually means just crashing if a dependent service isn't available; the pod will simply be restarted until things succeed.
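As a sketch of the initContainer shim mentioned above, the blocking step can be as simple as a script like this (the health URL is a placeholder for whatever service A exposes):

    # Sketch: a script an initContainer could run to block until a dependent
    # service answers its health check. The URL is a placeholder.
    import time
    import urllib.request

    HEALTH_URL = "http://service-a/healthz"  # placeholder dependency endpoint

    while True:
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=2) as resp:
                if resp.status == 200:
                    break
        except OSError:
            pass  # not up yet; keep waiting
        time.sleep(2)
    print("dependency is up")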
You can use a readiness probe that hits a health-check API of the application being deployed, and inside that health-check handler you can test the other services' availability by hitting their APIs or Services.
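For instance, a minimal sketch in Flask (the downstream URL is a placeholder): the readiness endpoint keeps reporting unready until the other service answers, so the pod receives no traffic before its dependency is up:

    # Sketch: a readiness endpoint that only reports ready once a dependent
    # service is reachable. App and downstream URL are placeholders.
    import requests
    from flask import Flask

    app = Flask(__name__)
    DEPENDENCY_URL = "http://service-a/healthz"  # placeholder

    @app.route("/readyz")
    def readyz():
        try:
            if requests.get(DEPENDENCY_URL, timeout=2).ok:
                return "ready", 200
        except requests.RequestException:
            pass
        return "waiting for service-a", 503

You would then point the Deployment's readinessProbe at /readyz.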
I'm using Kubernetes and I would like to set up workers. One of my containers hosts an API using Flask, I have an algorithm in another container (same pod; I don't know if I should leave it there), and other scripts are also in separate containers.
I want to link all of these: when I receive a request on the API, call the other containers depending on the request and get the result back.
I don't know how to do that with multiple containers and Kubernetes.
Until now I've been using the RQ library for Python to parallelize, but that was on Heroku without Kubernetes (I'm migrating to Azure at the moment) and I don't know how it is managed behind the scenes.
Thank you.
Follow the reference below to set up a Kubernetes cluster using kubeadm:
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
Using the 'kubeadm join' command you should be able to add worker nodes to the master; the link above has the steps to join the workers to the master.
If you are using Azure, you can try exploring AKS. It works out of the box. You just need to configure kubectl and you will be good to go.
Regarding deploying multiple microservices (APIs): you can deploy each microservice as a separate k8s Deployment using kubectl and expose it with a Service. That way, they can communicate with each other through the exposed endpoints (APIs) or via a message queue.
Here is a quick guide that may help: https://dzone.com/articles/quick-guide-to-microservices-with-kubernetes-sprin
Typically you should use only one container per pod. Multiple containers per pod are possible, but they are usually used for sidecars, not for additional APIs.
You expose your services using Kubernetes Services; there is no need to run everything on a different port if you don't want to.
A minimal setup for typical web-API calls would look something like this (if you expose your API Service as a public LoadBalancer you don't necessarily need an Ingress):
Client -> (Ingress) -> API Service -> API Deployment pod(s) -> internal Services -> Deployment pods.
You can access your internal services from within your cluster using http(s)://servicename[:custom-port]
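For example, from the Flask container, calling another service is just an HTTP request to its Service name (a sketch; the service name and route are placeholders):

    # Sketch: the API container calling another in-cluster service by its
    # Service DNS name. Service name and route are placeholders.
    import requests

    resp = requests.post(
        "http://algorithm-service/run",  # resolves via cluster DNS
        json={"input": "some-data"},
        timeout=30,
    )
    print(resp.json())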
On the other hand, if you simply use Flask to forward API calls to other services, you might want to replace it with an Ingress controller that does all the routing for you.
I'd like to know if it's possible to get information from a pod right after it's created.
I'm developing a Kubernetes controller process that reacts whenever a pod is created in the cluster.
When a pod has just been created, the service has to be able to get some basic information from the pod, for example its IP and annotations.
I'd like to use a Java service.
Any ideas?
You can use the Kubernetes api-server to get information regarding endpoints (Services). Kubernetes exposes its API via REST, so you can use anything to communicate with it. Also, verify the results using the kubectl tool while developing. For example, if you want to monitor the pods related to a service, say myservice:
kubectl get endpoints <myservice_pod> --watch
This will notify you of any activity on the pods related to myservice. IMO, in Java you have to use a polling mechanism to mimic the --watch functionality.
Well, if you use a Kubernetes API client you can just watch for changes across all pods and then get their details (assuming you have the required RBAC permissions).
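A sketch of that watch loop with the Python client for illustration (the official Java client offers an equivalent Watch mechanism, so polling isn't strictly required):

    # Sketch: watch pod events cluster-wide and read basic details (IP,
    # annotations) as pods appear. Assumes RBAC permission to watch pods.
    from kubernetes import client, config, watch

    config.load_incluster_config()
    v1 = client.CoreV1Api()

    w = watch.Watch()
    for event in w.stream(v1.list_pod_for_all_namespaces):
        pod = event["object"]
        if event["type"] == "ADDED":
            print(
                pod.metadata.namespace,
                pod.metadata.name,
                pod.status.pod_ip,        # may be None until the pod is scheduled
                pod.metadata.annotations,
            )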