I'm using Kubernetes and I would like to set up workers. One of my Docker containers hosts an API using Flask; I have an algorithm in another container (in the same pod, though I don't know if I should leave it there), and other scripts that are also in separate containers.
I want to link all of these together: when the API receives a request, it should call the other containers depending on the request and collect their results.
I don't know how to do that with multiple containers, and therefore with Kubernetes.
Until now I've been using the RQ library for Python to parallelize work, but that was on Heroku without Kubernetes (I'm migrating to Azure at the moment), and I don't know how it manages all this behind the scenes.
Thank you.
Follow the reference below to set up a Kubernetes cluster using kubeadm:
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
Using the 'kubeadm join' command you should be able to add worker nodes to the master; the linked guide includes the steps for joining workers to the master.
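As a rough illustration (the address, token, and hash below are placeholders; 'kubeadm init' prints the real join command for your cluster):

    # On the master: initialize the control plane; this prints a join command.
    sudo kubeadm init

    # On each worker: join the cluster with the token and CA cert hash that
    # 'kubeadm init' printed (placeholder values shown here).
    sudo kubeadm join 10.0.0.1:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>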
If you are using Azure, you can try exploring AKS. It works out of the box: you just need to configure kubectl and you will be good to go.
Regarding deploying multiple microservices (APIs), you can deploy each microservice as a separate k8s Deployment using kubectl and expose it using a Service. This way the services can communicate with each other through their exposed endpoints (API) or through a message queue.
Here is a quick guide you can take help from: https://dzone.com/articles/quick-guide-to-microservices-with-kubernetes-sprin
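As a minimal sketch of that idea (the names, image, and ports below are assumptions for illustration, not taken from the question), one microservice as a Deployment plus an internal Service might look like:

    # Hypothetical Flask API microservice: a Deployment and a Service.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: flask-api
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: flask-api
      template:
        metadata:
          labels:
            app: flask-api
        spec:
          containers:
          - name: flask-api
            image: myregistry/flask-api:latest   # placeholder image
            ports:
            - containerPort: 5000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: flask-api
    spec:
      selector:
        app: flask-api
      ports:
      - port: 80          # port other pods use
        targetPort: 5000  # container port

Apply it with kubectl apply -f flask-api.yaml; other pods in the cluster can then reach the API at http://flask-api.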
Typically you should use only one container per pod. Multiple containers per pod are possible, but they are usually reserved for sidecars, not for additional APIs.
You expose your services using Kubernetes Services; there is no need to run everything on a different port if you don't want to.
A minimal setup for typical web API calls would look something like this (if you expose your API service as a public LoadBalancer, you don't necessarily need an Ingress):
Client -> (Ingress) -> API service -> API deployment pod(s) -> internal services -> deployment pods.
You can access your internal services from within your cluster at http(s)://servicename[:custom-port].
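For example, here is a minimal sketch of a Flask handler calling such an internal service (the service name "algorithm", its port, and the /run path are hypothetical):

    # Hypothetical sketch: the Flask API forwards work to an internal
    # "algorithm" Service, reachable via its Kubernetes Service DNS name.
    import requests
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/process", methods=["POST"])
    def process():
        # "algorithm" is the assumed name of the Service in front of the
        # algorithm container; 80 is its assumed service port.
        resp = requests.post("http://algorithm:80/run",
                             json=request.get_json(), timeout=30)
        return jsonify(resp.json()), resp.status_code

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)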
On the other hand, if you only use Flask to forward API calls to other services, you might want to replace it with an Ingress controller that does all the routing for you.
I'm kind of new to Kubernetes. I think I understand the basics of the whole system, but most of what I have read was about how to use kubectl to start a Service, a Deployment, and so on.
In my use case, however, I have a web API running (built in ASP.NET Core) that takes a request, does some processing, and, depending on the input data, has to start a secondary process.
A Kubernetes Job with restart policy OnFailure seemed to be the way to implement those secondary processes, but I can't find any resources on how the web server can be used to start such a Job.
You can use the Kubernetes API to create a Job (or any Kubernetes resource) from your application running inside the cluster. You can either install kubectl inside your application's container and call it from your application code, or use a Kubernetes client library (https://github.com/kubernetes-client/csharp) to talk to the Kubernetes API server; a sketch follows after the link below.
See the following answer for more details:
Kubernetes - Finding out how many replicas there are in a service?
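The library linked above is the C# client; as a rough sketch of the same idea using the official Python client (all names and the image are placeholders), creating a Job from inside the cluster could look like:

    # Sketch: create a Job from an app running inside the cluster.
    # Requires: pip install kubernetes
    from kubernetes import client, config

    config.load_incluster_config()  # authenticate via the pod's service account

    job = client.V1Job(
        metadata=client.V1ObjectMeta(name="secondary-process"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="OnFailure",
                    containers=[client.V1Container(
                        name="worker",
                        image="myregistry/worker:latest",  # placeholder
                        args=["--input", "some-data"],     # placeholder
                    )],
                )
            )
        ),
    )

    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)

Note that the pod's service account needs RBAC permission to create Jobs for this to work.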
We are now on our journey to break our monolith (an on-prem package (rpm/ova)) into services (Docker containers).
In the process we are evaluating Envoy/Istio as our communication and security layer. It looks great when running as a sidecar in k8s, or with each service on a separate machine.
Since we are going to deliver several services within one machine, and can't deliver them within k8s, I'm not sure whether we can use Envoy. I didn't find any reference on deploying Envoy in other ways; are there additional deployment methods I can use?
You can run part of your services on Kubernetes and part on VMs. Envoy itself does not require Kubernetes; it can run as a standalone process on each machine with a static configuration.
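As a minimal sketch of that standalone mode (the listener port, route prefix, and backend address are assumptions), an Envoy static config routing traffic to one co-located service could look like:

    # Minimal static Envoy (v3 API) config for running Envoy as a plain
    # process/container on a VM, outside Kubernetes. All values are placeholders.
    static_resources:
      listeners:
      - name: ingress
        address:
          socket_address: { address: 0.0.0.0, port_value: 10000 }
        filter_chains:
        - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              stat_prefix: ingress_http
              route_config:
                virtual_hosts:
                - name: local
                  domains: ["*"]
                  routes:
                  - match: { prefix: "/service-a" }
                    route: { cluster: service_a }
              http_filters:
              - name: envoy.filters.http.router
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
      clusters:
      - name: service_a
        type: STRICT_DNS
        connect_timeout: 1s
        load_assignment:
          cluster_name: service_a
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address: { address: 127.0.0.1, port_value: 8080 }

You would run one Envoy per machine in front of the services delivered there, for example with envoy -c envoy.yaml.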
It's been a long time since I last came here, and I hope you're doing well :)
These days I have the pleasure of working with Kubernetes! So let's start :)
[THE EXISTING SETUP]
I have an operational Kubernetes cluster that I work with every day. It consists of several applications, one of which is of particular interest to us: the web management interface.
I currently have one master and four nodes in my cluster.
For my web application, each pod contains 3 containers: web / mongo / filebeat, and for technical reasons we decided to assign a maximum of 5 users to each web pod.
[WHAT I WANT]
I want to deploy a web pod on each node (web0, web1, web2, web3), which I can already do, and have each session (1 session = 1 user) distributed across those pods as described in the image.
For now, all HTTP requests are processed by web0.
[QUESTIONS]
Am I forced to go through an external load balancer (HAProxy)?
Or can I use an internal load balancer by configuring a Service?
Does anyone have experience with the setup described above?
Thanks in advance to anyone who can help me with this :)
This generally depends on how and where you've deployed your Kubernetes infrastructure, but you can do this natively with a few options.
Firstly, you'll need to scale your web deployment. This is very simple to do:
    kubectl scale --current-replicas=2 --replicas=3 deployment/web
If you've deployed into a cloud provider (such as AWS using kops, or GKE) you can use a Service; just specify its type as LoadBalancer, and the Service will spread your users' sessions across the pods.
Another option is to use an Ingress. For this you'll need an Ingress controller, such as the nginx-ingress-controller, which is the most feature-rich and most widely deployed one.
Both of these options will automatically load-balance your incoming application sessions, but they may not necessarily do it in the order you've described in your image; it will be random across the available web pods.
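A sketch of such a Service (the name, labels, and ports are assumptions; sessionAffinity is optional, but it pins each client IP to one pod, which gets closer to the per-user distribution you describe):

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: LoadBalancer
      selector:
        app: web                  # must match your web pods' labels (assumed)
      sessionAffinity: ClientIP   # optional: keep a client on the same pod
      ports:
      - port: 80
        targetPort: 8080          # placeholder container port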
I have deployed two pods with hostNetwork set to true. When the pods are deployed on the same OpenShift node, everything works fine, since they can discover each other using the node IP.
When the pods are deployed on different OpenShift nodes, they can't discover each other: I get 'no route to host' when I point one pod at the other using the node IP. How can I fix this?
The uswitch/kiam (https://github.com/uswitch/kiam) service is a good example of such a use case.
It has an agent process that runs on the host network of all worker nodes, because it modifies a firewall rule to intercept API requests (from containers running on the host) to the AWS API.
It also has a server process that runs on the host network to access the AWS API, since the AWS API is on a subnet that is only available to the host network.
Finally, the agent talks to the server using gRPC, which connects directly to one of the IP addresses returned when looking up kiam-server.
So you have pods of the agent deployment running on the host network of node A trying to connect to the kiam server running on the host network of node B, which just does not work.
Furthermore, this is a private service; it should not be reachable from outside the network.
If you want the two containers to share the same physical machine and take advantage of loopback for quick communication, then you would be better off defining them together as a single pod with two containers.
If the two containers are meant to float across a larger cluster and be more loosely coupled, then I'd recommend taking advantage of the Service construct within Kubernetes (under OpenShift) and using that for discovery.
Services are documented at https://kubernetes.io/docs/concepts/services-networking/service/, and along with an internal DNS service (if implemented - common in Kubernetes 1.4 and later) they provide a means to let Kubernetes manage where things are, updating an internal DNS entry in the form of <servicename>.<namespace>.svc.cluster.local. So for example, if you set up a Pod with a service named "backend" in the default namespace, the other Pod could reference it as backend.default.svc.cluster.local. The Kubernetes documentation on the DNS portion of this is available at https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
This also avoids the hostNetwork: true complication and lets OpenShift (or more specifically Kubernetes) manage the networking.
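As a quick illustration (the service name "backend" comes from the example above; the namespace and port are assumptions), any other pod in the cluster could then reach it by DNS name:

    # From inside any pod in the cluster (port 8080 is a placeholder):
    curl http://backend.default.svc.cluster.local:8080/

    # Or test the lookup from a throwaway pod:
    kubectl run -it --rm debug --image=busybox --restart=Never -- \
        wget -qO- http://backend.default.svc.cluster.local:8080/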
If you absolutely have to use hostNetwork, you should create a router and use it for the communication between pods. You can create an HAProxy-based router in OpenShift; see https://docs.openshift.com/enterprise/3.0/install_config/install/deploy_router.html
I have a headless Service that is intended to map to a range of external IP addresses provided by a separate k8s Endpoints object. If one of the external nodes were to die, is there any way for me to remove its specific endpoint from the Service automatically?
You can use kubectl patch to edit whatever object you want.
Since it's an external IP, Kubernetes is not aware of its health, so you will need to provide the mechanism to automate the deletion yourself, for example with a job you run periodically or some sort of callback.
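For instance (the Endpoints object name and the indices are placeholders), a JSON patch that removes the second address from the first subset could look like:

    # Remove the address at index 1 from subset 0 of the "external-backends"
    # Endpoints object (name and indices are hypothetical):
    kubectl patch endpoints external-backends --type=json \
        -p='[{"op": "remove", "path": "/subsets/0/addresses/1"}]'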
I'm thinking of deploying simple HAProxy pods whose configuration is taken either from a ConfigMap (a list of IPs) or directly from the other external service, so that I can add health checks. Config changes might also be automated by confd inside the HAProxy container. These HAProxy pods would then be exposed as a Service in Kubernetes to the other apps.