I want to deploy my application in the cloud using a Kubernetes-based deployment. It consists of 3 layers: Kafka, Ignite (as DB and processing) and Python (ML engine).
From the Kafka layer we get a data stream as input, which is passed to Ignite for processing (feature engineering). After processing, the data is passed to the Python
server for further ML predictions. How can I break this monolithic application into microservices on Kubernetes?
Also, can using Istio provide some advantage?
You can use the bitnami/kafka image from Bitnami on Docker Hub if you want a pre-built image.
Push the image to your container registry with the gcloud command:
gcloud docker -- push [your image container registry path]
Deploy the images using the UI or the gcloud command.
Expose the ports (2181, 9092-9099), or whichever ones are exposed in the pulled image, after the deployment on Kubernetes.
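For example, a rough sketch of a Service that exposes those ports; it assumes the Kafka pods carry the label app: kafka, so adjust the selector and service name to whatever the image you deployed actually uses:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: kafka
spec:
  selector:
    app: kafka          # adjust to match the labels on your Kafka pods
  ports:
    - name: broker
      port: 9092
      targetPort: 9092
    - name: zookeeper
      port: 2181
      targetPort: 2181
EOF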
Here is the link to the Ignite image on Google Compute; you just have to deploy it on Kubernetes Engine and expose the appropriate ports.
For Python, you just have to build your Python app using a Dockerfile, as Ignacio suggested.
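A minimal sketch of that build-and-push step (the project ID and image name are placeholders):
# build the Python app image from its Dockerfile in the current directory
docker build -t gcr.io/[your-project-id]/ml-engine:v1 .
# push it to the registry, same as the gcloud command above
gcloud docker -- push gcr.io/[your-project-id]/ml-engine:v1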
It is possible, and in fact those tools are easy to deploy in Kubernetes. First, you need to gain some expertise in Kubernetes basics, especially StatefulSets and persistent volumes, since Kafka and Ignite are stateful components.
To deploy a Kafka cluster in Kubernetes, follow the instructions from this repository: https://github.com/Yolean/kubernetes-kafka
There are other alternatives, but this is the only one I've tested in production environments.
I have no experience with Ignite, but these docs provide a step-by-step guide. Maybe someone else can share other resources.
About Python, just dockerize your ML model like any other Python app. On the official Docker image page for Python you'll find a basic Dockerfile to do that. Once you have your Docker image pushed to a registry, just create a YAML file describing the deployment and apply it to Kubernetes.
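As a hedged example, here is a minimal Deployment applied inline; the image path, port 5000 and replica count are placeholders for your own values:
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-engine
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ml-engine
  template:
    metadata:
      labels:
        app: ml-engine
    spec:
      containers:
        - name: ml-engine
          image: gcr.io/[your-project-id]/ml-engine:v1   # the image you pushed to your registry
          ports:
            - containerPort: 5000                        # whatever port your Python server listens on
EOF
You can expose it to the other components with a Service in the same way.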
As an alternative for the last step, you can use Draft to dockerize and deploy Python code.
Good luck!
I have some team members who don't have permission to pull Docker images locally on their machines and run local instances of MongoDB and mongo-express.
So I am planning to deploy MongoDB and mongo-express as pods in OpenShift that they can access. Can anyone provide the steps to do that in OpenShift, or point me to a proper resource where I can find the information/steps?
I am new to OpenShift.
Can anyone provide the steps to do that in OpenShift?
This tutorial explains how apps (containers) are deployed from images in OpenShift
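As a rough sketch of those steps with the oc CLI; the image tags, names and credentials below are placeholders, and the environment variables come from the public mongo and mongo-express images on Docker Hub:
# deploy MongoDB from the public image and set root credentials (placeholders)
oc new-app mongo:4.4 --name=mongodb \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=changeme
# deploy mongo-express and point it at the mongodb service created above
oc new-app mongo-express:latest --name=mongoexpress \
  -e ME_CONFIG_MONGODB_SERVER=mongodb \
  -e ME_CONFIG_MONGODB_ADMINUSERNAME=admin \
  -e ME_CONFIG_MONGODB_ADMINPASSWORD=changeme
# expose mongo-express through a route so the team can reach it in a browser
oc expose service/mongoexpress
Note that the stock mongo image expects to run as a fixed user, so on a default OpenShift cluster you may need an image built to run with arbitrary UIDs or an adjusted SecurityContextConstraint.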
Or else proper resource where I can find information/steps
It depends on your needs; the main question is whether you really need container orchestration tools or not. If you need them, then you can consider installing them locally:
Docker for Windows
Minikube
or in the cloud:
Google Kubernetes Engine, aka GKE (it allows you to create a basic Kubernetes cluster in a few clicks)
OpenShift (I haven't dealt with it yet)
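For reference, a hedged sketch of starting the Minikube and GKE options; the cluster name and zone are placeholders:
# local single-node cluster
minikube start
# basic managed cluster on GKE
gcloud container clusters create my-cluster --zone europe-west1-b --num-nodes 3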
From what I've seen, Kubernetes provides a lot of documentation (with examples, etc.) on the topic.
Last but not least, there is a really nice step-by-step guide on how to create a Kubernetes cluster from scratch, "the hard way", if you need the cluster to be fully managed by you.
We're trying to set up a Storm cluster with Google Compute Engine but are having a hard time finding resources. Most tutorials only cover deploying single applications to GCE. We've dockerized the project but don't know how to deploy it to GCP. Any suggestions?
You may try to configure an instance template and create instances with the COS (Container-Optimized OS) image, which already has Docker installed.
Here you can find more information about this.
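A hedged example of that approach with the gcloud CLI; the template name, machine type, zone and container image path are placeholders for your dockerized Storm image:
# instance template based on Container-Optimized OS that runs your container on boot
gcloud compute instance-templates create-with-container storm-node-template \
  --machine-type=n1-standard-2 \
  --container-image=gcr.io/[your-project-id]/storm:latest
# managed instance group built from that template
gcloud compute instance-groups managed create storm-nodes \
  --base-instance-name=storm-node \
  --template=storm-node-template \
  --size=3 \
  --zone=us-central1-a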
Another option is using Kubernetes Engine (GKE), which has more features that give you more control over your workloads; it also supports autoscaling, auto-upgrades and node auto-repair.
I am evaluating a migration of an application working with docker-compose to Kubernetes and came across two solutions: Kompose and compose-on-kubernetes.
I'd like to know their differences in terms of functionality/ease of use to make a decision about which one is better suited.
Both products provide a migration path from docker-compose to Kubernetes, but they do it in slightly different ways.
Compose on Kubernetes runs within your Kubernetes cluster and allows you to deploy your compose setup unchanged on the Kubernetes cluster.
Kompose translates your docker-compose files into a set of Kubernetes resources.
Compose on Kubernetes is a good solution if you want to keep running with docker-compose in parallel to deploying on Kubernetes, and therefore plan to keep the docker-compose format maintained.
If you're migrating completely to Kubernetes and don't plan to continue working with docker-compose, it's probably better to complete the migration using Kompose and use its output as the starting point for maintaining the configuration directly as Kubernetes resources.
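For the Kompose path, a minimal sketch, assuming your compose file is named docker-compose.yml:
# generate Kubernetes manifests (roughly one Deployment/Service per compose service)
kompose convert -f docker-compose.yml -o k8s/
# review the generated YAML, then apply it to the cluster
kubectl apply -f k8s/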
I have achieved a vast amount of automation in terms of creating projects, creating Kubernetes Engine clusters and other IaaS elements by using the GCP APIs from the Python GCP client.
But I am not confident about deploying Docker container workloads to the provisioned cluster. The GCP documentation points to kubectl apply -f config.yaml, but this entails using command-line tools, first switching to the project, etc.
This is exactly what I am trying to get away from. Is there a google API that lets us accomplish this?
And no, I do not want third party deployment automation tools for various reasons.
You can use a Kubernetes client library to deploy workloads programmatically.
Here are some clients for Kubernetes:
Go client: client-go
Java client: kubernetes-client/java
Python client: kubernetes-client/python
I am trying to install Kubernetes in a self-hosted production environment running on Ubuntu 16.04. I am not able to find any helpful guide to set up a production-grade Kubernetes master and connect worker nodes to it.
Any help is much appreciated.
You can use Kubespray to set up a self-hosted production environment.
https://github.com/kubernetes-incubator/kubespray
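Roughly, the flow looks like this; check the repository's README for the current inventory layout, and the node IPs below are placeholders for your Ubuntu machines:
git clone https://github.com/kubernetes-incubator/kubespray.git
cd kubespray
# install Ansible and the other Python dependencies
pip install -r requirements.txt
# copy the sample inventory and fill in your node IPs
cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(10.0.0.1 10.0.0.2 10.0.0.3)
CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
# run the playbook against the inventory to bring up the cluster
ansible-playbook -i inventory/mycluster/hosts.ini --become --become-user=root cluster.yml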
It depends on what you mean by "self-hosted". Most people think it's about deploying Kubernetes in their own environment.
If you want to compare different approaches to deploying k8s in a custom environment, refer to this article, which covers a bunch of options suitable for that.
If you are interested in how to set up an HA Kubernetes cluster using kubeadm, refer to this article.
However, in Kubernetes there is a different definition of "self-hosted": it means running Kubernetes itself as a workload in Kubernetes. If you are interested in a real self-hosted approach (in a custom environment), refer to this article.
Hope this helps
You can use Typhoon, which can be used to provision an HA Kubernetes cluster.
Here is a sample configuration which I used to bring up my own home cluster.
A few advantages of Typhoon are that you can choose which cloud provider to provision your infrastructure on, which is done using Terraform, and the fact that it gives you upstream Kubernetes is a big plus too.
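Provisioning with Typhoon then follows the usual Terraform workflow once a configuration like the sample above is in place:
# initialize providers/modules, preview the changes, then create the cluster
terraform init
terraform plan
terraform apply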
Internally, it uses bootkube to bring up a temporary control plane, consisting of:
api-server
controller-manager
scheduler
and then, once the temporary control plane is up, the self-hosted control-plane objects are injected through its API server to bring up the final Kubernetes cluster.
Have a look at this KubeCon talk given by CoreOS, which explains how this works.