deploy corda nodes on kubernetes containers - deployment

I have a Corda example and a Spring web server built and deployed on an Azure VM.
Now I would like to try running each node in Kubernetes containers. Any references?

Yes, this is totally doable. Take a look at this example repo on running Corda Docker containers in a Docker Compose cluster alongside the individual processes.
Hope this helps:
https://github.com/EricMcEvoyR3/corda-docker-compose
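The same containers can be translated to Kubernetes. Here is a minimal sketch of what that might look like — note that the image tag, ports and config path are assumptions based on the published Corda Docker images, not taken from the repo above, and a real node would normally be a StatefulSet with a persistent volume for its certificates and database:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: corda-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: corda-node
  template:
    metadata:
      labels:
        app: corda-node
    spec:
      containers:
      - name: corda
        image: corda/corda-zulu-java1.8-4.9   # placeholder tag; pick the release you use
        ports:
        - containerPort: 10002   # assumed p2p port
        - containerPort: 10003   # assumed RPC port
        volumeMounts:
        - name: node-config
          mountPath: /etc/corda  # assumed config path for the official image
      volumes:
      - name: node-config
        configMap:
          name: corda-node-conf  # ConfigMap holding your node.conf
EOF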

Related

Is there any way to deploy multi-container application in K8S single node for production?

What I want to do is deploy a multi-container application:
- on RHEL OS
- on a Red Hat supportable product (if possible)
- in a single-node K8S cluster (bare metal machine)
I found several ways, but I have concerns about each.
minikube, minishift, OKD, CodeReady Containers:
First, they run in a VM, but what I want is to run on the HOST.
Second, their docs say they are not for production environments.
So, is there any PaaS for a single-node cluster usable as a production environment?
Docker, Docker Compose:
The deployment target OS should probably be RHEL 8. I guess it is not a good idea to use Docker, because Red Hat products are moving away from Docker; even in the RHEL 8 repository there is no docker rpm for el8 yet.
My questions are:
Is there any PaaS for a single-node cluster usable as a production environment?
If one doesn't exist, is docker-compose the best option?
As was already mentioned, you should not use a single-node setup in a production environment.
You should not do that because if your server goes down, your service is offline. There is nothing to switch over to, nothing to continue the work that was in progress.
If you still want to set up a single-node Kubernetes cluster, you can do that using kubeadm. I think this is as close to production grade as you can get; a minimal bootstrap is sketched below.
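For illustration, bootstrapping a single-node cluster with kubeadm usually amounts to an init plus removing the control-plane taint so workloads can schedule on it; the pod CIDR below is an assumption tied to a Flannel-style network plugin:

# on the machine, after installing kubeadm, kubelet and a container runtime
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# make kubectl usable for your user
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# allow pods on the control-plane node, since there are no workers
# (older releases use the node-role.kubernetes.io/master- taint instead)
kubectl taint nodes --all node-role.kubernetes.io/control-plane-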
Other than that, as an alternative you can play with Installing Kubernetes with Minikube or Install a local Kubernetes with MicroK8s.
It's up to you which one you choose, but remember that this should not run as production; it should be a lab or test environment which, if it works as expected, can be migrated to a multi-node production grade cluster.
As for a PaaS on a single node, there is Dokku.
Docker powered mini-Heroku. The smallest PaaS implementation you've ever seen.
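For flavor, deploying to Dokku is essentially a git push; the app name and host below are placeholders:

# on the Dokku host
dokku apps:create myapp

# on your workstation: add the Dokku remote and push to deploy
git remote add dokku dokku@dokku.example.com:myapp
git push dokku main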
And if you would consider using a cloud PaaS, you can choose from AWS Elastic Beanstalk, Azure App Service or Google App Engine.
A single-node cluster is not recommended for production applications. You need scalability, high availability and fault tolerance for production apps, and you must have more than one node to get those properties.

Docker desktop kubernetes add node

I'm running Docker Desktop with the Kubernetes option turned on. I have one node called docker-for-desktop.
Now I have created a new Ubuntu Docker container. I want to add this container to my Kubernetes cluster. Can it be done? How can I do it?
As far as I'm aware, you cannot add a node to Docker for Desktop with Kubernetes enabled.
Docker for Desktop is a single-node Kubernetes or Docker Swarm cluster. You might try kubernetes-the-hard-way, as it explains how to set up a cluster and add nodes manually without the use of kubeadm.
But I don't think this will work, as there would be a lot of issues with getting the network set up correctly.
You can also follow the instructions for installing kubeadm with kubelet and kubectl on a Linux machine and add a node using kubeadm join.
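For reference, joining a node with kubeadm is typically two commands; the address, token and hash below are placeholders that the first command prints out for you:

# on the existing control-plane node: print a join command with a fresh token
kubeadm token create --print-join-command

# on the new node, run the printed command (placeholder values shown)
sudo kubeadm join 192.168.0.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>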

Convert monolith application to microservice implementation in Kubernetes

I want to deploy my application in the cloud using a Kubernetes based deployment. It consists of 3 layers: Kafka, Ignite (as DB and processing) and Python (ML engine).
From the Kafka layer we get a data stream as input, which is then passed to Ignite for processing (feature engineering). After processing, the data is passed to the Python
server for further ML predictions. How can I break this monolith application into microservices in Kubernetes?
Also, can using Istio provide some advantage?
You can use the bitnami/kafka image on Docker Hub from Bitnami if you want a pre-built image.
Push the image to your container registry with the gcloud command:
gcloud docker -- push [your image container registry path]
Deploy the image using the UI or the gcloud command.
Expose the ports (2181, 9092-9099), or whichever ports are exposed in the pulled image, after the deployment on Kubernetes.
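Exposing those ports can be as simple as creating Services for the deployment; the deployment name kafka below is an assumption:

# assumes a deployment named "kafka"; adjust names and ports to your image
kubectl expose deployment kafka --name=kafka-broker --port=9092 --target-port=9092
kubectl expose deployment kafka --name=kafka-zookeeper --port=2181 --target-port=2181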
Here is the link of the Ignite image on Google Compute; you just have to deploy it on the Kubernetes engine and expose the appropriate ports.
For Python you just have to build your Python app using a Dockerfile, as ignacio suggested.
It is possible, and in fact those tools are easy to deploy in Kubernetes. First, you need to gain some expertise in Kubernetes basics, especially in StatefulSets and persistent volumes, since Kafka and Ignite are stateful components; a sketch of what that looks like follows.
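To make the StatefulSet/persistent-volume point concrete, here is the rough shape of such a manifest, using Kafka as the example. The image, data path and sizes are placeholders, this is not taken from the repository linked below, and a real broker also needs ZooKeeper/KRaft configuration that is omitted here:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka          # headless Service giving each broker a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: bitnami/kafka:latest     # placeholder image/tag
        ports:
        - containerPort: 9092
        volumeMounts:
        - name: data
          mountPath: /bitnami/kafka     # assumed data path for this image
  volumeClaimTemplates:         # one PersistentVolumeClaim per broker, kept across restarts
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
EOF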
To deploy a Kafka cluster in Kubernetes, follow the instructions from this repository: https://github.com/Yolean/kubernetes-kafka
There are other alternatives, but this is the only one I've tested in production environments.
I have no experience with Ignite, but these docs provide a step-by-step guide. Maybe someone else can share other resources.
About Python, just dockerize your ML model as you would any other Python app. In the official Docker image for Python you'll find a basic Dockerfile to do that. Once you have your Docker image pushed to a registry, just create a YAML file describing the deployment and apply it to Kubernetes.
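A sketch of that last step, assuming your image has already been pushed as registry.example.com/ml-engine:v1 (a placeholder) and the server listens on port 5000 (also an assumption):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-engine
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ml-engine
  template:
    metadata:
      labels:
        app: ml-engine
    spec:
      containers:
      - name: ml-engine
        image: registry.example.com/ml-engine:v1   # placeholder registry path
        ports:
        - containerPort: 5000                      # assumed prediction-server port
EOF

# give the predictor a stable in-cluster address for the other layers to call
kubectl expose deployment ml-engine --port=5000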
As an alternative for the last step, you can use Draft to dockerize and deploy Python code.
Good luck!

how to run an onpremise service fabric cluster in docker containers for windows?

I am not sure if this is possible, but if it is, and someone has the experience to do so: could we not create a Docker image for Windows that represents a node?
I imagine that we would have a folder with configuration files that can be mounted with docker -v.
Then if one needed a 5 node cluster, I would just run
docker run -v c:/dev/config:c:/config microsoft/servicefabric create-node --someOptions
for each node we wanted.
Are there any barriers to doing this? Has anyone created Docker images for doing so? This would really simplify setting up a cluster on premise.
Using the 6.1 release you can run a cluster in a container, for dev/test purposes.
I'm not sure if you can get it to work with multiple containers, though.
Service Fabric Linux Clusters in a Container
We have provided a pre-configured Docker container image to run Service Fabric Linux clusters in a container. The main scenario is to provide a light-weight development experience for MacOS, but the container will also run on Linux and Windows, using Docker CE.
To get started, follow the directions here:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-get-started-mac
and
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-local-linux-cluster-windows
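For orientation, that one-box cluster is started with a plain docker run; the image name and ports below are my recollection of the linked instructions and may have changed, so verify against the docs before relying on them:

# single-container Service Fabric dev cluster
# (image path and ports are assumptions; see the linked docs)
docker run -d --name sftestcluster \
    -p 19080:19080 -p 19000:19000 \
    mcr.microsoft.com/service-fabric/onebox:latest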

Orchestrating containers

I'm trying to use Kubernetes to deploy Docker containers and I found this tutorial.
So according to this tutorial, what are the prerequisites?
They said that "services that are typically on a separate Kubernetes master system and two or more Kubernetes node systems are all running on a single system."
But I don't understand how we can run both the master and the nodes on a single system (for example, I have one EC2 instance with IP address 52.192.x.x).
That is a guide about running Kubernetes specifically on RedHat Atomic nodes. There are lots of guides about running Kubernetes on other types of nodes; see the Creating a Kubernetes Cluster page on docs.k8s.io.
One of the guides on the Kubernetes site shows how to run a local docker-based cluster, which should also work for you on a single node in the cloud.
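As a concrete way to try that on one machine today, a local Docker-based cluster can be stood up with Minikube's docker driver (Minikube is also mentioned elsewhere on this page); this is a sketch of the general approach, not the exact guide referenced above:

# runs the whole control plane and node inside Docker on one host
minikube start --driver=docker
kubectl get nodes   # should show a single Ready node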