Hyperledger Fabric version 1.2 multiple hosts deployment

I've seen a few tutorials online that describe the steps for deploying a Hyperledger Fabric network across multiple hosts. All of them suggest using Docker Swarm for the network deployment.
I wanted to know whether Docker Swarm is the standard for Hyperledger Fabric multi-host deployment, and also whether there is a deployment standard for Hyperledger Fabric version 1.2.

Hyperledger Cello is a blockchain provisioning and operation system, which helps manage blockchain networks in an efficient way.
You can have a look at it here: https://github.com/hyperledger/cello
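As far as I know there is no single official standard for multi-host deployment in Fabric 1.2; Docker Swarm with an attachable overlay network is simply the approach most tutorials settle on, because it gives containers on different hosts a shared network. As a rough illustration of that approach, here is a minimal, hypothetical stack file that attaches one peer to a pre-created overlay network; all names (fabric-net, peer0-org1, the example.com addresses) are my own placeholders, not anything prescribed by Fabric:

```yaml
# Hypothetical docker-stack.yaml sketch: one Fabric peer on an overlay
# network so containers on different Swarm hosts can reach each other.
# All names below are illustrative placeholders.
version: "3.4"

networks:
  fabric-net:
    external: true  # create it first (attachable, so chaincode containers can join):
                    # docker network create --driver overlay --attachable fabric-net

services:
  peer0-org1:
    image: hyperledger/fabric-peer:1.2.0
    environment:
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      # make the chaincode containers the peer launches join the same overlay network
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=fabric-net
    volumes:
      # the peer talks to the host's docker daemon to launch chaincode
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      fabric-net:
        aliases:
          - peer0.org1.example.com
```

You would deploy it with docker stack deploy -c docker-stack.yaml fabric, with analogous services for the orderers, CAs and remaining peers.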

Related

How do I deploy WSO2 on Kubernetes without using Google Cloud?

I want to deploy WSO2 API Manager with Kubernetes.
Should I use Google Cloud?
Is there another way?
The helm charts [1] for APIM can be deployed on GKE, AKS, EKS, etc. You can even deploy the all-in-one simple deployment pattern [2] in a local Kubernetes cluster like minikube.
You might have to use a cloud provider for more advanced patterns since they require more resources to run.
All these charts are provided as samples to give an idea of the deployment patterns. It is not recommended to deploy them as-is in real production scenarios, since resource requirements and infrastructure vary with the use case.
[1] https://github.com/wso2/kubernetes-apim
[2] https://github.com/wso2/kubernetes-apim/tree/master/simple/am-single

External Chaincode Pod on Kubernetes in Hyperledger Fabric v1.4

From what I have seen so far, in a Hyperledger Fabric v1.4 network deployed using Kubernetes, the chaincode container and the peer container coexist within the same pod. An example can be found here: https://medium.com/@oap.py/deploying-hyperledger-fabric-on-kubernetes-raft-consensus-685e3c4bb0ad . Is it possible to have a deployment where the chaincode container and the peer container live in two separate pods? If yes, how do I go about implementing this in Hyperledger Fabric v1.4? From my research, it is possible to do so in Hyperledger Fabric v2.1 using external chaincode launchers. However, I am restricted to Fabric v1.4 currently.
As you point out, Fabric v2.0 introduced external builders, which are specifically targeted at allowing operators to choose how their chaincodes are built and executed. With external builders it's certainly possible to trigger the creation of a separate pod in which to launch the chaincode.
Unfortunately, in Fabric v1.4.x there is a strong dependency on Docker. You could potentially launch your docker daemon in a separate privileged pod, securely authenticate to it via TLS, and launch your chaincodes there. You can see the docker daemon connection configuration in the sample core.yaml.
As a warning, I'm unaware of any users who deploy peers connecting to a remote docker daemon. I don't see any reason it should not work, but it's also not a well-tested path. As external builders are available in more recent versions of Fabric, I don't expect much community support for novel docker configurations.
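For reference, the docker daemon connection settings live under the vm key of core.yaml. Below is a sketch of what that fragment could look like for a remote, TLS-secured daemon; the tcp:// endpoint and the certificate paths are assumptions on my part (the stock sample points at the local Unix socket with TLS disabled):

```yaml
# Fragment of the peer's core.yaml (vm section). The tcp:// endpoint and
# certificate paths are assumptions for a remote TLS-secured daemon; the
# stock sample uses unix:///var/run/docker.sock with tls.enabled: false.
vm:
  endpoint: tcp://docker-daemon.fabric.svc.cluster.local:2376
  docker:
    tls:
      enabled: true
      ca:
        file: /etc/hyperledger/fabric/docker-tls/ca.crt
      cert:
        file: /etc/hyperledger/fabric/docker-tls/tls.crt
      key:
        file: /etc/hyperledger/fabric/docker-tls/tls.key
```

The same values can be set through the peer's CORE_VM_ENDPOINT and CORE_VM_DOCKER_TLS_* environment variables instead of editing the file.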

How do I deploy a Hyperledger Fabric blockchain to Kubernetes?

I want to set up my Hyperledger blockchain application in a Kubernetes cluster.
I don't want to encourage questions like this, but here are some steps that could possibly help you:
Ensure your application runs correctly locally on Docker.
Construct your Kubernetes configuration files. What you will need:
A deployment or a statefulset for each of your peers.
A statefulset for the CouchDB instance of each of your peers.
A deployment or a statefulset for each of your orderers.
One service per peer, orderer and CouchDB instance (to allow them to communicate).
A job that creates and joins the channels (a minimal sketch of such a job follows after this answer).
A job that installs and instantiates the chaincode.
Generated crypto-material and network-artifacts.
Kubernetes Secrets or persistent volumes that hold your crypto-material and network-artifacts.
An image of your dockerized application (I assume you have some sort of server using an SDK to communicate with the peers) uploaded to a container registry.
A deployment that uses that image and a service for your application.
Create a Kubernetes cluster either locally or on a cloud provider and install the kubectl CLI on your computer.
Apply the configuration files (e.g. kubectl apply -f peerDeployment.yaml) to your cluster in this order:
Secrets
Peers, CouchDBs, orderers (deployments, statefulsets and services)
Create channel jobs
Join channel jobs
Install and instantiate chaincode job
Your application's deployment and service
If everything was configured correctly, you should have a running HLF platform in your Kubernetes cluster. It goes without saying that you have to research each step to understand what you need to do, and experiment a lot.
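To make the job items above more concrete, here is a hypothetical sketch of the "create and join the channels" job, using the fabric-tools CLI image. The channel name, service addresses, MSP ID and Secret names are all placeholders you would replace with your own:

```yaml
# Hypothetical Job sketch for creating a channel and joining one peer to it.
# Channel name, service addresses, MSP ID and Secret names are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: create-channel
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: cli
          image: hyperledger/fabric-tools:1.4
          command: ["sh", "-c"]
          args:
            - peer channel create -o orderer0:7050 -c mychannel -f /artifacts/channel.tx &&
              peer channel join -b mychannel.block
          env:
            - name: CORE_PEER_ADDRESS
              value: peer0:7051
            - name: CORE_PEER_LOCALMSPID
              value: Org1MSP
            - name: CORE_PEER_MSPCONFIGPATH
              value: /crypto/users/Admin@org1/msp
          volumeMounts:
            - name: artifacts
              mountPath: /artifacts
            - name: crypto
              mountPath: /crypto
      volumes:
        - name: artifacts
          secret:
            secretName: network-artifacts
        - name: crypto
          secret:
            secretName: crypto-material
```

The install/instantiate job follows the same pattern, with peer chaincode install and peer chaincode instantiate as the commands.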

Openstack Heat and Kubernetes Deployment Integration

I want to create an OpenStack cluster with Heat and deploy Kubernetes on it. How can I integrate Heat with Kubernetes? Any solutions or suggestions?
I think the Magnum service is what you are looking for, since it allows you to create and manage not only Kubernetes clusters but also other container orchestration engines such as Docker Swarm or Mesos, and it is fully integrated into OpenStack.
Indeed, at its core, Magnum uses Heat to deploy and configure the nodes of the cluster, so if you don't want to reinvent the wheel, give it a go (see the install guide).
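If you do end up writing Heat yourself, a cluster node is just a Nova server resource in a HOT template. A minimal, purely illustrative fragment could look like the one below; every property value here is an assumption, and Magnum internally generates far more elaborate templates of this kind:

```yaml
# Minimal illustrative Heat (HOT) template: one server that could act as
# a Kubernetes node. Image, flavor, key and network names are assumptions.
heat_template_version: 2016-10-14
description: Single node for a Kubernetes cluster (illustration only)

resources:
  k8s_node:
    type: OS::Nova::Server
    properties:
      image: fedora-atomic
      flavor: m1.medium
      key_name: mykey
      networks:
        - network: private
```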

Deploy Kubernetes on a self-hosted production environment

I am trying to install Kubernetes on a self-hosted production environment running Ubuntu 16.04. I am not able to find any helpful guide to set up a production-grade Kubernetes master and connect worker nodes to it.
Any help is much appreciated.
You can use Kubespray to self-host a production environment:
https://github.com/kubernetes-incubator/kubespray
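Kubespray drives the installation with Ansible, so the main thing you describe is an inventory. A minimal, hypothetical hosts.yaml could look like the sketch below; the hostnames, IPs and even the group names are assumptions (group names have changed between Kubespray releases), so check the sample inventory in the repo for your version:

```yaml
# Hypothetical Kubespray inventory (inventory/mycluster/hosts.yaml).
# Hostnames, IPs and group names are placeholders; verify against the
# sample inventory shipped with your Kubespray release.
all:
  hosts:
    node1:
      ansible_host: 192.168.1.10
    node2:
      ansible_host: 192.168.1.11
  children:
    kube-master:
      hosts:
        node1:
    kube-node:
      hosts:
        node1:
        node2:
    etcd:
      hosts:
        node1:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
```

You would then run the repo's cluster.yml playbook against this inventory with ansible-playbook.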
It depends on what you mean by "self-hosted". Most people take it to mean deploying Kubernetes in their own environment.
If you want to compare different approaches to deploying k8s in a custom environment, refer to this article, which covers a bunch of options suitable for that.
If you are interested in how to set up an HA Kubernetes cluster using kubeadm, refer to this article.
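For the kubeadm HA route, the key idea is to put a load balancer in front of the API servers and point controlPlaneEndpoint at it. A minimal, hypothetical kubeadm configuration (the endpoint address, Kubernetes version and pod subnet below are assumptions) could look like:

```yaml
# Hypothetical kubeadm config for an HA control plane. The load-balancer
# address, Kubernetes version and pod subnet are assumptions.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
controlPlaneEndpoint: "lb.example.com:6443"  # load balancer in front of all masters
networking:
  podSubnet: 10.244.0.0/16
```

You would initialize the first master with kubeadm init --config on this file and join the remaining masters with kubeadm join --control-plane.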
However, in Kubernetes there is a different definition of "self-hosted": it means running Kubernetes itself as a workload on Kubernetes. If you are interested in a real self-hosted approach (in a custom environment), refer to this article.
Hope this helps
You can use Typhoon to provision an HA Kubernetes cluster.
Here is a sample configuration which I used to bring up my own home cluster.
A few advantages of Typhoon are that you can choose the cloud provider used to provision your infrastructure (which is done using Terraform), and the fact that it gives you upstream k8s is a big plus too.
Internally, it uses bootkube to bring up the temporary control plane, which consists of:
api-server
controller-manager
scheduler
and then, once the temporary control plane is up, we inject the control-plane objects into the API server to get our k8s cluster.
Have a look at this KubeCon talk given by CoreOS, which explains how this works.