I need to run Hyperledger Fabric instances on 4 different machines: PC-1 should contain the CA and peers of ORG-1 in containers, PC-2 should contain the CA and peers of ORG-2, PC-3 should contain the orderer (solo), and PC-4 should run the Node API. Is my approach missing something? If not, how can I achieve this?
I would recommend that you look at the Ansible driver in the Hyperledger Cello project to manage deployment across multiple hosts/VMs.
In short, you need to establish network visibility across the set of host/VM nodes so that each peer knows about the orderer to which it will connect and so that gossip can operate. The Cello project does this for you with a set of driver options; the Ansible driver seems to have the most promise.
The Ansible driver can provision to a variety of cloud platforms including AWS, Azure, OpenStack and bare metal.
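Whatever tool ends up doing the deployment, the essential point is that each peer container advertises addresses that are reachable from the other hosts. As a rough illustration (not Cello output), a peer service on PC-1 might look something like the sketch below; the hostnames, MSP ID, and the 192.168.1.3 orderer IP are placeholders you would replace with your own values or real DNS entries.

```yaml
# Hypothetical docker-compose snippet for a peer on PC-1.
version: '2'
services:
  peer0-org1:
    image: hyperledger/fabric-peer
    environment:
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      # Advertise an endpoint that peers on the other machines can reach (gossip)
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
    ports:
      - "7051:7051"
    extra_hosts:
      # Map the orderer's hostname (running on PC-3) to its reachable IP
      - "orderer.example.com:192.168.1.3"
```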
We have an app that uses UDP broadcast messages to form a "cluster" of all instances running in the same subnet.
We can successfully run this app in our (pretty standard) local K8s installation by using hostNetwork:true for the pods. This works because all K8s nodes are in the same subnet and broadcasting is possible. (A minor note: the K8s setup uses the Flannel networking plugin.)
Now we want to move this app to the managed K8s service on AWS. But our initial attempts have failed: the 2 daemons running in 2 different pods didn't see each other. We thought that was most likely because the auto-generated EC2 worker node instances for the AWS K8s service reside on different subnets. We then created 2 completely new EC2 instances in the same subnet (and the same availability zone) and tried running the app directly on them (not as part of K8s), but that also failed: they could not communicate via broadcast messages even though the 2 EC2 instances were on the same subnet and availability zone.
Hence, the following questions:
Our preliminary search suggests that AWS EC2 probably does not support broadcasting/multicasting, but we still wanted to ask: is there a way to enable it (on AWS or another cloud provider)?
We had used hostNetwork:true because we thought it would be much harder, if not impossible, to get broadcasting working with K8s pod networking. But it seems some companies offer K8s network plugins that support this. Does anybody have experience with (or a recommendation for) any of them? Would they work on AWS, for example, considering that AWS doesn't support it at the EC2 level?
Would much appreciate any pointers on how to approach this and whether we have any options at all.
Thanks
Conceptually, you need to create an overlay network on top of the native VPC network. There is a CNI plugin that supports multicast, and AWS has a blog post about it.
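As a rough sketch of what that looks like on the workload side, assuming a multicast-capable CNI such as Weave Net is already installed as the cluster's pod network: the pods drop hostNetwork:true and discover each other over the overlay instead. The image name and UDP port below are placeholders, and the app's subnet broadcast would need to become multicast on the overlay.

```yaml
# Hypothetical Deployment once a multicast-capable CNI provides the pod network.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-daemon
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cluster-daemon
  template:
    metadata:
      labels:
        app: cluster-daemon
    spec:
      # No hostNetwork: true -- pods use the overlay network,
      # so discovery traffic goes over multicast rather than subnet broadcast.
      containers:
        - name: daemon
          image: example/cluster-daemon:latest   # placeholder image
          ports:
            - containerPort: 5000                # placeholder discovery port
              protocol: UDP
```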
I have recently been reading more about infrastructure as a service (IaaS) and platform as a service (PaaS) and had some questions. I see when we opt for a PaaS solution, it is generally very easy to create the infrastructure as the cloud providers handle that for us and we can even automate the deployment using an infrastructure as code solution like Terraform.
But if we use an IaaS solution or even a local on-premise cluster, we seem to lose a lot of the automation that PaaS allows. So I was curious: are there any good tools out there for automating infrastructure deployment on a local cluster that is not in the cloud?
The best thing I could think of was to run a local Kubernetes cluster and then Dockerize each of the infrastructure components, but this seems difficult as each node in the cluster will need its own specific configuration files.
From my basic Googling, it seems like there is not a good solution to this.
Edit:
I was not clear enough with my original intentions. I have two problems I am trying to solve.
How do I automate infrastructure deployment locally? For example, suppose I wanted to create a Hadoop HDFS cluster. I would need to configure one node to be the namenode with an accessible IP, and the other nodes to be datanodes that are aware of the namenode's IP. At the moment, I have to do this manually by logging into each node, checking its IP, and then configuring each one. How would I automate this? If I were to use a Kubernetes approach, how do I specify that one of the running pods needs to be the namenode and the others are datanodes? How do I find the pods' IPs and have them be aware of the namenode IP?
The next problem I have is very similar to the first, but with a slight modification. How would I deploy specific configuration files to each node? For instance, in Kafka the configuration file for one node requires the IPs of the ZooKeeper nodes, as well as the IP it should listen on. This may be different for every node in the cluster. Is there a good way to make these config files pod-specific, so that I do not have to do bash text processing to insert the correct contents into each pod's config files?
You can use Terraform for all of your on-premise infrastructure automation, and Ansible for configuration management.
Let's say you have three HPE servers. Install K8s or VMware on them using Ansible; then you can treat them as three availability zones in one region, the same as on AWS. From there you can start deploying Dockerized apps or Helm charts using Terraform.
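For the per-node configuration files from the question (e.g. Kafka needing the ZooKeeper IPs and its own listen address), Ansible's template module is the usual tool. A minimal sketch follows; the kafka and zookeeper inventory groups and the server.properties.j2 template are assumptions you would define yourself, and the template would reference the listen_ip and zookeeper_hosts variables.

```yaml
# Hypothetical playbook: render a per-node Kafka config from a Jinja2 template.
- hosts: kafka
  become: true
  tasks:
    - name: Render broker config with this host's IP and the ZooKeeper addresses
      template:
        src: server.properties.j2
        dest: /etc/kafka/server.properties
      vars:
        # This host's primary IP, discovered by Ansible facts
        listen_ip: "{{ ansible_default_ipv4.address }}"
        # Addresses of all hosts in the (hypothetical) zookeeper inventory group
        zookeeper_hosts: "{{ groups['zookeeper'] | map('extract', hostvars, 'ansible_host') | list }}"
```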
Summary:
Ansible for installing and configuring K8s.
Terraform for provisioning K8s.
Helm for installing apps on K8s.
After this you'll have a basic automated on-premise infrastructure.
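If you lean on Kubernetes for the Hadoop example from the question, the usual answer to "which pod is the namenode and how do the others find it" is a StatefulSet plus a headless Service, which gives every pod a stable DNS name. A rough sketch, with the image, port, and the NAMENODE_ADDRESS variable as placeholders:

```yaml
# Hypothetical HDFS-style example: a headless Service gives each pod a stable
# DNS name (hdfs-0.hdfs, hdfs-1.hdfs, ...), so "hdfs-0" can act as the namenode
# and the other pods can reach it by name instead of by discovering IPs.
apiVersion: v1
kind: Service
metadata:
  name: hdfs
spec:
  clusterIP: None        # headless: per-pod DNS records, no load-balanced VIP
  selector:
    app: hdfs
  ports:
    - port: 8020         # placeholder namenode RPC port
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: hdfs
spec:
  serviceName: hdfs
  replicas: 3
  selector:
    matchLabels:
      app: hdfs
  template:
    metadata:
      labels:
        app: hdfs
    spec:
      containers:
        - name: node
          image: example/hdfs:latest          # placeholder image
          env:
            - name: NAMENODE_ADDRESS          # hypothetical variable your image reads
              value: hdfs-0.hdfs:8020
```

The datanode pods can then resolve hdfs-0.hdfs through cluster DNS, so no manual IP lookup or per-pod text processing is needed.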
I am a newbie in Kubernetes.
I have 19 LANs with 190 machines in total.
Each of the 19 LANs has 10 machines and 1 exposed IP.
I have different websites/apps and their environments that are assigned to each LAN.
How do I manage my Kubernetes cluster and do setup/housekeeping?
I would like to have a single portal or manager to manage the websites and environments (dev, QA, prod) and keep them isolated.
Is that possible?
I only have a vague idea of what you want to achieve, so here goes nothing.
Since Kubernetes has a lot of convenience tools for setting up a cluster on a public cloud platform, I'd suggest starting by going through "kubernetes-the-hard-way". It is a guide to setting up a cluster on Google Cloud Platform without any additional scripts or tools, but the instructions can be applied to a local setup as well.
Once you have an operational cluster, next step should be to setup an Ingress Controller. This gives you the ability to use one or more exposed machines (with public IPs) as gateways for the services running in the cluster. I'd personally recommend Traefik. It has great support for HTTP and Kubernetes.
Once you have the ingress controller set up, your cluster is pretty much ready to use. The process for deploying a service really depends on the service's requirements, but the rule of thumb is to use a Deployment and a Service for stateless workloads, and a StatefulSet and headless Services for stateful workloads that need peer discovery. This is obviously very generalized and has many exceptions.
For managing different environments, you could split your resources into different namespaces.
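For example, a Namespace per environment plus host-based Ingress rules (which Traefik, or any ingress controller, will pick up) gives you both the isolation and a single entry point. The hostnames and service names below are placeholders:

```yaml
# Hypothetical setup: one namespace per environment, with host-based routing
# through the ingress controller.
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: website
  namespace: prod
spec:
  rules:
    - host: www.example.com        # prod traffic
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: website      # placeholder Service in the prod namespace
                port:
                  number: 80
```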
As for the single portal to manage it all, I don't think that anything as such exists, but I might be wrong. Besides, depending on your workflow, you can create your own portal using the Kubernetes API but it requires a good understanding of Kubernetes itself.
We have a 5-node Azure Service Fabric Cluster as our main Production microservices hub. Up until now, for testing purposes, we've just been pushing out separate versions of our applications (the production application with ".Test" appended to the name) to that production SFC.
We're looking for a better approach, namely a separate test Service Fabric cluster. But the issue comes down to costs: the smallest SFC you can create in Azure is 3 nodes, and you can't shut down an SFC when it's not being used, which we would also need to do to save on costs.
So now I'm looking at just spinning up a plain Windows VM in Azure and installing the local Service Fabric Cluster app (which allows just one-node setup). Is it possible to do this and be able to communicate with the cluster from outside the VM?
What you are trying to accomplish is to set up a standalone cluster. The steps to do it are documented in the docs.
Yes, you can access the cluster from outside the VM. In simple terms, enable network access and open the firewall ports (by default Service Fabric uses 19000 for client connections and 19080 for Service Fabric Explorer).
Technically, both deployments (the standalone guide and the dev cluster) are very similar. The main difference is that you have better control over the templates when following the standalone guide; with the development setup you don't have many options and the whole process is automated.
PS: I would highly recommend having a UAT/staging cluster with the exact same specs as the production version; the approach you used could be a good idea for a staging environment. Having environments with different specs increases the risk of issues, mainly related to configuration and concurrency.
Does Kubernetes have the ability/need to hook into a cloud provider (AWS, Rackspace) to spin up new nodes? If so, how does it then provision the node - does it run Ansible etc? Or will Kubernetes need to have all the nodes available to it manually?
The short answer is no.
The longer answer is explained in the following blog posting that describes the new kubeadm command:
http://blog.kubernetes.io/2016/09/how-we-made-kubernetes-easy-to-install.html
There are three stages in setting up a Kubernetes cluster, and we
decided to focus on the second two (to begin with):
Provisioning: getting some machines
Bootstrapping: installing Kubernetes on them and configuring certificates
Add-ons: installing necessary cluster add-ons like DNS and monitoring services, a pod network, etc
We realized early on that there's enormous variety in the way that
users want to provision their machines.
They use lots of different cloud providers, private clouds, bare
metal, or even Raspberry Pi's, and almost always have their own
preferred tools for automating provisioning machines: Terraform or
CloudFormation, Chef, Puppet or Ansible, or even PXE booting bare
metal. So we made an important decision: kubeadm would not provision
machines. Instead, the only assumption it makes is that the user has
some computers running Linux.
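In other words, you bring the machines with whatever tool you like, and kubeadm only handles bootstrapping. As a rough sketch of what that step looks like today: you run `kubeadm init` on the first machine, optionally driven by a config file, and `kubeadm join` on the others. The version and pod CIDR below are purely illustrative values, not recommendations.

```yaml
# Illustrative kubeadm configuration; pass it with: kubeadm init --config kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
networking:
  podSubnet: 10.244.0.0/16   # must match the pod network add-on you install afterwards
```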
Update
http://blog.kubernetes.io/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters.html