What is the difference between a fabric container and a standalone container? - redhat

While going through the Red Hat Fuse ESB documentation, I found fabric containers mentioned as something different from standalone containers. Are Fabric containers virtual/logical containers?
Link : https://access.redhat.com/documentation/en-US/Fuse_ESB_Enterprise/7.1/html/Deploying_into_the_Container/files/FESBLocateFabric.html

Fabric containers are not virtual or logical containers; they are real JVM processes that are started and controlled by the Fabric servers.
Standalone containers are single JVMs that monitor their "deploy" folder by default to look for artifacts to deploy. You can start a standalone Fuse server by simply running bin/fuse. This server will not contact any other Fuse servers.
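As a minimal sketch of the standalone workflow (the bundle name and install path below are placeholders):

    # start a standalone container; it watches its deploy/ folder for artifacts
    bin/fuse
    # from another shell, hot-deploy a bundle by dropping it into that folder
    cp my-bundle-1.0.jar /opt/fuse/deploy/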
A Fabric is a clustered group of Fuse instances. Because the cluster needs to distribute its artifacts according to a shared configuration, a Fabric container no longer looks at its deploy folder (its contents are ignored); instead it uses "profiles", which are stored on the Fabric servers.
If you were to create a cluster of three hardware servers, you would run three Fabric servers on them.
On the first server, you start Fuse by running bin/start.
Then run bin/client -r 10 to connect to the server.
At this point you still have a standalone instance. To turn it into a Fabric server, run fabric:create --clean --wait-for-provisioning
On the other two servers, you start Fuse the same way, but instead of running fabric:create you run fabric:join with the relevant arguments to have them connect to the first server.
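As a rough sketch of the whole sequence (the exact fabric:join options vary by Fuse version, and the ZooKeeper password and host below are placeholders; check fabric:join --help on your installation):

    # on the first server
    bin/start
    bin/client -r 10
    fabric:create --clean --wait-for-provisioning

    # on each of the other two servers
    bin/start
    bin/client -r 10
    fabric:join --zookeeper-password <password> <first-server-host>:2181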
You'll notice that the administration console of the first server now shows the other two servers as well, and you will be able to start Fabric containers on any one of those three servers. You can also attach profiles to those containers.
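For example, from the console of the first server you could create a child container and attach a profile to it; the container and profile names here are only illustrative:

    # create a new Fabric container as a child of the root container
    fabric:container-create-child root child1
    # attach a profile so the container receives that profile's bundles and configuration
    fabric:container-add-profile child1 feature-camel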

Related

External Chaincode Pod on Kubernetes in Hyperledger Fabric v1.4

From what I have seen so far, in a Hyperledger Fabric v1.4 network that has been deployed using Kubernetes, the chaincode container and the peer container coexist within the same pod. An example can be found at this link: https://medium.com/#oap.py/deploying-hyperledger-fabric-on-kubernetes-raft-consensus-685e3c4bb0ad . Is it possible to have a deployment where the chaincode container and the peer container exist in two separate pods? If yes, how do I go about implementing this in Hyperledger Fabric v1.4? From my research, it is possible to do so in Hyperledger Fabric v2.1 using external chaincode launchers. However, I am currently restricted to Fabric v1.4.
As you point out, Fabric v2.0 introduced external builders, which are specifically targeted at allowing operators to choose how their chaincodes are built and executed. With external builders it's certainly possible to trigger creation of a separate pod to launch the chaincode in.
Unfortunately, in Fabric v1.4.x there is a strong dependency on Docker. You could potentially launch your Docker daemon in a separate privileged pod, authenticate to it securely via TLS, and launch your chaincodes there. You can see the Docker daemon connection configuration in the sample core.yaml.
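As a sketch, the relevant section of core.yaml looks roughly like the following; the remote endpoint and certificate paths are assumptions you would replace with your own values (the endpoint can also typically be overridden via the CORE_VM_ENDPOINT environment variable):

    vm:
      # point the peer at a remote Docker daemon instead of the local socket
      endpoint: tcp://docker-daemon.example.com:2376
      docker:
        tls:
          enabled: true
          ca:
            file: /etc/hyperledger/docker/ca.crt
          cert:
            file: /etc/hyperledger/docker/tls.crt
          key:
            file: /etc/hyperledger/docker/tls.key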
As a warning, I'm unaware of any users who are deploying peers that connect to a remote Docker daemon. I don't see any reason it should not work, but it's also not a well-tested path. As external builders are available in more recent versions of Fabric, I don't expect much community support for novel Docker configurations.

Is there any way to deploy multi-container application in K8S single node for production?

What I want to do is deploy a multi-container application:
on RHEL OS
as a Red Hat supportable product (if possible)
in a single-node K8s cluster (bare-metal machine)
I found several options, but I have concerns about each.
minikube, minishift, OKD, CodeReady Containers
First, they run in a VM, but what I want is to run on the host.
Second, their docs say they are not for production environments.
So, is there any PaaS for a single-node cluster that is suitable for production?
Docker, Docker-compose
The deployment target OS will probably be RHEL 8. I guess it is not a good idea to use Docker, because Red Hat products are moving away from Docker; even in the RHEL 8 repositories there is no docker RPM for el8 yet.
My questions are:
Is there any PaaS for a single-node cluster that is suitable for production?
If not, is docker-compose the best option?
As was already mentioned, you should not use a single-node setup in a production environment.
You should not do that because if that single server goes down, your service is offline. There is nothing to switch to, nothing that might continue the process that was being worked on.
If you still want to set up a single-node Kubernetes cluster, you can do that using kubeadm. I think this is as close to production grade as you can get.
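A minimal single-node sketch with kubeadm might look like this (assuming kubeadm, kubelet, and a container runtime are already installed on the RHEL host; the pod CIDR depends on the network plugin you choose):

    sudo kubeadm init --pod-network-cidr=10.244.0.0/16
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    # allow workloads to be scheduled on the single (master) node
    kubectl taint nodes --all node-role.kubernetes.io/master-
    # (on newer Kubernetes versions the taint key is node-role.kubernetes.io/control-plane)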
Other than that, as an alternative you can play with Installing Kubernetes with Minikube or Install a local Kubernetes with MicroK8s.
It's up to you which one you choose, but remember that this should not run as production; it should be a lab or test environment which, if it works as expected, can be migrated to a production-grade cluster with a few nodes.
As for a PaaS on a single node, there is Dokku:
Docker powered mini-Heroku. The smallest PaaS implementation you've ever seen.
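A rough idea of the Dokku workflow, with an assumed app name and server host:

    # on the server, after installing Dokku
    dokku apps:create myapp
    # on your workstation, push your code to deploy it
    git remote add dokku dokku@your-server:myapp
    git push dokku main   # or master, depending on your branch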
And if you would consider using a cloud PaaS, you can choose from AWS Cloud9, Azure App Service, or Google App Engine.
A single-node cluster is not recommended for production applications. Production apps need scalability, high availability, and fault tolerance, and you must have more than one node to get those features.

Minikube out of resources

Our company uses Kubernetes in all our environments, as well as on our local MacBooks using minikube.
We have many microservices, and most of them run on the JVM, which requires a large amount of memory. We have started to face an issue where we cannot run our stack on minikube because the local machine runs out of memory.
We thought about multiple solutions:
The first was to create a K8s cloud development environment: when a developer works on a single microservice on a local MacBook, the outbound traffic would be redirected into the cloud instead of the local minikube. But this solution creates new problems:
How will a pod inside the cloud dev environment send data back to the local developer machine? It's not just a single request/response scenario.
We have many developers, and they can overlap each other with different versions of each service that needs to be deployed in the cloud. (We could give each developer a separate namespace, but we would need a huge cluster to support that.)
The second solution was to use a tool like Skaffold or Draft to deploy our current code into the cloud development environment. That would solve issue #1, but again we see problems:
Slow development cycle: building a Java image, pushing it to the remote cloud, and waiting for initialization takes too much time for a developer to work effectively.
And we are still facing issue #2.
Another thought was: Kubernetes supports multiple nodes, so why not just add another node, a remote node that sits in the cloud, to our local minikube? The main issue is that minikube is a single-node solution. Also, we didn't find any resources for this on the web.
The last thought was to connect the minikube Docker daemon to a remote machine, so we would use minikube on the local machine but Docker would run the containers on a remote cloud server. No luck so far: minikube crashes when we try this, and we didn't find any resources for it on the web either.
Any thoughts on how to solve our issue? Thank you!

Azure Service Fabric - connect to local service fabric cluster from outside the VM it's running on?

We have a 5-node Azure Service Fabric Cluster as our main Production microservices hub. Up until now, for testing purposes, we've just been pushing out separate versions of our applications (the production application with ".Test" appended to the name) to that production SFC.
We're looking for a better approach, namely a separate test Service Fabric Cluster. But the issue comes down to costs. The smallest SFC you can create in Azure is 3 nodes. Further, you can't shut down an SFC when it's not being used, which we would also need to do to save on costs.
So now I'm looking at just spinning up a plain Windows VM in Azure and installing the local Service Fabric Cluster app (which allows just one-node setup). Is it possible to do this and be able to communicate with the cluster from outside the VM?
What you are trying to accomplish is to set up a standalone cluster. The steps to do it are documented in the docs.
Yes, you can access the cluster from outside the VM; in simple terms, enable access to the network and open the firewall ports.
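For example, a sketch of opening the default Service Fabric endpoints on the VM's Windows firewall; 19000 is the default client connection endpoint and 19080 the Service Fabric Explorer endpoint, but adjust to whatever your cluster manifest uses, and open the same ports in the Azure network security group as well:

    New-NetFirewallRule -DisplayName "SF client endpoint" -Direction Inbound -Protocol TCP -LocalPort 19000 -Action Allow
    New-NetFirewallRule -DisplayName "SF Explorer endpoint" -Direction Inbound -Protocol TCP -LocalPort 19080 -Action Allow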
Technically both deployments (the standalone guide and the DevCluster) are very similar; the main difference is that you have better control over the templates when following the standalone guide, whereas with the development setup you don't have many options and the whole process is automated.
PS: I would highly recommend having a UAT/staging cluster with exactly the same specs as the production version; the approach you described could be a good idea for a staging environment. Environments with different specs increase the risk of issues, mainly related to configuration and concurrency.

How to do multi-tiered application deployment using Docker?

I want to use the following deployment architecture:
One machine running my webserver(nginx)
Two or more machines running uwsgi
PostgreSQL as my DB on another server.
All three are different host machines on AWS. During development I used Docker and was able to run all three on my local machine. But I am clueless now that I want to split them across three separate hosts and run them. Any guidance, clues, or references will be greatly appreciated. I would prefer to do this using Docker.
If you're really adamant about keeping the services separate on individual hosts, then there's nothing stopping you from still running your containers on a Docker-installed EC2 host for nginx/uwsgi. You could even use a CoreOS AMI, which comes with a nice, secure Docker instance pre-loaded (https://coreos.com/os/docs/latest/booting-on-ec2.html).
For the database use PostgreSQL on AWS RDS.
If you're running containers, you can also look at AWS ECS, which is Amazon's container service. That would be my initial recommendation, but I saw that you wanted all these services to be on individual hosts.
You can use docker stack to deploy the application to a Swarm. Join the other two hosts as workers and use the placement option below:
https://docs.docker.com/compose/compose-file/#placement
    deploy:
      placement:
        constraints:
          - node.role == manager
Change the constraint to target the manager or a specific worker node (worker1, ..., workerN); this will restrict each service to run on an individual host.
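As an illustrative sketch (service names, images, and node labels are assumptions; you would first label each node, e.g. docker node update --label-add tier=web <node-name>):

    version: "3.7"
    services:
      nginx:
        image: nginx:latest
        deploy:
          placement:
            constraints:
              - node.labels.tier == web
      uwsgi:
        image: your-org/uwsgi-app:latest   # placeholder for your application image
        deploy:
          replicas: 2
          placement:
            constraints:
              - node.labels.tier == app
      db:
        image: postgres:12
        deploy:
          placement:
            constraints:
              - node.labels.tier == db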
You can also make this more secure by using a VPN if you wish.