How to deploy a Hyperledger Fabric V1.0 network at production level? - ibm-cloud

I have set up a Hyperledger Fabric V1.0 network by following the Hyperledger Fabric docs, and using the fabric-sdk-java client I am able to communicate with the network from my Java application. Everything is working fine in the development setup, but I still don't have a clear picture of a production-level implementation. I'm looking for suggestions on the following points to take it live in production.
Is it possible to use this setup for production? If so, how can I build my network using this docker-compose setup? What options are available for production hosting of the network?
If a production setup is possible, should I run this docker-compose setup on each of the peer systems? If so, how do I configure the docker-compose.yaml to define the peers/organisations that live on different systems?
I have found the Bluemix Blockchain Service as an alternative, but it has high monthly charges. Is there another way to deploy my own Hyperledger Fabric V1.0 network, defining my own peers and organizations?

I think that for a production deployment, you'd likely want to use Swarm or Kubernetes; see Hyperledger Cello, for instance. You will also want a process and automation for managing the code going forward: updating images, chaincode, and so on. You might also want to automate more of the onboarding process, which at present is rather bare-bones.
As noted above, Docker Compose is designed for a single system. You'd likely want Swarm or Kubernetes to manage nodes on different systems, and you'll want decentralized operations when you bring multiple entities into a consortium whose members want to choose where they run their nodes.
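To make the multi-host direction concrete, here is a minimal sketch using a Docker Swarm overlay network. The network name, service name, and image tag are illustrative assumptions, not a prescribed topology; each organization would run its own compose file on its own host, joined to the shared network:

```yaml
# Hypothetical extract of one org's docker-compose.yaml.
# Assumes the hosts have been joined into a swarm and a shared,
# attachable overlay network was created once on a manager node:
#   docker network create --driver overlay --attachable fabric-net
version: "3"
services:
  peer0-org1:
    image: hyperledger/fabric-peer:x86_64-1.0.0
    environment:
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
    ports:
      - "7051:7051"
    networks:
      - fabric-net
networks:
  fabric-net:
    external: true
```

With every peer attached to the same overlay network, peers on different machines can resolve and reach each other, removing the single-host assumption baked into the development compose files.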
There is a developer sandbox offering that you can deploy to IBM's Container Service (Kubernetes), but you won't get the benefits of the crypto acceleration, HSM, and added security of the LinuxONE platform on which IBM deploys the IBM Blockchain Platform. The good things in life may be free, but I would want the added value of a vendor-provided cloud offering like the IBM Blockchain Platform for my production system. YMMV.

Related

How to deploy code to IBM Cloud using DevOps approach?

Can somebody shed some light on IBM Cloud deployment tools and platforms?
I am new to this, so I am reading the docs and watching videos, and I am still confused.
What I want to achieve is the typical scenario: fetch code from a repo, build it, test it, deploy it to the cloud. I have found several strategies/platforms for achieving that, but I still can't see the differences and the advantages/disadvantages between them.
So we have:
Toolchain
Cloud Foundry
Code Engine
Continuous Delivery (service)
and maybe something more? :)
I was watching a "Cloud Foundry explained" video, and the presenter says that if you don't want to care about the lower layers, like networking, security, and containers, you can choose to deploy using the K8s service. So a supposedly fully automatic offering now has parts you handle yourself? For me it's a total mix of everything together, and I don't know which tool/platform/strategy to use.
Any comment is appreciated.
It all depends on your requirements.
IBM DevOps/Continuous Delivery/Toolchains is a set of services that can build and deploy your application to a given runtime. You can find various tutorials here: https://www.ibm.com/cloud/architecture/courses/toolchain-tutorials. These tutorials show you various things that you can embed in your build pipelines (like code scanning with CRA, image scanning, signing, etc.).
These runtimes can be different depending on your requirements:
Cloud Foundry, where you deploy an app using a buildpack; but this is a fading technology, so I wouldn't recommend it
as a Docker image in a K8s/OpenShift cluster - use this if your organization is planning on or already utilizing Docker/Kubernetes/OpenShift; you will need to create the K8s/OpenShift cluster first
as a serverless app, using IBM Code Engine
If you are just starting out and want to deploy a simple, single app to the cloud, I'd consider using IBM Code Engine and not investigate Toolchains for now. Check out the basic demo here: How to deploy source code with IBM Cloud Code Engine.
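For a taste of the Code Engine flow from the CLI, here is a rough sketch; the project and app names are placeholders, and the flags may evolve, so check `ibmcloud ce --help` for the current syntax:

```sh
# Create a project, then build and deploy an app straight from local source.
ibmcloud ce project create --name demo-project
ibmcloud ce application create --name my-app --build-source .
```

Code Engine builds the image for you and gives the app an HTTPS URL, which is why it is the low-ceremony option for a single app.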

What benefits does Cloud Composer provide over a Helm chart and GKE?

As I dive into the world of Cloud Composer, Airflow, Google Kubernetes Engine, and Kubernetes I've not yet found a good answer to what exactly makes Cloud Composer better than Helm and GKE.
Here are some things I've found that could be unique to Composer but mostly seem like they could be handled by GKE.
On their homepage:
End-to-end integration with Google Cloud products including BigQuery, Dataflow, Dataproc, Datastore, Cloud Storage, Pub/Sub, and AI Platform gives users the freedom to fully orchestrate their pipeline.
On the features page:
Identity-Aware Proxy protects the interface
Cloud Composer associates a Cloud Storage bucket with the environment. The associated bucket stores the DAGs, logs, custom plugins, and data for the environment.
The downsides of Composer I've seen include:
It takes many hours to spin up a new instance
It doesn't support Kubernetes Executor
It is risky to change the underlying GKE config because it could be changed back by a composer update
Auto-scaling frequently produces errors, but these are documented as known issues
Upgrading environments is still beta
To be clear, I'm not saying Cloud Composer is bad. I'm just having trouble seeing why people like it. When I've asked folks why it is better than Helm + GKE, they haven't had any compelling answers, even though they can tell many stories of Composer being unpredictable and having lots of issues.
Are you comparing the same things?
On one side, with GKE, you have a container orchestrator. You declare what you want, and it deploys and maintains the stability of the cluster according to the declared configuration. This configuration can be packaged with Helm to make it easier to write. Because you deploy containers, you can use whatever language you want in your services.
On the other side, you have a workflow manager, with a scheduler, retry policies, parallel tasks, and context forwarding. You write DAGs in Python (only!) and you have operators to interact with external products/services. It's mainly designed for data processing and is used a lot by data science and data engineering teams.
Note: Cloud Composer is deployed on top of GKE (scheduler and workers), Redis, App Engine, and Cloud SQL.
You are comparing two different worlds: the Ops world (GKE/Helm) and the App/Data world (Composer/Airflow). Have a look at this video.
Update 1:
My bad, I misunderstood! Anyway, personally I don't want to manage things myself: a cluster, K8s updates, VM patching, replicas, snapshots, backup/restore, ...
If someone can do this for me, I prefer that, and managed services are perfect for me!
Do you ask yourself this question about Cloud SQL versus a database you manage yourself on a Compute Engine instance? If not (because Cloud SQL solves a lot of boring issues), my opinion is the same for Composer.
But it's an opinion; I haven't tested both and compared the performance, cost, and ease of use.

local development of microservices, methods and tools to work efficiently

I work with team members to develop a microservices architecture, but I have a problem with the way we work. I have too many microservices, and when I run them during development they consume too much memory, even on a good workstation. So I use Docker Compose to build and execute my MSA, but it takes a long time. One often hears about how to technically build an MSA, but rarely about how to work on it efficiently. What do you do in this case? How do you work? Do you use tools or anything else to improve and facilitate your development? I've heard about Skaffold, but I don't see how it differs from Docker Compose, or from a simple CI/CD pipeline in a cluster environment, for example. Feel free to give tips and your opinion. Thanks.
I've had a fair amount of experience with microservices and local development, and here are some approaches I've seen:
Run everything locally on Docker or K8s. If using K8s, a tool like Skaffold (see the sketch after this list) can make it easier to run and debug a service locally in the IDE while putting it into your local K8s so that it can communicate with other K8s services. It works OK, but running more than 4 or 5 full services locally in K8s or Docker requires dedicating a substantial amount of CPU and memory.
Build mock versions of all your services. Use those locally and for integration tests. The mock services are intentionally much simpler and therefore it's easier to run lots of them locally. The obvious downside is that you have to build a mock version of every service, and you can easily miss bugs that are caused by mock services not behaving like the real service. Record/replay tools like Hoverfly can help in building mock services.
Give every developer their own Cloud environment. Run most services in the cloud but use a tool like Telepresence to swap locally running services in and out of the cloud cluster. This eliminates the problem of running too many services on a single machine but can be spendy to maintain separate cloud sandboxes for each developer. You also need a DevOps resource to help developers when their cloud sandbox gets out of whack.
Eliminate unnecessary microservice complexity and consolidate all your services into 1 or 2 monoliths. Enjoy being able to run everything locally as a single service. Accept the fact that a microservice architecture is overkill for most companies. Too many people choose a microservice architecture upfront before their needs demand it. Or they do it out of fear that they will need it in the future. Inevitably this leads to guessing how they should decompose the system into many microservices, and getting the boundaries and contracts wrong, which makes it just as hard or harder to fix in the future compared to a monolith. And they incur the costs of microservices years before they need to. Microservices make everything more costly and painful, from local development to deployment. For companies like Netflix and Amazon, it's necessary. For most of us, it's not.
I prefer option 4 if at all possible. Otherwise option 2 or 3 in that order. Option 1 should be avoided in my opinion but it is probably the option everyone tries first.
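On the Skaffold point from approach 1: a Skaffold config is a small YAML file that ties an image build to your K8s manifests, and `skaffold dev` then rebuilds and redeploys on every code change. This is a hedged sketch with placeholder image names and manifest paths; schema versions change between Skaffold releases, so check `skaffold init` for your version:

```yaml
# Hypothetical skaffold.yaml: rebuild the my-service image and
# re-apply the manifests in k8s/ whenever source files change.
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: my-service
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml
```

The difference from Docker Compose is that this targets a real (local or remote) Kubernetes cluster and automates the build-push-deploy loop, rather than just starting containers on one machine.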
In GKE, assuming you have a private cluster, you can use port forwarding while connected to the GKE environment through the CLI. Create a script that forwards your local ports to the GKE environment. I believe the Services tab in your cluster is where you will find the "port forwarding" button that gives you the command. This way you can work on one microservice with all of its traffic routed to the actual dev cluster, which prevents you from having to run multiple projects at the same time.
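As an illustration, such a forwarding script might look like the following; the service names and ports are made up, and it assumes kubectl is already pointed at the dev cluster (e.g. via gcloud container clusters get-credentials):

```sh
# Hypothetical sketch: expose two cluster services on local ports so a
# locally running microservice can call them as if they were local.
kubectl port-forward service/orders-api 8081:80 &
kubectl port-forward service/billing-api 8082:80 &
wait
```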
I would say create a staging environment with all services running, curated specifically for development. E.g., if it's deployed using K8s, you expose the ports you need for your specific microservice using a NodePort service. And have a DevOps pipeline that always keeps this environment up to date with the code.
This environment should always be built from the master branch. Whether you have a single repo for the app or a repo per service, it's a fair assumption that it will always have the most recent code when you create your dev/feature branch.
Then, when you want to develop a feature or fix a bug, you check out your microservice. If you are following the microservice pattern appropriately, that single microservice should be an executable, have its own Dockerfile, and be debuggable from your local IDE. Many enterprises follow this pattern and enforce at the organization level that the master branch is always production-ready and high quality.
Let's say you discover a bug in some other microservice running in the K8s cluster. You will very likely be tempted to find a way to debug that remote microservice. However, that should be written up as a bug for the team that owns the microservice. If your team owns it, then you fix it first and then start working on your feature. If you really think you need to debug multiple microservices at once, then you either have really tight coupling between the services or you don't really need the microservice architecture.

Material on building a REST API from within a Docker container

I'm looking to build an API on an application that is going to run in its own Docker container. It needs to work with some applications via its REST APIs. I'm new to development and don't understand the process very well. Can you share the broad steps necessary to build and release the APIs so that my application runs safely within Docker while whatever communication needs to happen externally works out well?
For context: I'm going to be working on a Google Compute Engine VM instance, and the application I'm building is a Hyperledger Fabric program written in Go.
Links to reference material and code would also be appreciated.
REST API implementation is very easy in Go. You can use the built-in net/http package. Here's a tutorial that will help you understand its usage: https://tutorialedge.net/golang/creating-restful-api-with-golang/
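To give a feel for it, here is a minimal sketch using only the standard library; the route and port are arbitrary choices, not something the tutorial mandates:

```go
// Minimal REST endpoint using net/http from the standard library.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type status struct {
	OK bool `json:"ok"`
}

func main() {
	// One JSON endpoint; a real API would register more handlers.
	http.HandleFunc("/api/v1/health", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(status{OK: true})
	})
	// Listening on all interfaces matters inside a container: publish
	// the port with `docker run -p 8080:8080 ...` to reach it externally.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```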
Note: if you are planning to develop a production server, the default HTTP client is not recommended: it has no timeout, so heavy call volumes with slow responses can knock the server down. In that case, you have to use a custom HTTP client as described here: https://medium.com/@nate510/don-t-use-go-s-default-http-client-4804cb19f779
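The gist of that advice is that http.DefaultClient has a zero timeout, so a hung upstream call can pile up goroutines until the service falls over. A hedged sketch of a client with explicit deadlines follows; the specific durations are arbitrary:

```go
// A custom HTTP client with timeouts, instead of the zero-timeout default.
package main

import (
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 10 * time.Second, // overall per-request deadline
		Transport: &http.Transport{
			DialContext: (&net.Dialer{
				Timeout: 5 * time.Second, // TCP connect timeout
			}).DialContext,
			TLSHandshakeTimeout: 5 * time.Second,
		},
	}
	resp, err := client.Get("https://example.com/") // placeholder URL
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```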
For learning Docker I would recommend the Docker docs; they're very good and cover a lot of ground. Docker Swarm and orchestration are useful things to learn, but most people aren't using Docker Swarm anymore and use things like Kubernetes instead. Same principles, different tech. I would definitely go through this website: https://docs.docker.com/ and try things out on your own computer. Then just practice by looking at other people's Dockerfiles and building your own. A good understanding of Linux will definitely help with installing packages and so on.
I haven't used Go myself, but I suspect it shouldn't be too hard to deploy into a Docker container.
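Indeed, a Go program compiles to a single static binary, so the Dockerfile tends to be small. A rough multi-stage sketch, assuming a go.mod at the repo root and with image tags and paths as placeholders:

```dockerfile
# Hypothetical multi-stage build: compile in a Go image, ship only the binary.
FROM golang:1.20 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /server .

FROM alpine:3.18
COPY --from=build /server /server
EXPOSE 8080
ENTRYPOINT ["/server"]
```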
The last production step of deployment will be similar whatever you're using, Docker or no Docker. The VM will need a web server like Apache or nginx to expose the ports you wish to make public; then you run the Docker container or the Go server independently, and you'll have your system!
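For the nginx route, the reverse-proxy part is only a few lines; the domain and ports below are placeholders for whatever your container publishes:

```nginx
# Hypothetical nginx server block: forward public port 80 to the
# Go server (or published container port) on localhost:8080.
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```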
Hope this helps!

How to integrate the Watson IoT service with Hyperledger Fabric?

Since IBM doesn't provide a free plan for IBM Blockchain anymore, I came up with the solution of integrating Watson IoT with Hyperledger Fabric instead of IBM Blockchain.
I found this document; it says that Watson IoT Platform blockchain integration supports connecting to both IBM Blockchain fabrics and Hyperledger fabrics
(in the section "Config Blockchain IBM environment").
But I cannot find any guidelines.
Can anyone help?
I have several related comments:
1) The page you linked to shows an early version of the IoT Contract Platform that I authored. I have not been funded to port it to Hyperledger v1, so it must be considered deprecated at this time. Instead, I suggest that you get comfortable with Hyperledger Composer, which provides a full development environment and a powerful data modelling language.
https://hyperledger.github.io/composer/introduction/introduction.html
2) Which leads me to IBM's free container service. If you want to get started with IBM Blockchain on Bluemix, you can create a free Kubernetes cluster using the instructions found here.
https://ibm-blockchain.github.io/
The "create_all" script gives you a working fabric on a lite cluster (as in free) with hyperledger composer running (with playground) and with a copy of the example02 ubiquitous sample Go chaincode running on the same channel.
https://github.com/IBM-Blockchain/ibm-container-service
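For orientation, a Fabric 1.0 Go chaincode in the example02 style boils down to an Init/Invoke pair over the shim API. The skeleton below is a simplified sketch with invented function and key names, not the actual example02 source:

```go
// Skeleton of a Fabric 1.0 Go chaincode: a key/value set and get.
package main

import (
	"fmt"

	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

type SimpleChaincode struct{}

// Init runs when the chaincode is instantiated on a channel.
func (t *SimpleChaincode) Init(stub shim.ChaincodeStubInterface) pb.Response {
	return shim.Success(nil)
}

// Invoke dispatches on the function name sent by the client SDK.
func (t *SimpleChaincode) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
	fn, args := stub.GetFunctionAndParameters()
	switch fn {
	case "set":
		if len(args) != 2 {
			return shim.Error("expecting a key and a value")
		}
		if err := stub.PutState(args[0], []byte(args[1])); err != nil {
			return shim.Error(err.Error())
		}
		return shim.Success(nil)
	case "get":
		if len(args) != 1 {
			return shim.Error("expecting a key")
		}
		val, err := stub.GetState(args[0])
		if err != nil {
			return shim.Error(err.Error())
		}
		return shim.Success(val)
	}
	return shim.Error("unknown function: " + fn)
}

func main() {
	if err := shim.Start(new(SimpleChaincode)); err != nil {
		fmt.Printf("error starting chaincode: %s\n", err)
	}
}
```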
EDIT: As for the IoT connection, you can use Node-RED to create IoT apps that catch your events on a topic and then forward them to the blockchain. This is for experimentation, of course, but you will get the idea of how such an application must be written.
If you want to follow my "partial state as event" pattern in Composer contracts, you can look at the deep-merge npm project and mimic that code while we wait for the Node-based chaincode that is coming in Fabric 1.1, at which point I hope we can import it as normal in our business network JS files.
Using deep-merge requires that you create your own transactions for create, replace, update, and delete in your smart contracts, but these are straightforward. The bonus is that it then becomes easy to emit custom events telling listening applications what happened.
I think you will like these two technologies together.
Instead of using IBM Blockchain, you could create your own blockchain using Hyperledger Fabric. You have the documentation for it here. I suggest you start reading from the Building Your First Network chapter.
Then you can integrate your blockchain with Watson IoT.