Is there a way to add a service to the System application of Service Fabric, or is that a no-go scenario? What I'd like to do is have a metadata service about the other applications in the cluster, which can be queried.
The system services are built into the cluster and are not extensible. You can certainly build an application that offers centralized metadata about other applications/services in the cluster but you would need to maintain that metadata yourself.
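If it helps, here is a minimal sketch (Java, hypothetical setup) of how such a custom metadata service could keep itself up to date by asking the cluster's HTTP management gateway (default port 19080) which applications are deployed; security/TLS handling for a protected cluster is omitted:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Sketch: query the Service Fabric HTTP gateway for the list of deployed
 * applications, then enrich that list with your own metadata and expose it
 * through your service's API.
 */
public class ApplicationCatalog {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:19080/Applications?api-version=6.0"))
                .GET()
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        // JSON list of applications (name, type, version, status).
        System.out.println(response.body());
    }
}
```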
Related
We are evaluating Dapr for our microservice framework. We will be using Kubernetes. One of the advertised selling points for Dapr is service invocation and service discovery. Doesn't K8s already offer service discovery out of the box?
Short answer: Yes (Kubernetes offers Service Discovery)
While there are several patterns (and tools implementing those patterns) for service discovery, Kubernetes offers service discovery at its core through Service objects, so you don't need any particular extra technology or tool for a basic container-managed runtime environment.
You can read more on Kubernetes Services in the official documentation.
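For example, from inside the cluster a workload behind a Service is reachable at the DNS name Kubernetes assigns to it. A minimal sketch, where the service name, namespace, port, and path are hypothetical:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Sketch: call another workload through its Service DNS name
 * (<service>.<namespace>.svc.cluster.local) -- no extra discovery
 * library is involved.
 */
public class CallViaKubernetesService {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://orders.default.svc.cluster.local:8080/api/orders/42"))
                .GET()
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```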
It is worth noting that Dapr is a platform-agnostic, portable runtime that does not depend on Kubernetes or its core service discovery feature.
It offers more features than simply discovering your services (it is often compared to service mesh tools because at first glance they look like the same thing); a minimal invocation sketch follows the list:
It offers transparent and secured service-to-service calls
It allows a publish-subscribe communication style
It offers a way to register triggers and resource bindings (allowing for function-as-a-service development style)
It offers observability out-of-the-box
...
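As an illustration of the service invocation building block: the caller only talks to its local Dapr sidecar, which resolves and secures the call to the target app. A minimal sketch, where the app id "orders" and the method name are hypothetical and 3500 is the sidecar's default HTTP port:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Sketch of Dapr service invocation: GET
 * http://localhost:<daprPort>/v1.0/invoke/<app-id>/method/<method-name>
 */
public class DaprInvokeExample {
    public static void main(String[] args) throws Exception {
        // The sidecar usually injects its port via DAPR_HTTP_PORT.
        String daprPort = System.getenv().getOrDefault("DAPR_HTTP_PORT", "3500");
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:" + daprPort + "/v1.0/invoke/orders/method/getOrder"))
                .GET()
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```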
I am not yet clear on how Service Fabric handles deployment.
Since the applications are created in a single Visual Studio solution, let me ask in terms of file formats for better understanding.
In a single Visual Studio solution, there are
a single .sln
a single .sfproj
multiple .csproj(s)
As I see these files, multiple services (.csproj files) are bound to a single Service Fabric application (.sfproj file), which sits under a single solution file (.sln).
Can I individually deploy a .csproj project to the Service Fabric cluster, or are these now bound to a .sfproj so that I have to deploy multiple services (each created with .csproj and bound to .sfproj) together?
The answer to your question is yes and no at the same time. Let me explain it in detail.
Can I individually deploy .csproj project to the Service Fabric cluster
The answer is no, you can't deploy a single service: in Service Fabric terms the minimal unit of deployment is the application (the .sfproj one). So no matter what has changed, you still deploy the application.
But, as we all understand, performing a full deployment of all the application's services is painful: it consumes a lot of time and causes a lot of disturbance in the cluster. To avoid this massive update, every Service Fabric component has its own version (take a closer look at ServiceManifest.xml and ApplicationManifest.xml). So each time the application is deployed to the cluster, Service Fabric goes through all the services included in the application and updates only the components that have changed (i.e. have a different version).
This approach allows you to perform updates of very high granularity, e.g. you can update only the <Config /> package of a single service.
I have set up a Hyperledger Fabric V1.0 network by following the Hyperledger Fabric docs, and using the fabric-sdk-java client I am able to communicate with the network from my Java application. Everything is working fine in the development setup, but I still don't have a clear picture of a production-level implementation. I'm looking for suggestions on the following points to take it live in production.
Is it possible to use this setup for production? If so, how can I build my network from this docker-compose setup, and what options are available for production hosting of the network?
If it is possible to set this up in production, should I run this docker-compose setup on each of the peer systems? If so, how do I configure docker-compose.yaml to define peers/organisations that live on different systems?
I have found the Bluemix Blockchain Service as an alternative, but it has high monthly charges. Is there any alternative for deploying my own Hyperledger Fabric V1.0 network, defining my own peers and organizations?
I think that for a production deployment you'd likely want to use Swarm or Kubernetes. See Hyperledger Cello, for instance. You will also want a process and automation for managing the code going forward: updating images, chaincode, etc. You might also want to automate more of the on-boarding process, which at present is rather bare bones.
As noted above, the Docker Compose file is designed for a single system. You'd likely want to use Swarm or Kubernetes to manage nodes on different systems, and you want decentralized operations when you are bringing multiple entities into a consortium where the members want to choose where they run their nodes.
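For what it's worth, wherever the peers end up running, the fabric-sdk-java side only needs their network addresses. A rough sketch, in which all hostnames, ports, and the identity-loading helper are hypothetical and TLS properties are omitted:

```java
import org.hyperledger.fabric.sdk.Channel;
import org.hyperledger.fabric.sdk.HFClient;
import org.hyperledger.fabric.sdk.Orderer;
import org.hyperledger.fabric.sdk.Peer;
import org.hyperledger.fabric.sdk.User;
import org.hyperledger.fabric.sdk.security.CryptoSuite;

public class RemoteNetworkClient {

    public static void main(String[] args) throws Exception {
        HFClient client = HFClient.createNewInstance();
        client.setCryptoSuite(CryptoSuite.Factory.getCryptoSuite());

        // Hypothetical helper: load an org admin identity (enrollment cert + key).
        User admin = loadAdminUser();
        client.setUserContext(admin);

        // Peers and the orderer are addressed by URL, so they can run on any
        // host/VM, whether started by Compose, Swarm, or Kubernetes.
        Peer peer0 = client.newPeer("peer0.org1", "grpc://host-a.example.com:7051");
        Peer peer1 = client.newPeer("peer0.org2", "grpc://host-b.example.com:7051");
        Orderer orderer = client.newOrderer("orderer", "grpc://host-c.example.com:7050");

        Channel channel = client.newChannel("mychannel");
        channel.addPeer(peer0);
        channel.addPeer(peer1);
        channel.addOrderer(orderer);
        channel.initialize();
    }

    private static User loadAdminUser() {
        // Hypothetical: return a User implementation backed by your org's MSP material.
        throw new UnsupportedOperationException("load your own enrollment material here");
    }
}
```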
There is a developer sandbox offering that you can deploy to IBM's Container Service (Kubernetes), but you won't get the benefits of the crypto acceleration, HSM, and added security of the LinuxONE platform on which IBM deploys the IBM Blockchain Platform. The good things in life may be free, but I would want the added value of a vendor-provided cloud offering like IBM Blockchain Platform for my production system. YMMV.
I want clients to be able to create their own stateless services and upload/publish them to join an existing cluster. Is this doable? I understand that I need to update the application manifests dynamically, but I'm not sure how, or whether this is possible programmatically without side effects on the Service Fabric runtime processes.
The workflow is to upload the code (zipped file maybe or whatever) via an API gateway.
The first thing to keep in mind is that you do not deploy individual services to a Service Fabric cluster. You deploy applications, which can contain one or more services.
So the key question to ask is whether you need the new code to be integrated with an existing application type or not. It sounds like what you're trying to do is just enable multiple clients to deploy independent applications on a shared Service Fabric cluster, in which case you would not be modifying existing application types, but deploying entirely new ones.
Thus, you would need your API gateway to dynamically generate application and service manifests, combine them with the client-provided code to create an application package, then copy, register, and create those applications in the cluster. As far as the Service Fabric runtime is concerned, this looks no different than if you had deployed an application type built and packaged in Visual Studio. Processes running existing applications are not impacted.
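As a rough sketch of the register-and-create step (once the gateway has generated the package and copied it to the cluster's image store), the cluster's HTTP management gateway can be called directly. The endpoint shapes follow the Service Fabric REST API on the default port 19080; the type name, version, and build path below are hypothetical:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Sketch: provision (register) an application type from its image store path,
 * then create a named application instance of that type.
 */
public class DynamicAppDeployer {
    private static final HttpClient HTTP = HttpClient.newHttpClient();
    private static final String CLUSTER = "http://localhost:19080";

    public static void main(String[] args) throws Exception {
        // 1. Register the application type from the uploaded package.
        post("/ApplicationTypes/$/Provision?api-version=6.0",
             "{\"ApplicationTypeBuildPath\": \"ClientAppTypeBuild\"}");

        // 2. Create a named application instance of that type.
        post("/Applications/$/Create?api-version=6.0",
             "{\"Name\": \"fabric:/ClientApp1\","
           + " \"TypeName\": \"ClientAppType\","
           + " \"TypeVersion\": \"1.0.0\","
           + " \"ParameterList\": []}");
    }

    private static void post(String path, String jsonBody) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(CLUSTER + path))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonBody))
                .build();
        HttpResponse<String> response = HTTP.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(path + " -> HTTP " + response.statusCode());
    }
}
```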
I am new to JBoss and want to know whether a microservices architecture is a good choice on JBoss. I cannot change the application server, as it was decided by the client's architect and I have no choice.
I want to know whether we can develop microservices with JBoss as the underlying application server.
I understand Spring Boot comes with an embedded Tomcat container, which makes it easy to stop and start, and to deploy an individual service with no impact on other services.
However, will that architecture work with JBoss too?
Please suggest.
Thanks,
I actually did a feasibility study to investigate the solution you mentioned. My conclusion is that it is totally viable to use microservice principles on a JBoss platform.
I used the combination of JBoss, Spring Boot, and Netflix components to create a successful microservice stack. I did that personally to find a solution to the transaction problem (multiple microservices collaborating) and to the fan-out problem, which is caused by excessive network communication and serialization costs.
I also wrote a blog post about the subject; you might find more details there if you like. Here is the link:
Micro Services – Fan Out, Transaction Problems and Solutions with Spring Boot/JBoss and Netflix Eureka
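For context, here is a minimal sketch (Spring Boot 2.x assumed, with spring-boot-starter-web and WAR packaging) of a Spring Boot microservice arranged so that JBoss/WildFly can bootstrap it instead of the embedded Tomcat; the class and endpoint names are hypothetical:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

/**
 * Sketch: extending SpringBootServletInitializer lets the app server
 * (JBoss/WildFly) deploy this service as a WAR, while main() keeps it
 * runnable standalone on the embedded container during development.
 */
@SpringBootApplication
@RestController
public class CatalogServiceApplication extends SpringBootServletInitializer {

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
        return builder.sources(CatalogServiceApplication.class);
    }

    public static void main(String[] args) {
        SpringApplication.run(CatalogServiceApplication.class, args);
    }

    // Illustrative endpoint only.
    @GetMapping("/catalog/ping")
    public String ping() {
        return "catalog-service up";
    }
}
```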
By the definition of what microservices are, conceptually yes. A microservice is an independent unit: it can be deployed, updated, and undeployed independently without affecting any unrelated part of your application. That would mean having multiple JBoss instances for your microservices, with your application calling them through some sort of gateway or other mechanism, depending on your use case. If you plan to deploy all your microservices in the same JBoss instance, it defeats the very purpose of a microservice. Given that, JBoss wouldn't be the right choice for microservice deployment, because it will only make your deployment infrastructure quite heavy.
Depending on what your client's requirements are, you could possibly keep your webapp in JBoss and deploy your microservice containers separately.
It depends on what you want to get out of microservices.
Some of the developers at my organisation looked at Spring Boot but concluded that it's best run as a standalone container rather than inside JBoss; otherwise you've effectively got two container frameworks competing (Spring Boot and JBoss), with a range of associated issues.
Deploying microservices in JBoss won't give you the same flexibility as a true container system like Docker. With Docker you create standalone packages for your microservices that contain all the code, system tools, runtime environment, etc., and they can be as small or large as they need to be. JBoss, on the other hand, is a large container running a single JVM, designed to hold multiple applications. The level of isolation is not the same, and it's not efficient to use JBoss as a container for a single microservice, so you have to size each instance appropriately and then deploy enough to it to make use of the resources it has available.
If you're looking at microservices as a way to gain greater control over service lifecycle management (deployment, versioning, deprecating, etc.) as opposed to an automated, web-scale component deployment model a la Netflix or LinkedIn, you could do this adequately with JBoss.
I'm actually looking to do something along these lines here. It won't be true microservices, but by packaging and deploying individual, properly versioned APIs rather than monolithic applications, and by following most of the other principles of microservice development (componentisation, business-function focus, statelessness, etc.), we will hopefully be better able to manage and benefit from our APIs.
Our APIs will all sit behind an API gateway and load balancer, so we can choose how the microservices are distributed across the JBoss instances and balance resource usage as required. Note that our organisation is relatively small with relatively low and predictable traffic, so this approach should work fine for us. Your needs, however, may be different.