Can I deploy/add a Service Fabric stateless service to participate in an existing cluster? - azure-service-fabric

I want clients to be able to create their own stateless services and upload/publish them to join an existing cluster. Is this doable? I understand that I need to update the application manifests dynamically, but I'm not sure how, or whether this is possible programmatically without side effects on the Service Fabric runtime processes.
The intended workflow is to upload the code (a zipped file, perhaps) via an API gateway.

The first thing to keep in mind is that you do not deploy individual services to a Service Fabric cluster. You deploy applications, which can contain one or more services.
So the key question to ask is whether you need the new code to be integrated with an existing application type or not. It sounds like what you're trying to do is just enable multiple clients to deploy independent applications on a shared Service Fabric cluster, in which case you would not be modifying existing application types, but deploying entirely new ones.
Thus, you would need your API gateway to dynamically generate application and service manifests, combine them with the client-provided code to create an application package, then copy, register, and create those applications in the cluster. As far as the Service Fabric runtime is concerned, this looks no different than if you had deployed an application type built and packaged in Visual Studio. Processes running existing applications are not impacted.
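To make the copy/register/create step concrete, here is a minimal sketch using the .NET FabricClient API; it is an illustration rather than your gateway's actual code, and the package path, image store path, application type name, and application name are placeholders for whatever your gateway generates.

```csharp
using System;
using System.Fabric;
using System.Fabric.Description;
using System.Threading.Tasks;

class Deployer
{
    static async Task Main()
    {
        // Connects to the local cluster; pass endpoints/credentials for a remote cluster.
        var fabricClient = new FabricClient();

        // Placeholder paths/names produced by the gateway after it generates the manifests
        // and combines them with the client-provided code.
        string localPackagePath = @"C:\packages\ClientAppPkg"; // manifests + code packages
        string imageStorePath = "ClientAppPkg";                 // relative path in the image store

        // 1. Copy the application package to the cluster's image store.
        fabricClient.ApplicationManager.CopyApplicationPackage(
            "fabric:ImageStore", localPackagePath, imageStorePath);

        // 2. Register (provision) the application type contained in the package.
        await fabricClient.ApplicationManager.ProvisionApplicationAsync(imageStorePath);

        // 3. Create a named application instance of that type.
        await fabricClient.ApplicationManager.CreateApplicationAsync(
            new ApplicationDescription(
                new Uri("fabric:/ClientApp"), "ClientAppType", "1.0.0"));
    }
}
```

These are essentially the same copy/register/create steps that the Visual Studio publish script performs, which is why the runtime treats a dynamically assembled package no differently from one built and packaged in Visual Studio.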

Related

How to manage dependencies between different applications' services when deploying on Kubernetes?

There are two web applications, WebAppA and WebAppB. Each web application depends on a Postgres database. We want to ship these applications to a customer who will deploy them on their own Kubernetes cluster.
We want to create three packages: "WebAppA", "WebAppB", and "DataStore". Each web app is itself made up of multiple services, which I am omitting for the sake of simplicity.
We want to provide an apt-get/brew/yum kind of experience, so that the customer can deploy one or both applications à la carte. Most importantly, the deployment should detect whether the common "DataStore" package is already running and avoid spinning up yet another Postgres instance.
Is there any way to package applications for Kubernetes that makes installation easy and handles dependencies?
Of course! One way to start would be using Helm charts. You can read more about them here.
Helm defines dependency relationships declaratively using charts, letting you manage dependencies simply by maintaining a few YAML manifests. It also lets you host personalised repositories from which your charts can be fetched. It's really nice.

Containers (Kubernetes) vs Web service (REST APIs)

I have a single-screen desktop application developed in Java. It is a file-conversion tool: given a file in .abc format, it converts it to .xyz format. The tool works offline and acts as a translator from one format to another.
Now, to improve the infrastructure, there are discussions about moving the tool to Kubernetes or providing REST services for the file conversion. I have no experience with containers or REST APIs, as I am a front-end developer.
To say more about the tool: as mentioned, it is a single-screen application, very lightweight, doing a minimal job, used by roughly 200 users in total. Given this shape and size, which approach would be best and why? Essentially, I am looking for a short evaluation of Kubernetes vs. a REST service and an architecture recommendation with reasons.
Currently your application is a standalone desktop application, which is quite an old-fashioned model.
Here are the high-level changes needed to expose your file-conversion logic over a REST API in the Kubernetes world.
You can go through the following areas one by one to get a better understanding of the design:
Your Java code becomes the backend, and its public methods that currently take inputs from UI actions are exposed over a REST API.
There are multiple REST frameworks (Jersey, RESTEasy, etc.; Spring/Spring Boot also provides REST support); you can go through any of them to get an understanding.
Once your backend is exposed over the REST API, it needs to be containerized, meaning your backend will run inside a container. Go through the Docker documentation and build a sample containerized app; there is a huge amount of material in this area.
Once your backend is containerized, it can be deployed to a Kubernetes cluster.
Kubernetes is a container orchestration tool, and it is a broad topic; you can go through its official documentation for a basic understanding.
The UI will still run on the client machine and be launched from the desktop as it is today, but it will communicate with the Kubernetes cluster where your conversion backend now runs in a container.
References:
Docker: https://docs.docker.com/
Kubernetes: https://kubernetes.io/

Application and service(s) deployment in Azure Service Fabric

I am not yet clear on how Service Fabric handles deployment.
Since the applications are created in a single Visual Studio solution, let me frame the question in terms of file types for better understanding.
In a single Visual Studio solution, there are
a single .sln
a single .sfproj
multiple .csproj(s)
As I see these files, multiple services (.csproj files) are bound to a single Service Fabric application (.sfproj file), which is under a single solution file (.sln).
Can I individually deploy a .csproj project to the Service Fabric cluster, or are these now bound to a .sfproj so that I have to deploy multiple services (each created with .csproj and bound to .sfproj) together?
The answer to your question is yes and no at the same time. Let me explain it in detail.
"Can I individually deploy a .csproj project to the Service Fabric cluster?"
The answer is no, you can't deploy a single service: in Service Fabric terms, the minimal unit of deployment is the application (the .sfproj one). So no matter what has changed, you still deploy the application.
But as we all understand, performing a full deployment of all of an application's services is expensive: it takes a lot of time and causes a lot of disturbance in the cluster. To avoid this massive update, every Service Fabric component has its own version (take a closer look at ServiceManifest.xml and ApplicationManifest.xml). Each time an application is deployed to the cluster, Service Fabric goes through all the services included in the application and updates only the components that have changed (i.e. have a different version).
This approach lets you perform updates at very fine granularity, e.g. you can update only the <Config /> package of a single service.
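As a hedged sketch of how such a differential update can be triggered from code with the .NET FabricClient API: this assumes the new application type version (say 1.0.1) has already been copied and provisioned, and that only the changed packages have bumped version numbers in their manifests; the application name and versions are placeholders.

```csharp
using System;
using System.Fabric;
using System.Fabric.Description;
using System.Threading.Tasks;

class Upgrader
{
    static async Task Main()
    {
        var fabricClient = new FabricClient();

        // Assumes version 1.0.1 of the application type is already provisioned,
        // with only the changed service/code/config packages bumped in the manifests.
        var upgrade = new ApplicationUpgradeDescription
        {
            ApplicationName = new Uri("fabric:/MyApp"),
            TargetApplicationTypeVersion = "1.0.1",
            UpgradePolicyDescription = new MonitoredRollingApplicationUpgradePolicyDescription
            {
                UpgradeMode = RollingUpgradeMode.Monitored,
                FailureAction = UpgradeFailureAction.Rollback
            }
        };

        // Service Fabric walks the manifests during the rolling upgrade and replaces
        // only the packages whose versions differ from what is currently running.
        await fabricClient.ApplicationManager.UpgradeApplicationAsync(upgrade);
    }
}
```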

Adding a Service Fabric service to System Application

Is there a way to add a service to the Service Fabric System application, or is that a no-go scenario? What I'd like is a metadata service describing the other applications in the cluster, which can then be queried.
The system services are built into the cluster and are not extensible. You can certainly build an application that offers centralized metadata about other applications/services in the cluster but you would need to maintain that metadata yourself.
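A minimal sketch of what the query side of such a metadata application could look like, using the FabricClient query API; the structure and output format here are illustrative assumptions, and a real metadata service would cache this data and enrich it with its own annotations.

```csharp
using System;
using System.Fabric;
using System.Fabric.Query;
using System.Threading.Tasks;

class ClusterMetadata
{
    static async Task Main()
    {
        var fabricClient = new FabricClient();

        // Enumerate all applications in the cluster.
        ApplicationList apps = await fabricClient.QueryManager.GetApplicationListAsync();
        foreach (Application app in apps)
        {
            Console.WriteLine(
                $"{app.ApplicationName} ({app.ApplicationTypeName} {app.ApplicationTypeVersion}) - {app.ApplicationStatus}");

            // Enumerate the services belonging to each application.
            ServiceList services = await fabricClient.QueryManager.GetServiceListAsync(app.ApplicationName);
            foreach (Service service in services)
            {
                Console.WriteLine($"  {service.ServiceName} [{service.ServiceKind}] {service.ServiceStatus}");
            }
        }
    }
}
```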

Microservices with JBoss

I am new to JBoss and want to know whether a microservices architecture is a good choice on JBoss. I cannot change the application server, as it was decided by the client's architect and I have no choice.
I want to know whether we can develop microservices with JBoss as the underlying application server.
I understand Spring Boot comes with an embedded Tomcat container, which makes it easy to stop and start, and to deploy an individual service with no impact on other services.
However, will that architecture work with JBoss too?
Please suggest.
Thanks,
I actually carried out a feasibility study investigating the solution you mention. My conclusion is that it is entirely viable to apply microservice principles on a JBoss platform.
I used a combination of JBoss, Spring Boot, and Netflix OSS to create a successful microservice stack. I did this specifically to solve the transaction problem (multiple microservices collaborating) and the fan-out problem caused by excessive network communication and serialization costs.
I also wrote a blog post about the subject; you might find more details there if you like. Here is the link:
Micro Services – Fan Out, Transaction Problems and Solutions with Spring Boot/JBoss and Netflix Eureka
By the definition of what microservices are, conceptually yes. A microservice is an independent unit that can be deployed, updated, and undeployed independently without affecting any unrelated part of your application. That would mean having multiple JBoss instances, one per microservice, with your application calling them through some sort of gateway or another mechanism, depending on your use case. If you plan to deploy all your microservices in the same JBoss instance, it defeats the very purpose of a microservice. Given that, JBoss wouldn't be the right choice for microservice deployment, because it will only make your deployment infrastructure quite heavy.
Depending on your client's requirements, you could possibly keep your web app in JBoss and deploy your microservice containers separately.
It depends on what you want to get out of microservices.
Some of the developers at my organisation looked at Spring Boot but concluded that it is best run as a standalone container rather than inside JBoss; otherwise you effectively have two container frameworks competing (Spring Boot and JBoss), with a range of associated issues.
Deploying microservices in JBoss won't give you the same flexibility as a true container system like Docker. With Docker you create standalone packages for your microservices that contain all the code, system tools, runtime environment, etc., and each can be as small or as large as it needs to be. JBoss, on the other hand, is a large container running a single JVM, designed to hold multiple applications. The level of isolation is not the same, and it's not efficient to dedicate a JBoss instance to a single microservice, so you have to size each instance appropriately and then deploy enough services to it to make use of the resources it has available.
If you're looking at microservices as a way to gain greater control over service lifecycle management (deployment, versioning, deprecating, etc.) as opposed to an automated, web-scale component deployment model a la Netflix or LinkedIn, you could do this adequately with JBoss.
I'm actually looking to do something along these lines myself. It won't be true microservices, but by packaging and deploying individual, properly versioned APIs rather than monolithic applications, and by following most of the other principles of microservice development (componentisation, business-function focus, statelessness, etc.), we will hopefully be better able to manage and benefit from our APIs.
Our APIs will all sit behind an API gateway and load balancer, so we can choose how the microservices are distributed across the JBoss instances and balance resource usage as required. Note that our organisation is relatively small, with relatively low and predictable traffic, so this approach should work fine. Your needs, however, may be different.