Azure vs. on-premises Service Fabric

I'm having a bit of trouble finding the differences between the Azure and on-premises versions of Service Fabric. I read somewhere that the on-premises version does not support auto-scaling, which is easy to understand.
However, does the on-premises version offer any operational capabilities, such as resource managers, visual cluster management, etc.?

The core Service Fabric platform is simply a runtime that gets installed on a set of virtual or physical machines. Once you tell those machines how to find each other, they form a cluster and provide a set of management capabilities that includes the Service Fabric Explorer UI, a REST API, and a TCP endpoint for PowerShell. All of that is common whether you're running on Azure, on-premises, or in another public cloud.
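For example, a quick health check against the REST gateway (default port 19080) works the same in every environment. A minimal sketch in PowerShell, assuming an unsecured test cluster and an illustrative hostname:

    # Query cluster health through the HTTP gateway (Service Fabric REST API).
    Invoke-RestMethod -Uri 'http://sf-node1:19080/$/GetClusterHealth?api-version=6.0'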
What's different in those environments is everything that lives outside of the machines that form the cluster. That includes:
Autoscaling
While Service Fabric can easily handle new machines being added and removed from the cluster, it has no knowledge of how that process actually works, so some external agent needs to handle it. In Azure, that's a virtual machine scale set.
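As a hedged sketch of what that looks like in Azure (using the Az.Compute PowerShell module; resource names are illustrative), adding nodes means growing the scale set that backs a node type:

    # Fetch the scale set behind the 'nt1vm' node type and raise its instance count.
    $vmss = Get-AzVmss -ResourceGroupName 'sf-rg' -VMScaleSetName 'nt1vm'
    $vmss.Sku.Capacity = 7
    Update-AzVmss -ResourceGroupName 'sf-rg' -VMScaleSetName 'nt1vm' -VirtualMachineScaleSet $vmss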
Failure domain/Upgrade domain management
Good management of failure and upgrade domains is critical to ensuring availability and data reliability in Service Fabric. In Azure, clusters are automatically spread across FDs/UDs and maintenance is coordinated to avoid impact to your clusters. In other environments, this is your responsibility.
Cluster setup and management
In Azure, a Service Fabric cluster is a first-class resource that can be created and managed through the Azure Resource Manager and the Azure portal. Outside of Azure, you must do that management using the cluster configuration JSON template.
Incidentally, just so there's no confusion since there are overloaded terms... you can't currently use the Azure Resource Manager (ARM) with Service Fabric outside of the Azure environment. However, Service Fabric's cluster resource manager is part of the core runtime and is available everywhere.
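For illustration, standalone (non-Azure) cluster creation is driven by scripts shipped in the standalone package alongside that JSON template. A sketch, assuming one of the sample config files included in the package:

    # Validate the configuration, then create the cluster. The JSON declares each
    # node's fault domain (e.g. "fd:/dc1/r0") and upgrade domain (e.g. "UD0"),
    # which is how you take on the FD/UD management described above.
    .\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.MultiMachine.json
    .\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.MultiMachine.json -AcceptEULA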
Diagnostics pipeline
By default, Service Fabric logging (on Windows) is done via ETW. However, without any component to pick up those events from the individual machines in the cluster and ship them somewhere for easy aggregation and inspection, the logs aren't very useful. In Azure, that process is handled by the Windows Azure Diagnostics (WAD) agent, whereas in other environments you are responsible for setting up that pipeline.
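As a small illustration only (the event channel name is what Windows builds of Service Fabric surface in Event Viewer; verify it on your version), you can spot-check a single node's operational events, though this is no substitute for a real aggregation pipeline:

    # Read the 20 most recent Service Fabric operational events on this node.
    Get-WinEvent -LogName 'Microsoft-ServiceFabric/Operational' -MaxEvents 20 |
        Format-Table TimeCreated, Id, LevelDisplayName -AutoSize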

You don't get to use the Azure Resource Manager on-premises. You can access Service Fabric Explorer on port 19080.
https://azure.microsoft.com/en-us/documentation/articles/service-fabric-deploy-anywhere/
https://azure.microsoft.com/en-us/documentation/articles/service-fabric-visualizing-your-cluster/
PowerShell management and deployment will also work.
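A minimal connection sketch, assuming an unsecured test cluster and an illustrative hostname (19000 is the default client connection port):

    # Connect over the TCP client endpoint, then query cluster health.
    Connect-ServiceFabricCluster -ConnectionEndpoint 'sf-node1.contoso.local:19000'
    Get-ServiceFabricClusterHealth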

Related

AKS Hybrid setup

I have 1 master node and 2 worker nodes on on-premises servers (i.e. bare metal) running Kubernetes.
In a few months we might need more nodes, and we will be using Azure to provision them going forward.
Can AKS work in combination with the on-prem machines, such that the active master is on-prem, the second master is in Azure, and the additional worker nodes can be scaled up/down in Azure?
Is it possible to achieve the scenario below, where on-prem and Azure both serve the same Kubernetes cluster? If yes, is there a third-party tool available to set this up and make life easy?
On-Premises
1 master & 2 worker nodes
+
AKS
1 master & 5 worker nodes (scale up/down)
As far as I know, today you can use the AKS engine to set up nodes on-prem only if you're using Azure Stack Hub, which is an extension of Azure that can run workloads in an on-premises environment by providing Azure services in your datacenter.
Azure Arc can bring two clusters together, but they won't operate as if they were a single cluster.
I found two options for you to consider:
Running Kubernetes in a hybrid environment:
Setting up Kubernetes to work in a hybrid cloud environment is absolutely possible today, and many companies choose this path as a progressive migration to Azure. You can benefit from the flexibility and scalability of Azure, maintain existing systems running on your local network, and get them to talk to each other seamlessly. This, however, still requires a non-negligible investment in infrastructure setup and maintenance.
Azure Arc hybrid management and deployment for Kubernetes clusters:
You can use Azure Arc to register Kubernetes clusters hosted outside of Microsoft Azure, and use Azure tools to manage these clusters alongside clusters hosted in Azure Kubernetes Service (AKS).
The latter option would require you to use Azure Arc (a registration sketch follows below). I haven't used these myself, but they seem to fit your use case.
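If you go the Arc route, registering an existing on-prem cluster can be done from PowerShell with the Az.ConnectedKubernetes module. A minimal sketch, assuming illustrative resource names and that your current kubeconfig context points at the on-prem cluster:

    # Register the cluster targeted by the current kubeconfig context with Azure Arc.
    New-AzConnectedKubernetes -ClusterName 'onprem-k8s' `
        -ResourceGroupName 'hybrid-rg' -Location 'westeurope'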

Service Fabric Single Node SingleNodeClusterUpdateNotAllowed

I've got a single-node Service Fabric instance hosted in Azure, just for testing purposes. When I try to upgrade the Service Fabric version from 6.5 to 7.0, I get the message:
SingleNodeClusterUpdateNotAllowed
Is there anything I can do to allow this?
The short answer is no.
The reason for this is that in order to upgrade, Service Fabric has to take down a node, update it, and restart it. This is repeated for all nodes until the update is complete. In a single-node cluster this would mean taking the cluster offline completely, which the Service Fabric rules do not allow (at least one node must remain available).
A single-node 'cluster' therefore cannot update the platform or the applications running on it.
The only way to update a single-node cluster is to delete and reinstall it. The same goes for applications (delete the application type before deploying an updated version). Depending on where you have the software deployed (development box, a server, Azure), I would recommend scripting as much as possible. This will allow you to easily delete and redeploy. I am using a combination of an Azure (ARM) template, a DevOps pipeline, and a script to initialise and load some default data into the application.
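As a rough sketch of what such a script can automate (application name, type, versions, and paths are all illustrative), a delete-and-redeploy cycle on a one-node test cluster looks like this:

    # Tear down the running application and its registered type...
    Connect-ServiceFabricCluster -ConnectionEndpoint 'localhost:19000'
    Remove-ServiceFabricApplication -ApplicationName 'fabric:/MyApp' -Force
    Unregister-ServiceFabricApplicationType -ApplicationTypeName 'MyAppType' `
        -ApplicationTypeVersion '1.0.0' -Force
    # ...then upload, register, and create the new build.
    Copy-ServiceFabricApplicationPackage -ApplicationPackagePath '.\MyAppPkg' `
        -ImageStoreConnectionString 'fabric:ImageStore' -ApplicationPackagePathInImageStore 'MyAppType'
    Register-ServiceFabricApplicationType -ApplicationPathInImageStore 'MyAppType'
    New-ServiceFabricApplication -ApplicationName 'fabric:/MyApp' `
        -ApplicationTypeName 'MyAppType' -ApplicationTypeVersion '1.1.0'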

Azure Service Fabric - connect to local service fabric cluster from outside the VM it's running on?

We have a 5-node Azure Service Fabric Cluster as our main Production microservices hub. Up until now, for testing purposes, we've just been pushing out separate versions of our applications (the production application with ".Test" appended to the name) to that production SFC.
We're looking for a better approach, namely a separate test Service Fabric cluster. But the issue comes down to cost: the smallest SFC you can create in Azure is 3 nodes, and you can't shut down an SFC when it's not being used, which we would also need to do to save on costs.
So now I'm looking at just spinning up a plain Windows VM in Azure and installing the local Service Fabric cluster app (which allows a one-node setup). Is it possible to do this and still communicate with the cluster from outside the VM?
What you are trying to accomplish is to set up a standalone cluster. The steps are documented in these docs.
Yes, you can access the cluster from outside the VM. In simple terms, enable network access to the machine and open the firewall ports.
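For example (port numbers are the Service Fabric defaults; if the VM runs in Azure you must also allow them in its network security group):

    # Allow inbound traffic to the client endpoint (19000) and Explorer (19080).
    New-NetFirewallRule -DisplayName 'SF client endpoint' -Direction Inbound `
        -Protocol TCP -LocalPort 19000 -Action Allow
    New-NetFirewallRule -DisplayName 'SF Explorer' -Direction Inbound `
        -Protocol TCP -LocalPort 19080 -Action Allow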
Technically, both deployments (the standalone guide and the DevCluster) are very similar. The main difference is that the standalone guide gives you better control over the templates; with the development setup you don't have many options, and the whole process is automated.
PS: I would highly recommend having a UAT/staging cluster with the exact same specs as the production version; the approach you describe could be a good idea for a staging environment. Environments that differ from each other increase the risk of issues, mainly related to configuration and concurrency.

Difference between Kubernetes and Service Fabric

I have worked with Kubernetes and am currently reading about Service Fabric. I know Service Fabric provides microservice programming models (stateful, stateless, and actor), but it also supports guest executables and containers, which is what Kubernetes does as well: manage/orchestrate containers. Can anyone explain the differences between the two in detail?
In the project paolosalvatori/service-fabric-acs-kubernetes-multi-container-app you can see the same containers implemented both in Service Fabric and in Kubernetes.
Their "service" (for external ingress access) is different, with Kubernetes being a bit more complete and diverse: see Services.
The reality is: there are "two slightly different offerings" because of market pressure.
The Microsoft Azure platform, initially released in 2010, implemented its own Microsoft Azure Fabric Controller in order to ensure that services and the environment do not fail if one or more servers within the Microsoft data center fail, and to provide management of the user's web applications, such as memory allocation and load balancing.
But in order to attract other clients to its data centers, Microsoft had to adapt to Kubernetes, initially released in 2014, which is now (2018) either adopted or closely considered by... pretty much everybody (as reported in late December).
(That does not mean one is "better" than the other, only that the "other" is more "visible" than the first ;) )
So it is less about "a detailed difference between the two", and more about the ability to integrate Kubernetes-based system on Microsoft Data Centers.
This is in line (source: detailed here) with Microsoft's continued, unprecedented shift toward an open (read: non-proprietary) staging platform for Azure (with Deis).
And the Kubernetes orchestrator has been available on Microsoft's Azure Container Service since February 2017.
You can see other differences in the architecture of a deployed application (the original answer shows architecture diagrams for Service Fabric vs. Kubernetes).
thieme mentions in the comments the article "Service Fabric and Kubernetes comparison, part 1 – Distributed Systems Architecture" by Marcin Kosieradzki.
The two are different. Kubernetes manages rkt or other containers.
Service Fabric is not primarily a container manager. The fact that it can host containers does not make that its purpose, and it is not enough to put it head-to-head with Kubernetes.
For example, when a pod dies, Kubernetes immediately reschedules it onto other nodes. The part of Service Fabric that hosts containers does not do this; that work is done by another area of Service Fabric, one that also operates outside containers and was not designed with containers in mind.

Team Services deploy to on-premise Service Fabric without exposed endpoint

We have a Service Fabric cluster on-premise and would like to deploy code to it from Visual Studio Team Services. We use this cluster for testing and it does not have an endpoint exposed to the outside world. It is only accessible internally from inside our network.
From inside Team Services the normal way to deploy a Service Fabric application is with the "Service Fabric Application Deployment" task. This task requires a "Cluster Connection" parameter, or link to the Service Fabric Service endpoint that Team Services can access. On this cluster I can't provide an endpoint to the outside world, so this method won't work.
Is there a good, accepted way of accomplishing this? I'm considering having an agent on one of the Service Fabric nodes that can run a PowerShell script as part of the build process. If I could retrieve the artifacts from Team Services with that script, I believe the rest of the release would be relatively straightforward.
Is this a good line of thought, or is there a more straightforward way to deploy to Service Fabric from Team Services without exposing an endpoint?
We have the same setup and use VSTS. We set up an on-prem agent pool where the agent is inside our network. The agent is hooked up to VSTS, so builds and releases can be triggered from VSTS. The agent has access to the artifacts on VSTS and can download them for deployment. The difference might be that we set up a Service Fabric endpoint instead of using PowerShell.
It's a very simple setup and works well for us. Good luck!
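For completeness, if you instead go the PowerShell route the question describes, the release step running on the internal agent might look roughly like this (endpoint, names, and paths are illustrative):

    # Runs on the on-prem agent, which can reach the cluster's internal endpoint.
    Connect-ServiceFabricCluster -ConnectionEndpoint 'sf-test.internal:19000'
    Copy-ServiceFabricApplicationPackage -ApplicationPackagePath '.\drop\MyAppPkg' `
        -ImageStoreConnectionString 'fabric:ImageStore' -ApplicationPackagePathInImageStore 'MyAppType'
    Register-ServiceFabricApplicationType -ApplicationPathInImageStore 'MyAppType'
    Start-ServiceFabricApplicationUpgrade -ApplicationName 'fabric:/MyApp' `
        -ApplicationTypeVersion '1.1.0' -Monitored -FailureAction Rollback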