I have a greenfield application that I would like to host on Service Fabric, but high availability will be a key concern. Is there a way for a cluster to span across Azure regions? Does Service Fabric recognize Availability Zones?
Is there a way for a cluster to span across Azure regions?
Yes, it's in the FAQ.
The core Service Fabric clustering technology can be used to combine
machines running anywhere in the world, so long as they have network
connectivity to each other. However, building and running such a
cluster can be complicated.
If you are interested in this scenario, we encourage you to get in
contact either through the Service Fabric GitHub Issues List or
through your support representative in order to obtain additional
guidance. The Service Fabric team is working to provide additional
clarity, guidance, and recommendations for this scenario.
Does Service Fabric recognize Availability Zones?
Yes, but support is limited.
Make sure to check out this example (marked as 'not production ready'), and have a look at SF Mesh as well, which allows you to deploy to multiple AZs (though it is currently in preview).
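For what it's worth, once you have a cluster deployed across zones, you can check how the nodes landed in fault domains (which map to zones in AZ-enabled clusters) through the Service Fabric HTTP gateway. A minimal sketch, assuming an unsecured test cluster at a hypothetical address; a production cluster should be secured and queried over HTTPS:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Minimal subset of the response from GET /Nodes (Service Fabric REST API).
type nodeList struct {
	Items []struct {
		Name        string `json:"Name"`
		FaultDomain string `json:"FaultDomain"` // maps to the zone in AZ-enabled clusters
		HealthState string `json:"HealthState"`
	} `json:"Items"`
}

func main() {
	// Hypothetical endpoint; 19080 is the default HTTP gateway port.
	resp, err := http.Get("http://sf-cluster.example.com:19080/Nodes?api-version=6.0")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var nodes nodeList
	if err := json.NewDecoder(resp.Body).Decode(&nodes); err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s  fd=%s  health=%s\n", n.Name, n.FaultDomain, n.HealthState)
	}
}
```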
Related
I am looking at strategies for enabling bidirectional communication between applications hosted on separate clusters. Some of them are hosted in Service Fabric and the others in Kubernetes. One of the options is to use a DNS service on the Service Fabric side and its counterpart on Kubernetes. On the other hand, the Reverse Proxy seems to be the way to go. After going through the options I was thinking: what is actually the best way to create microservices that can be deployed either in SF or in K8s without worrying about the communication model, requiring the fewest changes if we suddenly want to migrate one app from SF to K8s while still keeping it available to the SF apps, and vice versa?
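To illustrate the direction I'm leaning: hide the addressing scheme behind configuration, so the same client code works against the Service Fabric reverse proxy (default port 19081) or a Kubernetes DNS name. A minimal sketch; the service names and URLs are made up:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// resolveBase returns the base URL of a dependency from configuration,
// so the calling code does not care which platform hosts the service.
func resolveBase() string {
	if v := os.Getenv("ORDERS_BASE_URL"); v != "" {
		return v
	}
	return "http://localhost:8080" // fallback for local testing (hypothetical)
}

func main() {
	// On Service Fabric (via the reverse proxy):
	//   ORDERS_BASE_URL=http://localhost:19081/MyApp/OrdersService
	// On Kubernetes (via cluster DNS):
	//   ORDERS_BASE_URL=http://orders.default.svc.cluster.local
	resp, err := http.Get(resolveBase() + "/api/orders")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```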
We have a 5-node Azure Service Fabric Cluster as our main Production microservices hub. Up until now, for testing purposes, we've just been pushing out separate versions of our applications (the production application with ".Test" appended to the name) to that production SFC.
We're looking for a better approach, namely a separate test Service Fabric cluster. But the issue comes down to costs. The smallest SFC you can create in Azure is 3 nodes. Further, you can't shut down an SFC when it's not being used, which we would also need to do to save on costs.
So now I'm looking at just spinning up a plain Windows VM in Azure and installing the local Service Fabric Cluster app (which allows a one-node setup). Is it possible to do this and still be able to communicate with the cluster from outside the VM?
What you are trying to accomplish is setting up a standalone cluster; the steps are documented in these docs.
Yes, you can access the cluster from outside the VM. In simple terms: enable network access to the VM and open the firewall ports.
Technically both deployments (Guide and DevCluster) are very similar. The main difference is that you have better control over the templates when following the standalone guide; with the development setup you don't have many options and the whole process is automated.
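As a quick sanity check from outside the VM, you can probe the default Service Fabric endpoints once the network/firewall rules are in place: 19000 (client connection) and 19080 (HTTP gateway). A rough sketch; the host name is hypothetical:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	host := "myvm.example.com" // hypothetical VM address
	// Default Service Fabric ports: 19000 = client connection, 19080 = HTTP gateway.
	for _, port := range []string{"19000", "19080"} {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, port), 3*time.Second)
		if err != nil {
			fmt.Printf("port %s: unreachable (%v)\n", port, err)
			continue
		}
		conn.Close()
		fmt.Printf("port %s: open\n", port)
	}
}
```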
PS: I would highly recommend having a UAT/staging cluster with the exact same specs as the production version; the approach you described could be a good fit for a staging environment. Having environments with different specs increases the risk of issues, mainly related to configuration and concurrency.
I am looking at ways to shift from our monolithic system to a more flexible, microservice-based one, and from the standpoint of managing (containerized) applications, Kubernetes comes out as the frontrunner.
In our ecosystem, there are some hardware devices that have to remain part of the system as they are. My understanding of Kubernetes (in the limited time available) does not give me a clear-cut answer on whether managing this HW is possible with Kubernetes or not. I explored CRDs, add-ons, etc., but those approaches did not look promising for my use case of managing HW nodes.
My use case for managing HW nodes include:
1. Discovery of HW devices by K8s
2. Possibly managing them over REST API through K8s.
* High availability of HW devices is not in scope; however, any thoughts are welcome.
Kubernetes was developed as a deployment automation, orchestration, and management tool for containerized applications. However, its role does not involve discovering and managing hardware nodes as such; Kubernetes implements its own internal structure by distributing services across the nodes in the cluster.
You can, however, add support for compute, storage, and other devices that require vendor-specific setup in a Kubernetes cluster through the device plugin framework.
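To give an idea of the shape of it: a device plugin is a small gRPC server that registers with the kubelet and advertises your hardware as a named resource. A rough skeleton against the `k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1` API; the resource name, device IDs, and REST endpoint are all made up:

```go
package main

import (
	"context"
	"net"
	"os"
	"path/filepath"

	"google.golang.org/grpc"
	pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1"
)

// hwPlugin advertises a made-up "example.com/hw-device" resource to the kubelet.
type hwPlugin struct{}

func (p *hwPlugin) GetDevicePluginOptions(context.Context, *pluginapi.Empty) (*pluginapi.DevicePluginOptions, error) {
	return &pluginapi.DevicePluginOptions{}, nil
}

// ListAndWatch streams the discovered devices (and later health changes) to the kubelet.
func (p *hwPlugin) ListAndWatch(_ *pluginapi.Empty, s pluginapi.DevicePlugin_ListAndWatchServer) error {
	// Real discovery would scan the host or probe the devices' REST endpoints.
	devs := []*pluginapi.Device{
		{ID: "hw-0", Health: pluginapi.Healthy},
		{ID: "hw-1", Health: pluginapi.Healthy},
	}
	if err := s.Send(&pluginapi.ListAndWatchResponse{Devices: devs}); err != nil {
		return err
	}
	select {} // block forever; a real plugin re-sends on health changes
}

// Allocate tells the kubelet how to expose an assigned device to a container.
func (p *hwPlugin) Allocate(_ context.Context, req *pluginapi.AllocateRequest) (*pluginapi.AllocateResponse, error) {
	resp := &pluginapi.AllocateResponse{}
	for range req.ContainerRequests {
		resp.ContainerResponses = append(resp.ContainerResponses, &pluginapi.ContainerAllocateResponse{
			// Hypothetical: hand the container the device's REST endpoint via an env var.
			Envs: map[string]string{"HW_DEVICE_API": "http://10.0.0.5/api"},
		})
	}
	return resp, nil
}

func (p *hwPlugin) GetPreferredAllocation(context.Context, *pluginapi.PreferredAllocationRequest) (*pluginapi.PreferredAllocationResponse, error) {
	return &pluginapi.PreferredAllocationResponse{}, nil
}

func (p *hwPlugin) PreStartContainer(context.Context, *pluginapi.PreStartContainerRequest) (*pluginapi.PreStartContainerResponse, error) {
	return &pluginapi.PreStartContainerResponse{}, nil
}

func main() {
	// Serve the plugin gRPC API on a unix socket under the kubelet's plugin directory.
	socket := filepath.Join(pluginapi.DevicePluginPath, "hw.sock")
	os.Remove(socket)
	lis, err := net.Listen("unix", socket)
	if err != nil {
		panic(err)
	}
	srv := grpc.NewServer()
	pluginapi.RegisterDevicePluginServer(srv, &hwPlugin{})
	go srv.Serve(lis)

	// Register with the kubelet so pods can request "example.com/hw-device".
	conn, err := grpc.Dial("unix://"+pluginapi.KubeletSocket, grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	if _, err := pluginapi.NewRegistrationClient(conn).Register(context.Background(), &pluginapi.RegisterRequest{
		Version:      pluginapi.Version,
		Endpoint:     "hw.sock", // relative to DevicePluginPath
		ResourceName: "example.com/hw-device",
	}); err != nil {
		panic(err)
	}
	select {}
}
```

A pod would then request the hardware via resources.limits (e.g. `example.com/hw-device: 1`), and the kubelet calls `Allocate` before starting the container. The REST management itself would stay in your own service; Kubernetes only does the discovery and bookkeeping.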
I have worked with Kubernetes and am currently reading about Service Fabric. I know Service Fabric provides microservice framework models such as stateful, stateless, and actor services, but beyond that it also supports guest executables and containers, which is what Kubernetes does as well: managing/orchestrating containers. Can anyone explain the differences between the two in detail?
You can see in the project paolosalvatori/service-fabric-acs-kubernetes-multi-container-app the same containers implemented both in Service Fabric and in Kubernetes.
Their "service" (for external ingress access) is different, with Kubernetes being a bit more complete and diverse: see Services.
The reality is: there are "two slightly different offerings" because of market pressure.
The Microsoft Azure platform, initially released in 2010, implemented its own Microsoft Azure Fabric Controller in order to ensure that services and the environment do not fail if one or more of the servers within the Microsoft data center fails, and to provide management of the user's web applications, such as memory allocation and load balancing.
But in order to attract other clients to their own Microsoft data centers, they had to adapt to Kubernetes, initially released in 2014, which is now (2018) either adopted or closely considered by... pretty much everybody (as reported in late December).
(That does not mean one is "better" than the other,
only that the "other" is more "visible" than the first ;) )
So it is less about "a detailed difference between the two", and more about the ability to integrate Kubernetes-based systems into Microsoft data centers.
This is in line (source: detailed here) with Microsoft's continued, unprecedented shift toward an open (read: non-proprietary) staging platform for Azure (with Deis).
And the Kubernetes orchestrator has been available on Microsoft's Azure Container Service since February 2017.
You can see other differences in their architecture of a deployed application:
Service Fabric: (architecture diagram of an application deployed on a Service Fabric cluster)
Vs. Kubernetes: (architecture diagram of an application deployed on a Kubernetes cluster)
thieme mentions in the comments the article "Service Fabric and Kubernetes comparison, part 1 – Distributed Systems Architecture", from Marcin Kosieradzki.
The two are different. Kubernetes manages rkt or other container runtimes.
Service Fabric is not primarily for managing containers; the fact that it can manage some does not make that its purpose, and it does not put it in a one-to-one comparison with Kubernetes.
E.g., when a pod dies, Kubernetes reschedules it onto other nodes immediately. The part of SF that manages containers does not do this; that is handled by another area of Service Fabric, one that also operates outside containers and was not designed with containers in mind.
I have a series of self-hostable WebApi services that I need to make available both on-premises and on the internet. Currently they are only on-premises, but I was wondering: will Service Fabric allow me to have an on-premises cluster and an Azure-hosted cluster connected to handle this hybrid scenario? Can I have a Service Fabric cluster with nodes both on-premises and in Azure?
I have it on my backlog to explore leveraging Service Fabric, but if this scenario were possible we would bump up that priority.
Any details on implementing this or even an alternative solution would be greatly appreciated. We tried using Azure App Proxy as well for the internet exposure, but are having problems with the authentication headers going across as we are not using Azure AD.
It's possible to create a cluster that spans multiple locations, as mentioned in this article.
However, you should realize that it's not a supported feature. If you make a mistake, losing one of the two locations will result in data loss.
I'd recommend using one cluster.