I have a bunch of services deployed as guest executables to Service Fabric and all seems fine. I was wondering whether there is any point in porting the services to be native Service Fabric services.
Looking at the documentation, I can't seem to find any benefit of implementing them as such. Am I missing something obvious?
If your services are stateless, there is probably no compelling reason to migrate them to native stateless services. It could be different if your services were stateful; in this context I mean that they store some state inside the process.
In native stateful services, the state is stored redundantly, so your services can cope with node failures; this can increase the resilience of your service. In general, you create native services in greenfield situations and rely on guest executables and containers in migration/hybrid situations.
A guest executable misses out on some advanced features, but it is up to you to decide whether you need them.
Benefits of running a guest executable in Service Fabric
There are several advantages to running a guest executable in Service Fabric:
High availability. Applications that run in Service Fabric are made highly available. Service Fabric ensures that instances of an application are running.
Health monitoring. Service Fabric health monitoring detects if an application is running, and provides diagnostic information if there is a failure.
Application lifecycle management. Besides providing upgrades with no downtime, Service Fabric provides automatic rollback to the previous version if there is a bad health event reported during an upgrade.
Density. You can run multiple applications in a cluster, which eliminates the need for each application to run on its own hardware.
Discoverability. Using REST, you can call the Service Fabric Naming service to find other services in the cluster (a sketch of such a call follows).
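For illustration, here is a minimal sketch in Python (using the requests library) of resolving a service's endpoints through the Naming service REST API. It assumes an unsecured local cluster on the default management port and an illustrative service named fabric:/MyApp/MyService; secured clusters also need client-certificate authentication, and you should check the Resolve Service REST API documentation for your cluster version.

```python
# Sketch: resolve a service's endpoints via the Service Fabric Naming service REST API.
# Assumptions: unsecured local cluster, default management port 19080, and an
# illustrative service fabric:/MyApp/MyService.
import requests

CLUSTER = "http://localhost:19080"
SERVICE_ID = "MyApp~MyService"  # URL-safe form of fabric:/MyApp/MyService

resp = requests.get(
    f"{CLUSTER}/Services/{SERVICE_ID}/$/ResolvePartition",
    params={"api-version": "6.0"},
)
resp.raise_for_status()

# Each endpoint's "Address" is itself a JSON string containing the listener
# addresses the service published; parse it to get a concrete URL to call.
for endpoint in resp.json().get("Endpoints", []):
    print(endpoint.get("Kind"), endpoint.get("Address"))
```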
For example, there is something called Stateless Reliable Services: http://www.jamessturtevant.com/posts/Service-Fabric-Service-Types/
The link above explains this in more detail.
Related
I am looking at strategies for bidirectional communication between applications hosted on separate clusters. Some of them are hosted in Service Fabric and the others in Kubernetes. One of the options is to use a DNS service on Service Fabric and its counterpart on Kubernetes. On the other hand, the reverse proxy seems to be the way to go. After going through the options I was wondering: what is the best way to create microservices that can be deployed either in SF or in K8s without worrying about the communication model, so that migrating one app from SF to K8s requires the fewest changes while keeping it available to the SF apps, and vice versa?
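To make the two options concrete, here is a rough sketch in Python of keeping the addressing scheme out of the service code, so the same caller can target a service through the Service Fabric reverse proxy (default port 19081), the Service Fabric DNS service, or ordinary Kubernetes DNS. The PLATFORM environment variable, app/service names, and ports are entirely illustrative assumptions, not part of either platform.

```python
# Rough sketch: the caller only builds a base URL from configuration, so moving a
# service between Service Fabric and Kubernetes changes configuration, not code.
# The PLATFORM variable, app/service names, and ports below are illustrative.
import os
import requests

def orders_base_url() -> str:
    platform = os.environ.get("PLATFORM", "kubernetes")
    if platform == "sf-reverse-proxy":
        # Reverse proxy format: http(s)://<cluster>:19081/<ApplicationName>/<ServiceName>
        return "http://localhost:19081/MyApp/OrderService"
    if platform == "sf-dns":
        # Name published by the Service Fabric DNS service for the target service
        return "http://orderservice.myapp:8080"
    # Kubernetes cluster DNS: <service>.<namespace>.svc.cluster.local
    return "http://orderservice.default.svc.cluster.local:8080"

orders = requests.get(f"{orders_base_url()}/api/orders", timeout=5).json()
```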
In the Kubernetes world, the typical/classic pattern is to use a Deployment for stateless applications and a StatefulSet for stateful applications.
I am using a vendor product (Ping Access) which is meant to be a stateless application (it plays the role of a Proxy in front of other Ping products such as Ping Federate).
The GitHub repo for Ping Cloud (where they run these components as containers) shows them running Ping Access (a stateless application) as a StatefulSet.
I am reaching out to their support team to understand why anyone would run a Stateless application as a StatefulSet.
Are there other examples of such usage (as this appears strange/bizarre IMHO)?
I also observed a scenario where a customer runs a stateful app (Ping Federate) as a regular Deployment instead of hosting it as a StatefulSet.
The Ping Cloud repository does build and deploy Ping Federate as a StatefulSet.
Honestly, both of these usages, running a stateless app as a StatefulSet (Ping Access) and running a stateful app as a Deployment (Ping Federate), sound like classic anti-patterns.
Apart from the ability to attach dedicated volumes to StatefulSets, you get the following features, some of which might be useful for stateless applications (see the sketch after this list):
Ordered startup and shutdown of Pods, with Kubernetes handling them one by one.
The guarantee that no more than a single Pod is running at a time, even during unscheduled Pod restarts.
Stable DNS names for Pods.
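As a sketch of what that looks like without any of the volume machinery, here is a minimal StatefulSet created with the Python Kubernetes client; the namespace, labels, and image are illustrative assumptions, and the point is that without volumeClaimTemplates you keep only the stable naming, per-pod DNS, and ordered rollout behaviour.

```python
# Sketch: a StatefulSet that uses none of the volume machinery, kept only for the
# stable pod names (myapp-0, myapp-1, ...), per-pod DNS via the headless service,
# and ordered rollout. Namespace, labels, and image are illustrative.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside the cluster
apps = client.AppsV1Api()

stateful_set = client.V1StatefulSet(
    metadata=client.V1ObjectMeta(name="myapp"),
    spec=client.V1StatefulSetSpec(
        service_name="myapp-headless",  # headless Service providing per-pod DNS records
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "myapp"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "myapp"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="myapp", image="myorg/myapp:1.0")]
            ),
        ),
        # No volumeClaimTemplates: the pods stay stateless but keep ordinal identity.
    ),
)
apps.create_namespaced_stateful_set(namespace="default", body=stateful_set)
```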
I can only speculate why Ping Federate uses a StatefulSet. Possibly it has to do with access limitations of the downstream services it connects to.
The consumption of PingAccess is stateless, but the operation is very much stateful. Namely, the PingAccess admin console maintains a database for configuration, and part of that configuration includes clustered engine mapping and session keys.
Thus, if you were to take away the persistent volume, restarting the admin console would decouple all the engines in the cluster. The engines would then no longer receive configuration, and web session keys would be mismatched.
The ping-cloud-base repo uses a StatefulSet for the engines not for persistent volumes, but for the StatefulSet naming scheme. I personally disagree with this and recommend using a Deployment for the engines. The only downside is that you then have to remove orphaned engines from the admin configuration; orphaned engines meaning engine config that remains in the admin console database after the engine Deployment is rolled or updated. These can be removed from the admin UI or the API, and it is pretty easy to script in a hook.
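For illustration only, a hedged sketch of such a hook in Python: it lists the engines registered in the admin console and deletes the ones that no longer match a running pod. The admin host, credentials, required header, and /pa-admin-api/v3/engines paths are assumptions from memory of the PingAccess Admin API, so verify them against the documentation for your version.

```python
# Hedged sketch: delete engine entries from the PingAccess admin configuration when
# they no longer correspond to a live engine pod. Host, credentials, header, and API
# paths are assumptions; check the PingAccess Admin API docs for your version.
import requests

ADMIN_API = "https://pingaccess-admin:9000/pa-admin-api/v3"
AUTH = ("administrator", "change-me")        # illustrative credentials
HEADERS = {"X-XSRF-Header": "PingAccess"}    # header the admin API expects

# Names of the engine pods currently running (e.g. obtained from the Kubernetes API).
live_engines = {"pingaccess-engine-0", "pingaccess-engine-1"}

engines = requests.get(f"{ADMIN_API}/engines", auth=AUTH, headers=HEADERS, verify=False).json()
for engine in engines.get("items", []):
    if engine["name"] not in live_engines:
        requests.delete(f"{ADMIN_API}/engines/{engine['id']}",
                        auth=AUTH, headers=HEADERS, verify=False)
```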
It would be ideal for an application that is not a datastore to run without a persistent volume, but for the reasons mentioned above, the PingAccess admin console does require, and acts like, a persistent datastore, so I think a StatefulSet is okay there.
Finally, the Ping DevOps team focuses its support on their Helm chart (where the engines are also Deployments by default). I'd suspect the community and enterprise support is much larger there for folks deploying on their own. ping-cloud-base is a good place to pick up some hooks, though.
I'm investigating converting my Service Fabric application to Kubernetes. One area I'm struggling with is what the equivalent of Service Fabric actors is in Kubernetes. I mainly use reminder actors in the application; is there an equivalent in Kubernetes?
To use the actor programming model on Kubernetes, you can combine an actor framework (Orleans or Akka.NET) with Kubernetes StatefulSets.
See this talk from NDC Oslo 2019: Real-Time, Distributed Applications with Akka.NET, Kubernetes and .NET Core.
Have a look at the Dapr runtime. It is heavily influenced by Microsoft's learnings from Service Fabric Reliable Services/Actors and runs on top of Kubernetes (and other platforms).
Take into account that it is still in development and not yet released for production scenarios.
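To give an idea of what a reminder looks like there, here is a rough sketch with the Dapr Python SDK. Since Dapr was still pre-release when this answer was written, treat the exact imports and method signatures as assumptions to check against the current SDK; actor registration with a host (for example via the dapr-ext-fastapi extension) is omitted.

```python
# Rough sketch of a reminder actor with the Dapr Python SDK ("dapr" package).
# Imports and signatures are assumptions to verify against the current SDK docs;
# hosting/registration of the actor type is omitted here.
from datetime import timedelta
from typing import Optional

from dapr.actor import Actor, ActorInterface, Remindable, actormethod


class ReportActorInterface(ActorInterface):
    @actormethod(name="StartDailyReport")
    async def start_daily_report(self) -> None:
        ...


class ReportActor(Actor, ReportActorInterface, Remindable):
    async def start_daily_report(self) -> None:
        # Roughly the counterpart of RegisterReminderAsync on a Service Fabric
        # reminder actor: the reminder fires even if the actor was deactivated.
        await self.register_reminder(
            "daily-report",         # reminder name
            b"",                    # state passed back to receive_reminder
            timedelta(seconds=10),  # due time
            timedelta(hours=24),    # period
        )

    async def receive_reminder(self, name: str, state: bytes,
                               due_time: timedelta, period: timedelta,
                               ttl: Optional[timedelta] = None) -> None:
        # The work that lived in IRemindable.ReceiveReminderAsync goes here.
        print(f"reminder {name} fired")
```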
I have worked with Kubernetes and am currently reading about Service Fabric. I know Service Fabric provides microservice programming models such as stateful, stateless, and actor services, but it also supports guest executables and containers, which is what Kubernetes does as well: manage/orchestrate containers. Can anyone explain the differences between the two in detail?
You can see in the project paolosalvatori/service-fabric-acs-kubernetes-multi-container-app the same containers implemented both in Service Fabric and in Kubernetes.
Their "service" (for external ingress access) is different, with Kubernetes being a bit more complete and diverse: see Services.
The reality is: there are "two slightly different offerings" because of market pressure.
The Microsoft Azure platform, initially released in 2010, implemented its own Microsoft Azure Fabric Controller in order to ensure that services and the environment do not fail if one or more of the servers within the Microsoft data center fails, and to provide management of the user's web application, such as memory allocation and load balancing.
But in order to attract other clients to its own data centers, Microsoft had to adapt to Kubernetes, initially released in 2014, which is now (2018) either adopted or closely considered by... pretty much everybody (as reported in late December).
(That does not mean one is "better" than the other,
only that the "other" is more "visible" than the first ;) )
So it is less about "a detailed difference between the two", and more about the ability to integrate Kubernetes-based system on Microsoft Data Centers.
This is in line (source: detailed here) with Microsoft's continued, unprecedented shift toward an open (read: non-proprietary) staging platform for Azure (with Deis).
And the Kubernetes orchestrator has been available on Microsoft's Azure Container Service since February 2017.
You can see other differences in the architecture of a deployed application:
Service Fabric: (architecture diagram)
Vs. Kubernetes: (architecture diagram)
thieme mentions in the comments the article "Service Fabric and Kubernetes comparison, part 1 – Distributed Systems Architecture", from Marcin Kosieradzki.
Both are different. Kubernetes manages rkt or other containers.
Service Fabric is not primarily for managing containers. The fact that it can manage some does not make that its purpose, and it does not make it directly comparable to Kubernetes.
For example, when a pod dies, Kubernetes immediately reschedules it onto other nodes. The part of Service Fabric that manages containers does not do this; it is handled by another area of Service Fabric, one that also operates outside containers and was not designed with containers in mind.
I found this from several months back on Application Insights and Service Fabric and I'm wondering if there is any new information.
I would really like to get CPU, memory, storage, and other metrics out of Service Fabric and the reliable actors. Having them presented in a user-friendly HUD like App Insights provides would be awesome!
Thanks!
On the Azure portal, you can now create a resource called 'Service Fabric Analytics' to get a nice dashboard for your cluster. Configure it for your cluster as described here. It's OMS-based, not Application Insights, though.
The Service Fabric solution helps identify and troubleshoot issues across your Service Fabric cluster by providing visibility into how your Service Fabric virtual machines are performing and how your applications and microservices are running. Available features include:
Get insight into your Service Fabric framework.
Get insight into the performance of your Service Fabric applications and microservices.
View events from your applications and microservices.
Data collected: Service Fabric Reliable Service events, Service Fabric Actor events, Service Fabric operational events, Event Tracing for Windows (ETW) events, and Windows event logs.
Requirements: this solution will only work if you have set up Azure Diagnostics on your Service Fabric VMs and have configured OMS to collect data from your WAD tables.
In Service Fabric with EventFlow (https://github.com/Azure/diagnostics-eventflow) you have the option to send the diagnostics data to multiple data stores such as WAD tables, OMS, Elasticsearch, and Application Insights.
Have a look at it. It is straightforward, integrates easily with ETW events, and will serve your purpose.