Programmatically stop Azure Service Fabric service - azure-service-fabric

We have several services running in an ASF cluster. Is it possible to pause/stop/restart a service in an ASF cluster across all nodes programmatically?

At the moment you cannot stop a 'service' in Service Fabric; you can only remove it. You can, however, start/stop and enable/disable nodes within the cluster.
There is an enhancement request on the Azure feedback forums to add a service start/stop feature. Vote for it and wait for it to be implemented.
https://feedback.azure.com/forums/293901-service-fabric/suggestions/13714473-ability-to-stop-disable-services-without-removing
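Since there is no stop operation, the closest you can get programmatically today is removing the service or deactivating nodes. Below is a minimal sketch against the Service Fabric cluster REST API using Python's requests; the cluster address, service and node names are placeholders, the endpoint paths and api-version should be verified against the SF REST API reference, and a secured cluster would additionally need certificate or token authentication.

```python
# Sketch: removing a service and pausing/re-enabling a node via the
# Service Fabric cluster REST API. Assumes an unsecured dev cluster;
# names and addresses are placeholders.
import requests

CLUSTER = "http://localhost:19080"   # HTTP gateway of the cluster (placeholder)
API_VERSION = {"api-version": "6.0"}

def delete_service(service_id: str) -> None:
    """Remove a service, e.g. 'MyApp~MyService' for fabric:/MyApp/MyService."""
    r = requests.post(f"{CLUSTER}/Services/{service_id}/$/Delete", params=API_VERSION)
    r.raise_for_status()

def pause_node(node_name: str) -> None:
    """Deactivate (pause) a node so no new replicas are placed on it."""
    r = requests.post(
        f"{CLUSTER}/Nodes/{node_name}/$/Deactivate",
        params=API_VERSION,
        json={"DeactivationIntent": "Pause"},
    )
    r.raise_for_status()

def activate_node(node_name: str) -> None:
    """Re-enable a previously deactivated node."""
    r = requests.post(f"{CLUSTER}/Nodes/{node_name}/$/Activate", params=API_VERSION)
    r.raise_for_status()

if __name__ == "__main__":
    delete_service("MyApp~MyService")   # removal is the only way to 'stop' a service today
    pause_node("_Node_0")
```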

Related

How to create a multi-master cluster in Azure

I need to create an Azure Kubernetes Service cluster with 3 master nodes. So far I have only worked with single-master clusters; now I need a multi-master cluster for production environments.
Is there a way to create an AKS cluster with multiple control planes? Thanks in advance.
As Soundarya mentioned in the comment, the solution can be found here:
Since your question is about AKS (a managed service from Azure), with HA-enabled clusters you already have more than one master running. Because AKS is a managed offering, you will not have visibility into or control over this.
Can I get a way to create an AKS with multiple control planes?
For this you can check the AKS Uptime SLA; the Uptime SLA guarantees 99.95% availability of the Kubernetes API server endpoint for clusters.
Please check this document for more details.
If you are using AKS Engine (the unmanaged option), then you can specify the number of masters. Please refer to this document for more details.
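If you go the AKS Engine route, the master count is set in the cluster definition (API model) that aks-engine consumes. Below is a minimal sketch that writes such a definition from Python; the field names follow the aks-engine "kubernetes.json" examples, but the VM sizes, DNS prefix, SSH key and service principal values are placeholders, and you should check the aks-engine docs for the full required schema.

```python
# Sketch: write an aks-engine cluster definition with three masters.
# All concrete values below are placeholders.
import json

api_model = {
    "apiVersion": "vlabs",
    "properties": {
        "orchestratorProfile": {"orchestratorType": "Kubernetes"},
        "masterProfile": {
            "count": 3,                      # aks-engine allows 1, 3 or 5 masters
            "dnsPrefix": "my-multimaster",   # placeholder
            "vmSize": "Standard_D2s_v3",
        },
        "agentPoolProfiles": [
            {"name": "agentpool1", "count": 3, "vmSize": "Standard_D2s_v3"}
        ],
        "linuxProfile": {
            "adminUsername": "azureuser",
            "ssh": {"publicKeys": [{"keyData": "ssh-rsa AAAA... (placeholder)"}]},
        },
        "servicePrincipalProfile": {"clientId": "<appId>", "secret": "<password>"},
    },
}

with open("kubernetes.json", "w") as f:
    json.dump(api_model, f, indent=2)

# Then run: aks-engine generate kubernetes.json
# and deploy the generated ARM templates with the Azure CLI.
```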

Flink native kubernetes deployment

I have some limitations with the rights required by Flink native deployment.
The prerequisites say
KubeConfig, which has access to list, create, delete pods and **services**, configurable
Specifically, my issue is that I cannot have a service account with the rights to create/remove services. Creating/removing pods is not an issue, but by policy services can only be created through an internal tool.
Is there any workaround for this?
Flink creates two services in the native Kubernetes integration:
The internal service, which is used for internal communication between the JobManager and the TaskManagers. It is only created when HA is not enabled, since the HA service is used for leader retrieval when HA is enabled.
The rest service, which is used for accessing the web UI or REST endpoint. If you have other ways to expose the REST endpoint, or you are using application mode, then it is also optional. However, it is currently always created, so I think you would need to change some code to work around this.
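For reference, you can inspect which services a native-Kubernetes Flink deployment actually created in your namespace. A small sketch with the official kubernetes Python client follows; the namespace and cluster-id are placeholders, and the naming convention (internal service named after the cluster-id, REST service with a -rest suffix) is an assumption you can confirm in your own cluster.

```python
# Sketch: list the services a native-Kubernetes Flink deployment created.
# Assumes the official 'kubernetes' Python client and a kubeconfig with
# permission to list services; namespace and cluster-id are placeholders.
from kubernetes import client, config

NAMESPACE = "flink"               # placeholder
CLUSTER_ID = "my-flink-cluster"   # the value passed as kubernetes.cluster-id

config.load_kube_config()
v1 = client.CoreV1Api()

for svc in v1.list_namespaced_service(NAMESPACE).items:
    name = svc.metadata.name
    if name.startswith(CLUSTER_ID):
        # Expect '<cluster-id>' (internal, only without HA) and '<cluster-id>-rest'.
        print(name, svc.spec.type, svc.spec.cluster_ip)
```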

How to collect SF reverse proxy logs on-prem

I have a 3-node on-prem cluster. Now I want to collect and analyze reverse proxy logs (and other Service Fabric system logs). I googled and found this article, which says:
Refer to Collect reverse proxy events to enable collecting events from
these channels in local and Azure Service Fabric clusters.
But that link describes how to enable, configure and collect reverse proxy logs for clusters in Azure, and I don't understand how to do it on-prem.
Please, help!
Service Fabric events are just ETW events. You have the option to use the built-in mechanism to collect and forward these events to a monitoring application like Windows Azure Diagnostics, or you can build your own.
If you decide to follow the approach in the documents, it will work on Azure or on-premises; the only caveat is that on-premises it will still send the logs to Azure, but otherwise it works the same way.
On-premises, another way is to build your own collector using EventFlow: you can configure EventFlow to collect the reverse proxy ETW events and then forward them to ELK or any other monitoring platform, as in the sketch below.
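As a starting point, an EventFlow pipeline is configured through its eventFlowConfig.json. The sketch below generates such a file from Python; the inputs/outputs shape follows the diagnostics-eventflow README, but the exact ETW provider name for the reverse proxy and the Elasticsearch settings are assumptions you should verify against the 'Collect reverse proxy events' doc and the EventFlow output documentation.

```python
# Sketch: write an eventFlowConfig.json that captures reverse proxy ETW
# events and forwards them to Elasticsearch. The provider name and the
# Elasticsearch settings are assumptions -- check the EventFlow README
# and the reverse proxy events documentation for the exact values.
import json

event_flow_config = {
    "inputs": [
        {
            "type": "ETW",
            "providers": [
                # Assumed provider name for the reverse proxy events.
                {"providerName": "Microsoft-ServiceFabric-ReverseProxy"}
            ],
        }
    ],
    "outputs": [
        {
            "type": "ElasticSearch",
            "serviceUri": "http://my-elk-host:9200",   # placeholder
            "indexNamePrefix": "sf-reverseproxy",
        }
    ],
    "schemaVersion": "2016-08-11",
}

with open("eventFlowConfig.json", "w") as f:
    json.dump(event_flow_config, f, indent=2)
```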

Is Service Fabric hybrid on premise and internet exposed solution currently possible?

I have a series of self-hostable Web API services that I need to make available both on-premises and on the internet. Currently they are only on-premises, but I was wondering: will Service Fabric allow me to connect an on-premises cluster and an Azure-hosted cluster to handle this hybrid scenario? Can I have a Service Fabric cluster with nodes both on-premises and in Azure?
I have it on my backlog to explore leveraging Service Fabric, but if this scenario were available we would bump up that priority.
Any details on implementing this, or even an alternative solution, would be greatly appreciated. We tried using Azure App Proxy as well for the internet exposure, but are having problems with the authentication headers going across, as we are not using Azure AD.
It's possible to create a cluster that spans multiple locations, as mentioned in this article.
However, you should realize that it's not a supported feature. If you make a mistake, losing one of the two locations will result in data loss.
I'd recommend using one cluster.

Application insights and service fabric?

I found this from several months back on Application Insights and Service Fabric, and I'm wondering if there is any new information.
I would really like to get CPU, memory, storage, and other metrics out of Service Fabric and the reliable actors. Having them presented in a user-friendly HUD like the one Application Insights provides would be awesome!
Thanks!
In the Azure portal, you can now create a resource called 'Service Fabric Analytics' to get a nice dashboard for your cluster. Configure your cluster as described here. It's OMS-based, not Application Insights, though.
The Service Fabric Solution helps identify and troubleshoot issues across your Service Fabric cluster, by providing visibility into how your Service Fabric virtual machines are performing and how your applications and micro-services are running. Available features include:
• Get insight into your Service Fabric framework
• Get insight into the performance of your Service Fabric applications and micro-services
• View events from your applications and micro-services
Data collected: Service Fabric Reliable Service events, Service Fabric Actor events, Service Fabric operational events, Event Tracing for Windows (ETW) events, and Windows event logs.
Requirements: This solution will only work if you have set up Azure Diagnostics on your Service Fabric VMs, and have configured OMS to collect data for your WAD tables.
With EventFlow (https://github.com/Azure/diagnostics-eventflow) in Service Fabric, you also have the option to send the diagnostics data to multiple data stores like WAD tables, OMS, Elasticsearch, and Application Insights.
Have a look at it. It is really straightforward, integrates easily with ETW events, and will serve your purpose; a sketch of such a configuration is shown below.
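To give an idea of what "multiple data stores" looks like in practice, the outputs section of an EventFlow config can simply list several sinks side by side. A hedged sketch follows; the EventSource provider name, instrumentation key and Elasticsearch URI are placeholders, and the exact setting names should be checked against the diagnostics-eventflow README.

```python
# Sketch: eventFlowConfig.json content that picks up your services' own
# EventSource events and fans them out to Application Insights plus
# Elasticsearch. All concrete values are placeholders.
import json

event_flow_config = {
    "inputs": [
        {"type": "EventSource", "sources": [{"providerName": "MyCompany-MyApp-MyService"}]}
    ],
    "outputs": [
        {"type": "ApplicationInsights", "instrumentationKey": "<instrumentation-key>"},
        {"type": "ElasticSearch", "serviceUri": "http://my-elk-host:9200", "indexNamePrefix": "sf-diagnostics"},
    ],
    "schemaVersion": "2016-08-11",
}

print(json.dumps(event_flow_config, indent=2))
```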