MSI (Managed Service Identity) for Service Fabric Cluster

Looking at the new functionality called MSI (Managed Service Identity):
Is it possible to use MSI inside VM scale sets, or even better, inside an Azure Service Fabric cluster? I guess it might be possible using the resource manager, but I just want to hear a confirming answer.
As I want to access a Key Vault, it would be very nice to be able to use MSI from a microservice running inside a Service Fabric cluster.

This is a very old question, but the answer is now "yes". Via either an ARM template or the portal, you can assign a system-assigned identity or one or more user-assigned identities to your VMSS.
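As a hedged sketch using the Az PowerShell module (the resource group, scale set and vault names below are placeholders):

    # Enable a system-assigned managed identity on the scale set behind the cluster's node type.
    Update-AzVmss -ResourceGroupName "my-sf-rg" -VMScaleSetName "nt1vm" -IdentityType SystemAssigned
    # Grant that identity read access to the Key Vault the microservice needs.
    $vmss = Get-AzVmss -ResourceGroupName "my-sf-rg" -VMScaleSetName "nt1vm"
    Set-AzKeyVaultAccessPolicy -VaultName "my-keyvault" -ObjectId $vmss.Identity.PrincipalId -PermissionsToSecrets get,list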

Related

Service Fabric PowerShell from Azure DevOps

I am able to successfully deploy Service Fabric services to my local cluster from Azure DevOps using the ServiceFabricDeploy task with a configured service connection. What I need is the ability to run some arbitrary PowerShell scripts against the fabric in order to perform other maintenance tasks that I want to automate via CI/CD.
How can I get a normal inline PowerShell task connected to my local fabric so I can interact with the cluster?
You can use the SF PowerShell module for that.
First connect to the cluster.
Next, manage the cluster using the provided functions.
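For example, a minimal sketch (assuming the Service Fabric SDK's PowerShell module is available on the agent and the cluster listens on the default local client endpoint):

    # Connect to the local development cluster on its default client endpoint.
    Connect-ServiceFabricCluster -ConnectionEndpoint "localhost:19000"
    # Then use the module's cmdlets for maintenance tasks, e.g. list the deployed applications.
    Get-ServiceFabricApplication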
Under the hood, these commands use the Service Fabric REST API, so you can't just run arbitrary code.
If you want to do that, you'll need to use SSH or something like PowerShell remoting.
More info on how to set that up in the load balancer is here.

Service Fabric Single Node SingleNodeClusterUpdateNotAllowed

I've got a single-node Service Fabric instance hosted in Azure, just for testing purposes. When I try to upgrade the Service Fabric version from 6.5 to 7.0, I get the message:
SingleNodeClusterUpdateNotAllowed
Is there anything I can do to allow this?
The short answer is no.
The reason for this is that in order to upgrade, Service Fabric has to take down a node, update it, and restart it. This is repeated for all nodes until the update is complete. In a single-node cluster this would mean taking the cluster offline completely, which is not allowed by Service Fabric's rules (at the very least one node must remain available).
A single node 'cluster' therefore cannot update the platform or applications running on it.
The only way you can update a single-node cluster is to delete and reinstall it. The same goes for applications (delete the application type before deploying an updated version). Depending on where you have the software deployed (development box, a server, Azure), I would recommend scripting as much as possible; this will allow you to easily delete and redeploy. I am using a combination of an Azure (ARM) template, a DevOps pipeline and a script to initialise and load some default data into the application.
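As a rough sketch of the "delete before redeploy" part, assuming the Service Fabric PowerShell module and placeholder application and type names:

    # Connect to the single-node cluster, remove the running application, then unregister its type.
    Connect-ServiceFabricCluster -ConnectionEndpoint "localhost:19000"
    Remove-ServiceFabricApplication -ApplicationName "fabric:/MyApp" -Force
    Unregister-ServiceFabricApplicationType -ApplicationTypeName "MyAppType" -ApplicationTypeVersion "1.0.0" -Force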

Team Services deploy to on-premise Service Fabric without exposed endpoint

We have a Service Fabric cluster on-premise and would like to deploy code to it from Visual Studio Team Services. We use this cluster for testing and it does not have an endpoint exposed to the outside world. It is only accessible internally from inside our network.
From inside Team Services the normal way to deploy a Service Fabric application is with the "Service Fabric Application Deployment" task. This task requires a "Cluster Connection" parameter, i.e. a link to a Service Fabric endpoint that Team Services can access. On this cluster I can't expose an endpoint to the outside world, so this method won't work.
Is there a good, accepted way of accomplishing this? I'm considering having an agent on one of the Service Fabric nodes that can run a PowerShell script as part of the build process. If I could retrieve the artifacts from Team Services with this script, I believe the rest of the release would be relatively straightforward.
Is this a good line of thought, or is there a more straightforward way to deploy to Service Fabric from Team Services without exposing an endpoint?
We have the same setup and use VSTS. We set up an on-premises agent pool where the agent is within our network. The agent is hooked up to VSTS so builds and releases can be triggered from VSTS. The agent has access to the artifacts on VSTS and can download them for deployment. The only difference might be that we set up a Service Fabric endpoint instead of using PowerShell.
It's a very simple setup and works well for us. Good luck.

Azure Service Fabric: purpose of the different Services

What is the purpose of the different Services in Azure Service Fabric? Please give examples and a step-by-step process for deployment.
I think you need to take a look at the Service Fabric documentation.
Supported services
- Reliable Services, Reliable Actors, Guest Executables
Deployment
- PowerShell, Visual Studio
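As a hedged illustration of the PowerShell route (the paths, names and versions below are placeholders):

    # Connect, upload the application package to the image store, register the type, and create an instance.
    Connect-ServiceFabricCluster -ConnectionEndpoint "localhost:19000"
    Copy-ServiceFabricApplicationPackage -ApplicationPackagePath ".\MyAppPkg" -ApplicationPackagePathInImageStore "MyAppPkg" -ImageStoreConnectionString "fabric:ImageStore"
    Register-ServiceFabricApplicationType -ApplicationPathInImageStore "MyAppPkg"
    New-ServiceFabricApplication -ApplicationName "fabric:/MyApp" -ApplicationTypeName "MyAppType" -ApplicationTypeVersion "1.0.0"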

Azure vs On-premise Service Fabric

I'm having a bit of trouble finding the differences between the Azure and on-premises versions of Service Fabric. I did read somewhere that the on-premises version does not support auto-scaling, but that is easy to understand.
However, does the on-premises version offer any operational capabilities, such as a resource manager, visual management of the cluster, etc.?
The core Service Fabric platform is simply a runtime that gets installed on a set of virtual or physical machines. Once you tell those machines how to find each other, they form a cluster and provide a set of management capabilities that includes the Service Fabric Explorer UI, a REST API, and a TCP endpoint for PowerShell. All of that is common whether you're running on Azure, on-premises, or in another public cloud.
What's different in those environments is everything that lives outside of the machines that form the cluster. That includes:
Autoscaling
While Service Fabric can easily handle new machines being added and removed from the cluster, it has no knowledge of how that process actually works, so some external agent needs to handle it. In Azure, that's a virtual machine scale set.
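For instance, a hedged sketch with the Az PowerShell module (resource group and scale set names are placeholders): scaling out the scale set behind a node type adds machines, which Service Fabric then admits as new nodes.

    # Increase the instance count of the scale set that backs a node type.
    Update-AzVmss -ResourceGroupName "my-sf-rg" -VMScaleSetName "nt1vm" -SkuCapacity 6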
Failure domain/Upgrade domain management
Good management of failure and upgrade domains is critical to ensuring availability and data reliability in Service Fabric. In Azure, clusters are automatically spread across FDs/UDs and maintenance is coordinated to avoid impact to your clusters. In other environments, this is your responsibility.
Cluster setup and management
In Azure, a Service Fabric cluster is a 1st class resource that can be created and managed through the Azure Resource Manager and the Azure portal. Outside of Azure, you must do that management using the cluster configuration JSON template.
Incidentally, just so there's no confusion since there are overloaded terms... you can't currently use the Azure Resource Manager (ARM) with Service Fabric outside of the Azure environment. However, Service Fabric's cluster resource manager is part of the core runtime and is available everywhere.
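As a rough sketch of that standalone flow, using the scripts shipped in the standalone package (the config file name here is one of the sample templates included with it):

    # From the extracted standalone package, validate the JSON config and then create the cluster.
    .\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.MultiMachine.json
    .\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.MultiMachine.json -AcceptEULA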
Diagnostics pipeline
By default, Service Fabric logging (on Windows) is done via ETW. However, without any component to pick up those events from the individual machines in the cluster and ship them somewhere for easy aggregation and inspection, the logs aren't very useful. In Azure, that process is handled by the Windows Azure Diagnostics (WAD) agent, whereas in other environments you are responsible for setting up that pipeline.
You don't get to use the resource manager on premises. You can access the Service Fabric Explorer at port 19080.
https://azure.microsoft.com/en-us/documentation/articles/service-fabric-deploy-anywhere/
https://azure.microsoft.com/en-us/documentation/articles/service-fabric-visualizing-your-cluster/
PowerShell management & deployment will also work.