I have an infrastructure pipeline in Azure DevOps that sets up a Service Fabric cluster. The cluster's key vault and certificate are generated by a custom PowerShell script I wrote, while the cluster and its resources are created by an ARM template. Both are executed as tasks in the pipeline.
However, I am having problems accessing the generated cluster via Service Fabric Explorer at the web address: https://myclustername.mylocation.cloudapp.azure.com:19080/Explorer
I also have problems accessing the nodes using RDP.
I tried the following:
Restarted the virtual machine scale set.
Checked that the RDP ports (3389-3391) are open on the load balancer.
I don't get this problem if I set up the Service Fabric resource group with Visual Studio.
I checked the generated certificate and made sure its subject matches the fabric cluster URL. Does anyone know what might be causing this?
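For context, the certificate is created roughly along these lines (a simplified sketch using the Az PowerShell module; the vault name, certificate name, and cluster FQDN are placeholders, not my actual values):

```powershell
# Cluster FQDN that the certificate subject must match.
$clusterFqdn = 'myclustername.mylocation.cloudapp.azure.com'

# Self-signed certificate policy whose subject matches the cluster address.
$policy = New-AzKeyVaultCertificatePolicy `
    -SubjectName "CN=$clusterFqdn" `
    -IssuerName Self `
    -SecretContentType 'application/x-pkcs12' `
    -ValidityInMonths 12

# Create the certificate in the key vault that the ARM template references.
Add-AzKeyVaultCertificate -VaultName 'my-sf-keyvault' -Name 'sfclustercert' -CertificatePolicy $policy
```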
I think I used the ARM template that came with Visual Studio 2017, which is old (from 2016). I updated the template to the current one and the issue is resolved:
https://github.com/Azure/azure-quickstart-templates/tree/master/service-fabric-secure-cluster-5-node-1-nodetype
I still have some problems adding a custom internal load balancer, but all the other resources from the ARM template are working.
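For anyone doing the same, switching templates is just a regular resource group deployment (a sketch using the Az module; the resource group and file names are placeholders, and azuredeploy.json is assumed to have been downloaded from the quickstart repository linked above):

```powershell
# Deploy the updated quickstart template into the cluster's resource group.
New-AzResourceGroupDeployment `
    -ResourceGroupName 'my-sf-rg' `
    -TemplateFile '.\azuredeploy.json' `
    -TemplateParameterFile '.\azuredeploy.parameters.json'
```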
Related
I installed Service Fabric locally and set up a five-node cluster. I followed this tutorial and downloaded the source code from here. It worked fine. Then I created a .NET Core API application (Docker enabled) with Visual Studio 2019. I published the image on Docker Hub (I ran the image locally to confirm it works fine). Then I created a Service Fabric project through Visual Studio and added the container to it. I published it to the local five-node cluster, where the first application is already running. It builds and publishes successfully. But in Service Fabric Explorer, it is showing me this error.
I am not yet clear on how Service Fabric handles deployment.
Since the applications are created in a single VS solution, let me frame the question in terms of file types for better understanding.
In a single Visual Studio solution, there are
a single .sln
a single .sfproj
multiple .csproj(s)
As I see these files, multiple services (.csproj files) are bound to a single Service Fabric application (.sfproj file), which sits under a single solution file (.sln).
Can I individually deploy a .csproj project to the Service Fabric cluster, or are they now bound to the .sfproj so that I have to deploy all the services (each created as a .csproj and bound to the .sfproj) together?
The answer to your question is yes and no at the same time. Let me explain it in detail.
Can I individually deploy a .csproj project to the Service Fabric cluster
The answer is no, you can't deploy a single service - in Service Fabric terms, the minimal unit of deployment is the application (the .sfproj one). So no matter what changes you have, you still need to deploy the application.
But as we all understand, performing a full deployment of all application services is very heavy: it consumes a lot of time and causes a lot of disturbance in the cluster. To avoid this massive update, all Service Fabric components have their own versions (take a closer look at ServiceManifest.xml and ApplicationManifest.xml). So each time the application is deployed to the cluster, Service Fabric goes through all the services included in the application and updates only the components that have changed (i.e. have a different version).
This approach allows you to perform updates of very fine granularity, e.g. you can update only the <Config /> package of a single service.
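To illustrate (a minimal sketch, not taken from the question; the cluster endpoint, package path, image store path, and version numbers are placeholders), such an upgrade is pushed with the usual copy/register/upgrade cmdlets, and Service Fabric only touches the components whose versions changed:

```powershell
# Connect to the cluster (assumes a local, unsecured dev cluster).
Connect-ServiceFabricCluster -ConnectionEndpoint 'localhost:19000'

# Copy the application package (a diff package is enough) to the image store.
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath '.\pkg\Release' `
    -ImageStoreConnectionString 'fabric:ImageStore' `
    -ApplicationPackagePathInImageStore 'MyApp_v1.1'

# Register the new application type version.
Register-ServiceFabricApplicationType -ApplicationPathInImageStore 'MyApp_v1.1'

# Upgrade the running application; only components with changed versions
# (e.g. a single service's Config package) are actually updated.
Start-ServiceFabricApplicationUpgrade -ApplicationName 'fabric:/MyApp' `
    -ApplicationTypeVersion '1.1.0' -Monitored -FailureAction Rollback
```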
I am using a library that searches the registry for a DLL. That DLL can be installed by running an MSI in the Service Fabric cluster, which sets this registry path.
But I wanted to avoid installing the MSI in the cluster, so I provide the required DLLs in the package itself. During start-up of the service, I create the registry entry and point it at the location of the DLL in my package. Everything is working as expected.
Is this approach ideal? Are we allowed to make changes to the registry? If not, how do we solve this problem? Any pointers are appreciated.
If the library has to use the registry, there is nothing you can do about it other than register the values. The ideal solution would be to change the DLL so it retrieves this information from a configuration file, if you can.
You can do it in SF; the right way is to use the SetupEntryPoint option of the ServiceManifest for these management tasks, and from the ApplicationManifest you can set policies to specify which user the setup entry point should run as. It is described here in more detail.
The main issue with this approach on SF is that your application might move around the cluster, so you have to register the values on every node, and maybe also remove them when the application is no longer running there, to avoid leaving garbage in the registry.
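For illustration only (a sketch; the registry key, value name, and folder layout are hypothetical and depend on what the library expects), the setup entry point could run something like this, e.g. via a small .bat that calls powershell.exe:

```powershell
# Setup.ps1 - run from the SetupEntryPoint before the service host starts.
# Writes the DLL location that the library looks up in the registry.

# Hypothetical registry key and value name; use whatever the library expects.
$keyPath   = 'HKLM:\SOFTWARE\MyVendor\MyLibrary'
$valueName = 'DllPath'

# The DLLs are shipped inside the service package; here they are assumed to
# sit in a 'lib' folder next to this script.
$dllFolder = Join-Path $PSScriptRoot 'lib'

# Create the key if it does not exist yet, then set (or update) the value.
if (-not (Test-Path $keyPath)) {
    New-Item -Path $keyPath -Force | Out-Null
}
Set-ItemProperty -Path $keyPath -Name $valueName -Value $dllFolder
```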
We have now come to the point in our Service Fabric application development where we need to add a custom parameter that can be overridden at run time (as described in https://azure.microsoft.com/en-us/documentation/articles/service-fabric-manage-multiple-environment-app-configuration/). We're still in the development stage... no Azure production environment yet, so this question mainly concerns running the Service Fabric cluster from Visual Studio or through a PowerShell script in a VM. I know that Deploy-FabricApplication.ps1 is run during debug, and per the usage instructions in that file I can override custom parameters. However, I can't seem to figure out where to do that in Visual Studio so that when different developers start a debug session they can set the custom parameter value to whatever makes sense in their dev environment. Any ideas? We have a task to research how to better handle secrets storage, but we're not quite there yet.
You can add multiple publish profiles (optionally without checking them in), one for every developer if needed.
For secrets: you can encrypt settings and/or use Azure Key Vault combined with a Service Principal, similar to what is shown here.
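For example (a sketch based on the usage notes inside the generated Deploy-FabricApplication.ps1; the paths, profile name, and parameter name are placeholders), each developer can point the script at their own profile or override a parameter on the command line:

```powershell
# Deploy using a developer-specific publish profile.
.\Scripts\Deploy-FabricApplication.ps1 `
    -ApplicationPackagePath '.\MyApp\pkg\Debug' `
    -PublishProfileFile '.\MyApp\PublishProfiles\Local.Alice.xml'

# Or override an individual application parameter at the command line.
.\Scripts\Deploy-FabricApplication.ps1 `
    -ApplicationPackagePath '.\MyApp\pkg\Debug' `
    -ApplicationParameter @{ MyCustomSetting = 'value-for-this-dev-machine' }
```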
I am reading the Service Fabric docs and was also interested in reviewing how to set up a cluster with multiple VMs, but so far I can only guess based on devclustersetup.ps1 and its XML file; I didn't see any docs that explain the various configurations and/or APIs.
What I need is how to set up a simple cluster, how to add/remove nodes, monitoring, setting resource constraints per node, etc., so I can set up a sample cluster and test a few things.
So far I've done the following:
installed the VC runtime (as the Web PI installer fails without it)
installed Service Fabric and the SDK (got the installer out of the Web PI installers)
tried to change the sample XML to add multiple hosts, but then ran into the IPv6-only issue in my setup (see my other question), so it didn't work out
Thanks
We are working on NanoServer support for Service Fabric. (I am unable to respond to the comment asking about it because I apparently don't have enough points).
Setting up a multi-machine cluster is not supported at this time so you won't find any documentation explaining how to do it.
There will be a public preview of the service in Azure later this year, and the platform will be available as part of Windows Server 2016 for on-premises deployments. As those options become available, there will be plenty of guidance explaining how to set up and manage your cluster.
UPDATE: 2016-03-31
Standalone installation on-prem or in another cloud is now available in public preview for Windows Server 2012 R2 and up.
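For reference, creating a standalone cluster with that package looks roughly like this (a sketch; the config file is one of the samples shipped with the package and must be edited with your machine names and node types first):

```powershell
# Run from the extracted standalone Service Fabric package on one of the machines.

# Validate the cluster configuration (machines, node types, fault/upgrade domains).
.\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.MultiMachine.json

# Create the cluster across all machines listed in the configuration file.
.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.MultiMachine.json -AcceptEULA
```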