Making Registry changes during startup of a stateless service in Service Fabric

I am using a library which searches the registry for a DLL. That DLL can be installed by running an MSI on the Service Fabric cluster, which sets this path.
But I wanted to avoid installing the MSI on the cluster, so I provided the required DLLs in the package itself. During startup of the service, I create the registry entry and point it to the location of the DLL in my package. Everything is working as expected.
Is this approach ideal? Are we allowed to make changes to the registry? If not, how do we solve this problem? Any pointers are appreciated.

If the library has to use the registry, there is nothing you can do about it other than register the values. If you could change the DLL to retrieve this information from a configuration file instead, that would be the ideal solution.
You can do it in SF. The right way is to use the SetupEntryPoint option of the ServiceManifest for these management tasks, and from the ApplicationManifest you can set policies to specify which user these tasks should run as. It is described here in more detail.
The main issue you have on SF with this approach is that your application might move around the cluster, so you have to register the entry on every node, and maybe also remove it when the application is no longer running there to avoid leaving garbage in the registry.
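As a minimal sketch, such a setup script could be invoked from the SetupEntryPoint (for example via powershell.exe -ExecutionPolicy Bypass -File RegistrySetup.ps1). The registry key and value names below are assumptions; use whatever key your library actually reads:
# RegistrySetup.ps1 - illustrative only; key and value names are assumptions
# Runs elevated via the RunAs policy declared in the ApplicationManifest.
$dllDir  = Join-Path $PSScriptRoot "libs"          # DLLs shipped inside the code package
$keyPath = "HKLM:\SOFTWARE\MyVendor\MyLibrary"     # hypothetical key the library reads
if (-not (Test-Path $keyPath)) {
    New-Item -Path $keyPath -Force | Out-Null      # create the key if it does not exist
}
Set-ItemProperty -Path $keyPath -Name "DllPath" -Value $dllDir   # point the library at the packaged DLL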

Temporarily disabling default services in Service Fabric using PowerShell

The concrete question
For those who just want the direct questions:
Is there a way to temporarily disable default services on a Service Fabric application type so that a new application can be installed (using PowerShell) without automatically installing any default services?
A proposed solution here is to remove the default services from the manifest and later restore them. I am able to write a PowerShell script to adjust the application manifest accordingly, but how do I update the application type using PowerShell - assuming I already have altered the manifest?
Any solution that solves the contextual problem without requiring manual config meddling is acceptable - my proposed solution is probably not the only possible solution. We do explicitly want to avoid manual meddling.
When allowing meddling, we are already able to just comment out the default services when we need to. We're specifically looking for a solution that requires no meddling as this reduces bugs and debugging issues.
The context
I'm running into an issue with using the application manifest's default services during local development.
I am aware of the general "don't use default services" advice, and it is being followed. During CI build, the default services are removed and will not be relied upon for any of our clusters in Azure. The only exception here is local developer machines, which use default services to keep the developer F5 experience nicer by enabling all services when starting a debug session.
We have written specialized scripts that provision a new tenant (SF application) with their own set of services (SF service). Not every tenant should get every service; we want to opt in to services, which is what the script already does (based on a mapping that we manage elsewhere, which is not part of the current question as the provisioning script exists and works).
However, when default services are enabled, every tenant already gets every service and the actual opt-in provisioning is useless. This is the issue we're trying to fix.
This same script works in our production cluster since there are no default services configured there. The question is solely focused on the local development environment.
Essentially, we're dealing with two scenarios during local development:
When debugging, we want the default services to be on because they allow us to run all of our services by pressing F5 (without requiring any further action)
When testing our provisioning script, we don't want default services because they get in the way of our selective provisioning behavior
I'm aware that commenting the default services out of the manifest solves the issue, but this requires developers constantly toggling the content of the manifest and reinstalling the application type, which we'd like to avoid.
Ideally, we want to have the default services in the manifest (as is currently the case) but then have the provisioning script "disable" the default services for its own runtime (and restore the default services before exiting), as this gets us the desired behavior in both cases.
What is the solution that requires the least manual developer meddling to get the desired behavior in both scenarios?
I'm currently trying to implement it so that the provisioning script:
Copies the application manifest to a backup location
Removes the default services from the real manifest
Updates the application type using the new manifest (i.e. without default services)
Runs the provisioning logic
Restores the real manifest using the backup manifest from step 1
Updates the application type using the restored manifest (i.e. with default services)
It is specifically steps 3 and 6 that I do not know how to implement.
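As a rough sketch, steps 1, 2 and 5 could look like the following (the manifest path is a placeholder, and steps 3 and 6 remain the open question):
# Hypothetical paths - adjust to your package layout
$manifest = "C:\repo\MyApp\ApplicationPackageRoot\ApplicationManifest.xml"
$backup   = "$manifest.bak"
Copy-Item $manifest $backup                                        # step 1: back up the manifest
[xml]$xml = Get-Content $manifest -Raw                             # step 2: strip the DefaultServices element
$defaults = $xml.SelectSingleNode("//*[local-name()='DefaultServices']")
if ($defaults) { $defaults.ParentNode.RemoveChild($defaults) | Out-Null }
$xml.Save($manifest)
# ... step 3: update the application type (the open question) ...
# ... step 4: run the provisioning logic ...
Copy-Item $backup $manifest -Force                                 # step 5: restore the original manifest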
Consider having two sfproj projects in the solution. One with default services, one without.
Also look into using a start-service.ps1 script instead of default services. This way the two projects can use the same application manifest.
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-debugging-your-application#running-a-script-as-part-of-debugging
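A minimal sketch of what such a start-service.ps1 might contain, assuming hypothetical application and service type names (fabric:/MyApp, WebServiceType, WorkerServiceType):
# start-service.ps1 - illustrative only; application and service type names are assumptions
Connect-ServiceFabricCluster    # connect to the local dev cluster
New-ServiceFabricService -ApplicationName "fabric:/MyApp" -ServiceName "fabric:/MyApp/Web" `
    -ServiceTypeName "WebServiceType" -Stateless -PartitionSchemeSingleton -InstanceCount 1
New-ServiceFabricService -ApplicationName "fabric:/MyApp" -ServiceName "fabric:/MyApp/Worker" `
    -ServiceTypeName "WorkerServiceType" -Stateless -PartitionSchemeSingleton -InstanceCount 1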

Azure Service Fabric - Deleting old versions of applications to reclaim disk space

We are running out of space on our D:\Temporary Storage drive on our Service Fabric cluster VMs (5 nodes). I have conversed back and forth with MS support about what is safe to delete from this drive, and the answers I'm getting are ambiguous at best.
I've noted that we have many older versions of our applications and services on the VMs that we don't need anymore. Getting rid of these would definitely help free up space. I've asked MS support if it's safe to delete the old versions of the applications and they said yes, but then directed me to these links:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-deploy-remove-applications#remove-an-application-package-from-the-image-store
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-deploy-remove-applications#remove-an-application
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-deploy-remove-applications#unregister-an-application-type
So the three sections we have are:
Remove an application package from the image store
Remove an application
Unregister an application type
These all deal with PowerShell scripts that need to be run, and I am very much a novice with PowerShell. I have direct RDP access to the VMs and the ability to simply delete the files via Windows File Explorer. Is it OK to do it this way, or do I need to go the PowerShell route for deleting and unregistering the application? At least for #1, removing the application package from the image store, there shouldn't be any issue with me just deleting that from Windows File Explorer on the VMs, correct?
EDIT: this is not a duplicate of Run out of storage on Service Fabric scale set:
I am asking about manually clearing space on the Service Fabric cluster VMs - the above thread is about setting up your application deployment to auto-delete old versions of applications. These are not duplicates.
You shouldn't delete files manually from within the VM; SF should handle it, and you may cause issues.
The right way to remove them is as the documentation says, using PowerShell, for example:
Remove-ServiceFabricApplicationPackage -ApplicationPackagePathInImageStore MyApplicationV1
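The other two documented steps (removing a running application and unregistering an application type) use the cmdlets below; the application and type names here are placeholders:
# Assumes an existing cluster connection (Connect-ServiceFabricCluster)
# Remove the running application instance that still uses the old version
Remove-ServiceFabricApplication -ApplicationName "fabric:/MyApplication"
# Unregister the application type version that is no longer needed
Unregister-ServiceFabricApplicationType -ApplicationTypeName "MyApplicationType" -ApplicationTypeVersion "1.0.0"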
You can also remove it manually via Service Fabric Explorer:
One option unregisters all the Application Package versions registered in the cluster (if none are in use); the other deletes a specific version (if it is not in use).
Keep in mind that to remove a package version you must first remove any running application that is using that version.
The other option is to delete the old version when you deploy a new one; I will link you to this other SO question: Run out of storage on Service Fabric scale set

Overriding custom service fabric application parameters

We have now come to the point in our Service Fabric application development where we need to add a custom parameter that can be overridden at run time (as described in https://azure.microsoft.com/en-us/documentation/articles/service-fabric-manage-multiple-environment-app-configuration/). We're still in the development stage... no Azure production environment yet, so this question mainly concerns running the Service Fabric cluster from Visual Studio or through a PowerShell script in a VM. I know that Deploy-FabricApplication.ps1 is run during debug, and per the usage instructions in that file I can override custom parameters. However, I can't seem to figure out where I do that in Visual Studio so that when different developers start a debug session they can set the custom parameter value to whatever makes sense in their dev environment. Any ideas? We have a task to research how to better handle secrets storage, but we're not quite there yet.
You can add multiple publish profiles (optionally without checking them in). One for every developer if needed.
For secrets: you can encrypt settings and/or use Azure Key Vault combined with a Service Principal, similar to what is shown here.
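As an illustrative sketch (the profile and parameter names below are assumptions), each developer could keep their own publish profile and pass their own overrides when invoking the SDK-generated deploy script from the application project folder:
# Run from the sfproj folder; profile and parameter names are illustrative
.\Scripts\Deploy-FabricApplication.ps1 `
    -ApplicationPackagePath ".\pkg\Debug" `
    -PublishProfileFile ".\PublishProfiles\Local.1Node.Alice.xml" `
    -ApplicationParameter @{ MyCustomSetting = 'value-for-this-dev-machine' }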

Will copying DLLs to approot in a VM work on VM restart?

We have WCF services deployed in the Azure cloud and running. We have changes to some DLLs and want to update them on the VM, but don't want to go through the regular deployment/redeployment process.
We are thinking of manually copying the DLLs to the approot and siteroot folders. Will it work?
Will it pick up the new DLLs when the VM restarts at any time in the future?
To answer your questions:
Will manually copying DLLs to the approot and sitesroot folders work: Yes (make sure you do this on each instance if you have multiple instances running)
Will these dlls survive a reboot: Yes (see Reboot Role Instance: ... Any data that is written to the local disk is persisted across reboots. ...)
But I would suggest only doing this if you're planning to test some things while developing your service.
Do NOT plan to use this for production deployments, because if something goes wrong with your instance, the Fabric Controller might decide to destroy that instance and deploy a new one (same could apply for Windows Updates). This new instance would go back to the initial state of your deployment (the content of the cspkg you deployed).
To make your development deployments even easier you could also activate WebDeploy on your Web Role to deploy from Visual Studio: Enabling Web Deploy for Windows Azure Web Roles with Visual Studio (again, do not use this for real deployments, this is only for when you're testing out some things).
Note: Web Deploy will not work with multiple instances.
No.
And this is not the way to go. If you want to be more dynamic, you should take the approach of the Windows Azure Accelerator for Web Roles. Although the project is no longer supported or developed, it will give you a good foundation for dynamically loading assemblies (in this case entire sites) from Blob storage.

Amazon EC2 Auto Scaling in production

I have realized that I have to make an image from the EBS volume every time I change my code,
and then update the Auto Scaling configuration every time (this is really bad).
I have heard that some people load their newest code from GitHub or something similar,
so that the server gets the newest code automatically without making a new image every single time.
I already have a private GitHub repository.
Is this the only way to solve Auto Scaling code management?
If so, how can I configure this to work?
Use user-data scripts, which work on a lot of public images including Amazon's. You could have them download Puppet manifests/templates/files and run them directly. Search for masterless Puppet.
Yes, you can configure your AMI so that the instance loads the latest software and configuration on first boot before it is put into service in the auto scaling group.
How to set up a startup script may depend on the specific OS and version you are running.
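As an illustrative sketch only (assuming a Windows instance whose user data runs a <powershell> block, with Git and credentials for the private repository already baked into the AMI; the repository URL and path are placeholders), a first-boot script could pull the latest code before the instance enters service:
<powershell>
# Hypothetical first-boot script: fetch the latest code before the instance takes traffic
$deployDir = "C:\app"
if (Test-Path $deployDir) {
    git -C $deployDir pull                                     # repo already present in the AMI
} else {
    git clone https://github.com/myorg/myapp.git $deployDir    # placeholder repository URL
}
# ... install dependencies, (re)start the service, etc. ...
</powershell>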