Custom developer environment variables for local Service Fabric clusters? - azure-service-fabric

When debugging Service Fabric applications locally, it would be nice if developers could have some custom private configuration settings. The application parameters and publish profiles allow per-environment configuration, but not per-developer configuration (unless I've missed something).
I need a way for Service Fabric applications running in local clusters to pick up local configuration values, so that each engineer's locally running applications can point to a private database, IoT Hub, external web service instance, and other private resources. Environment variables would work as overrides, but the local cluster doesn't pick them up from the host machine.
Is there some way to provide this type of local configuration?

Environment variables are mapped to something else by the SF dev cluster, just like the %temp% folder is. Since this would be static information, perhaps it should come from a static configuration file that is not checked into the repository.
There's another question with the answer you were looking for, but it still requires updating the SF manifest file.
Update: environment variables scoped to User won't be available, but System-scoped environment variables will be. VS needs to be restarted to pick up the values of System-scoped environment variables if they are updated.
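For example, a developer could set the override once as a System-scoped variable from an elevated PowerShell prompt (a sketch; the variable name and value are placeholders):

    # Machine scope so the local cluster can see it; restart VS afterwards.
    [Environment]::SetEnvironmentVariable(
        "MYAPP_DB_CONNECTION",
        "Server=localhost;Database=dev_alice;Integrated Security=true",
        [EnvironmentVariableTarget]::Machine)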

Temporarily disabling default services in Service Fabric using PowerShell

The concrete question
For those who just want the direct questions:
Is there a way to temporarily disable default services on a Service Fabric application type so that a new application can be installed (using PowerShell) without automatically installing any default services?
A proposed solution here is to remove the default services from the manifest and later restore them. I am able to write a PowerShell script that adjusts the application manifest accordingly, but how do I update the application type using PowerShell, assuming I have already altered the manifest?
Any solution that solves the contextual problem without requiring manual config meddling is acceptable; my proposed solution is probably not the only possible one. We explicitly want to avoid manual meddling.
If meddling were allowed, we could already just comment out the default services whenever we need to. We're specifically looking for a solution that requires no meddling, as this reduces bugs and debugging issues.
The context
I'm running into an issue with using the application manifest's default services during local development.
I am aware of the general "don't use default services" advice, and it is being followed. During the CI build, the default services are removed and will not be relied upon in any of our clusters in Azure. The only exception is local developer machines, which use default services to keep the developer F5 experience nice by starting all services when a debug session begins.
We have written specialized scripts that provision a new tenant (an SF application) with its own set of services (SF services). Not every tenant should get every service; we want to opt in to services, which is what the script already does (based on a mapping we manage elsewhere, which is not part of the current question, as the provisioning script exists and works).
However, when default services are enabled, every tenant already gets every service and the actual opt-in provisioning is useless. This is the issue we're trying to fix.
This same script works in our production cluster since there are no default services configured there. The question is solely focused on the local development environment.
Essentially, we're dealing with two scenarios during local development:
When debugging, we want the default services to be on, because they allow us to run all of our services by pressing F5 (without requiring any further action)
When testing our provisioning script, we don't want default services, because they get in the way of our selective provisioning behavior
I'm aware that commenting the default services out of the manifest solves the issue, but this requires developers to constantly toggle the contents of the manifest and reinstall the application type, which we'd like to avoid.
Ideally, we want to have the default services in the manifest (as is currently the case) but then have the provisioning script "disable" the default services for its own runtime (and restore the default services before exiting), as this gets us the desired behavior in both cases.
What is the solution that requires the least manual developer meddling to get the desired behavior in both scenarios?
I'm currently trying to implement it so that the provisioning script:
1. Copies the application manifest to a backup location
2. Removes the default services from the real manifest
3. Updates the application type using the new manifest (i.e. without default services)
4. Runs the provisioning logic
5. Restores the real manifest using the backup manifest from step 1
6. Updates the application type using the restored manifest (i.e. with default services)
It is specifically steps 3 and 6 that I do not know how to implement.
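For concreteness, here is a sketch of the intended sequence. The names (MyAppType, version 1.0.0, the paths) are placeholders, the image store connection string is the typical local dev cluster default, and steps 3 and 6 assume that a type version cannot be updated in place and therefore has to be unregistered and re-registered:

    # Placeholder names and paths; adjust for the real application.
    $pkgPath      = "C:\deploy\MyAppPkg"
    $manifestPath = Join-Path $pkgPath "ApplicationManifest.xml"
    $backupPath   = "$manifestPath.bak"
    $imageStore   = "file:C:\SfDevCluster\Data\ImageStoreShare"  # typical dev cluster default

    Connect-ServiceFabricCluster localhost:19000

    function Publish-AppType {
        Copy-ServiceFabricApplicationPackage -ApplicationPackagePath $pkgPath `
            -ImageStoreConnectionString $imageStore `
            -ApplicationPackagePathInImageStore "MyAppType"
        Register-ServiceFabricApplicationType -ApplicationPathInImageStore "MyAppType"
    }

    # 1. Back up the manifest.
    Copy-Item $manifestPath $backupPath -Force

    # 2. Strip the <DefaultServices> element (SF manifest XML namespace).
    [xml]$xml = Get-Content $manifestPath -Raw
    $ns   = @{ sf = "http://schemas.microsoft.com/2011/01/fabric" }
    $node = (Select-Xml -Xml $xml -XPath "//sf:DefaultServices" -Namespace $ns).Node
    if ($node) { [void]$node.ParentNode.RemoveChild($node) }
    $xml.Save((Resolve-Path $manifestPath).Path)

    # 3. Re-register the type without default services. Running applications of
    #    this type must be removed first, or Unregister will fail.
    Unregister-ServiceFabricApplicationType -ApplicationTypeName "MyAppType" `
        -ApplicationTypeVersion "1.0.0" -Force
    Publish-AppType

    # 4. Run the provisioning logic (elided).

    # 5. Restore the original manifest.
    Move-Item $backupPath $manifestPath -Force

    # 6. Re-register the type with default services restored.
    Unregister-ServiceFabricApplicationType -ApplicationTypeName "MyAppType" `
        -ApplicationTypeVersion "1.0.0" -Force
    Publish-AppType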
Consider having two sfproj projects in the solution. One with default services, one without.
Also look into using a start-service.ps1 script instead of default services. This way the two projects can use the same application manifest.
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-debugging-your-application#running-a-script-as-part-of-debugging
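The linked page describes having Visual Studio run a PowerShell script after the application is deployed for debugging. A minimal sketch of such a start-service.ps1, assuming a single stateless service with placeholder names (MyApp, MyStatelessServiceType):

    # start-service.ps1 - creates the services that default services would have
    # created; all names below are hypothetical placeholders.
    Connect-ServiceFabricCluster localhost:19000

    New-ServiceFabricService -Stateless `
        -ApplicationName "fabric:/MyApp" `
        -ServiceName "fabric:/MyApp/MyStatelessService" `
        -ServiceTypeName "MyStatelessServiceType" `
        -PartitionSchemeSingleton `
        -InstanceCount 1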

Making Registry changes during startup of stateless service in Service Fabric

I am using a library which searches the registry for a DLL. That DLL can be installed by running an MSI in the Service Fabric cluster, which sets the registry path.
However, I wanted to avoid installing the MSI in the cluster, so I provided the required DLLs in the package itself. During startup of the service, I create the registry entry pointing to the location of the DLL in my package. Everything is working as expected.
Is this approach ideal? Are we allowed to make changes to the registry? If not, how do we solve this problem? Any pointers are appreciated.
If the library has to use the registry, there is nothing you can do about it other than register the values. If you could change the DLL to retrieve this information from a configuration file instead, that would be the ideal solution.
You can do this in SF; the right way is to use the SetupEntryPoint option of the ServiceManifest for these management tasks, and from the ApplicationManifest you can set policies to specify which user these tasks should run as. It is described here in more detail.
The main issue with this approach on SF is that your application might move around the cluster, so you have to register the values on every node, and perhaps also remove them when the application is no longer running there to avoid leaving garbage in the registry.
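For illustration, the setup entry point could run a small script along these lines (a sketch; the key path, value name, and DLL location are hypothetical, and the entry point needs an elevated account via a RunAsPolicy in the ApplicationManifest):

    # RegistrySetup.ps1 - run by the SetupEntryPoint before the service starts.
    $keyPath = "HKLM:\SOFTWARE\MyVendor\MyLibrary"   # hypothetical key
    if (-not (Test-Path $keyPath)) {
        New-Item -Path $keyPath -Force | Out-Null
    }
    # Point the library at the DLL shipped inside the code package.
    $dllPath = Join-Path $PSScriptRoot "lib\MyNative.dll"
    Set-ItemProperty -Path $keyPath -Name "DllLocation" -Value $dllPath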

Overriding custom Service Fabric application parameters

We have now come to the point in our Service Fabric application development where we need to add a custom parameter that can be overridden at run time (as described in https://azure.microsoft.com/en-us/documentation/articles/service-fabric-manage-multiple-environment-app-configuration/). We're still in the development stage with no Azure production environment yet, so this question mainly concerns running the Service Fabric cluster from Visual Studio or through a PowerShell script in a VM.
I know that Deploy-FabricApplication.ps1 is run during debugging, and per the usage instructions in that file I can override custom parameters. However, I can't seem to figure out where to do that in Visual Studio so that when different developers start a debug session they can set the custom parameter value to whatever makes sense in their dev environment. Any ideas? We have a task to research how to better handle secrets storage, but we're not quite there yet.
You can add multiple publish profiles (optionally without checking them in), one for every developer if needed.
For secrets: you can encrypt settings and/or use Azure Key Vault combined with a Service Principal, similar to what is shown here.
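As a concrete illustration, the deployment script can also be invoked directly with an override, following the usage notes in the VS-generated script (a sketch; the parameter name, package path, and profile name are placeholders):

    # Override a custom application parameter when deploying from PowerShell.
    .\Scripts\Deploy-FabricApplication.ps1 `
        -ApplicationPackagePath ".\pkg\Debug" `
        -PublishProfileFile ".\PublishProfiles\Local.5Node.xml" `
        -ApplicationParameter @{ DatabaseConnectionString = "Server=localhost;..." } `
        -UseExistingClusterConnection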

What happens to a CF manifest after a successful deployment?

I am currently tinkering with Cloud Foundry. I understand the basic principles of the tool, but I can't find out what cf push actually does with a manifest file.
Does it read the file just once or is it stored as a static file with the application?
Also, is it possible to retrieve a manifest from a deployed app?
The cf push command reads the manifest file and uses the attribute values (instances, memory, disk, etc.) for the current deployment. The manifest helps to automate app deployment; it can also be used for deploying multiple applications at once. As stated here: https://docs.cloudfoundry.org/devguide/deploy-apps/manifest.html, when you deploy an application for the first time, Cloud Foundry reads the variables described in the environment block of the manifest and adds them to the environment of the container where the application is deployed. While your app is running, its environment can change depending on your setup. For example, an auto-scaler could have increased or decreased your number of instances, memory, or disk. If that is the case, when you stop and then restart the application, its environment variables persist.
The manifest file is read only when the cf push command is executed. As stated in the Cloud Foundry documentation (https://docs.cloudfoundry.org/devguide/deploy-apps/prepare-to-deploy.html#exclude), the manifest file is just read, not actually stored with the app, so it cannot be retrieved from a deployed app. However, if the purpose of accessing your manifest is to read your current environment settings, they can be accessed through the Cloud Foundry API's Get App Summary or Get detailed stats for a STARTED App endpoints: https://apidocs.cloudfoundry.org/234/
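To illustrate, assuming the cf CLI is installed and you are logged in (the app name is a placeholder):

    cf push                                              # reads manifest.yml once, at push time
    cf app my-app                                        # current instances, memory and disk
    cf curl "/v2/apps/$(cf app my-app --guid)/summary"   # Get App Summary endpoint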

Will copying DLLs to approot in a VM work after a VM restart?

We have WCF services deployed and running in the Azure cloud. We have changes to some DLLs and want to update them in the VM, but we don't want to go through the regular deployment/redeployment process.
We are thinking of manually copying the DLLs to the approot and siteroot folders. Will that work?
Will it pick up the new DLLs when the VM restarts at some point in the future?
To answer your questions:
Will manually copying DLLs to the approot and sitesroot folders work: Yes (make sure you do this on each instance if you have multiple instances running).
Will these DLLs survive a reboot: Yes (see Reboot Role Instance: "... Any data that is written to the local disk is persisted across reboots. ...").
But I would suggest only doing this while you're testing things out during development of your service.
Do NOT plan to use this for production deployments, because if something goes wrong with your instance, the Fabric Controller might decide to destroy that instance and deploy a new one (the same could apply for Windows Updates). This new instance would go back to the initial state of your deployment (the contents of the cspkg you deployed).
To make your development deployments even easier, you could also activate Web Deploy on your Web Role to deploy from Visual Studio: Enabling Web Deploy for Windows Azure Web Roles with Visual Studio (again, do not use this for real deployments; it is only for when you're testing things out).
Note: Web Deploy will not work with multiple instances.
No, and this is not the way to go. If you want to be more dynamic, you should take the approach of the Windows Azure Accelerator for Web Roles. Although it is no longer a supported or actively developed project, it will give you a good foundation for dynamically loading assemblies (in this case, entire sites) from Blob storage.