Temporarily disabling default services in Service Fabric using PowerShell

The concrete question
For those who just want the direct question:
Is there a way to temporarily disable default services on a Service Fabric application type, so that a new application can be installed (using PowerShell) without automatically installing any default services?
A proposed solution is to remove the default services from the manifest and restore them later. I am able to write a PowerShell script that adjusts the application manifest accordingly, but how do I update the application type using PowerShell, assuming I have already altered the manifest?
Any solution that solves the contextual problem without requiring manual config meddling is acceptable; my proposed solution is probably not the only possible one. We explicitly want to avoid manual meddling.
If meddling were allowed, we could already just comment out the default services whenever we need to. We're specifically looking for a solution that requires no meddling, as this reduces bugs and debugging issues.
The context
I'm running into an issue with using the application manifest's default services during local development.
I am aware of the general "don't use default services" advice, and it is being followed. During CI build, the default services are removed and will not be relied upon for any of our clusters in Azure. The only exception here is local developer machines, which use default services to keep the developer F5 experience nicer by enabling all services when starting a debug session.
We have written specialized scripts that provision a new tenant (an SF application) with its own set of services (SF services). Not every tenant should get every service; we want to opt in to services, which is what the script already does (based on a mapping that we manage elsewhere and that is not part of the current question, as the provisioning script exists and works).
However, when default services are enabled, every tenant already gets every service, and the actual opt-in provisioning is useless. This is the issue we're trying to fix.
The same script works in our production cluster, since no default services are configured there. The question focuses solely on the local development environment.
Essentially, we're dealing with two scenarios during local development:
When debugging, we want the default services to be on because it allows us to run all of our services by pressing F5 (and not requiring any further action)
When testing our provisioning script, we don't want default services, because they get in the way of our selective provisioning behavior
I'm aware that commenting the default services out of the manifest solves the issue, but this requires developers to constantly toggle the content of the manifest and reinstall the application type, which we'd like to avoid.
Ideally, we want to have the default services in the manifest (as is currently the case) but then have the provisioning script "disable" the default services for its own runtime (and restore the default services before exiting), as this gets us the desired behavior in both cases.
What is the solution that requires the least manual developer meddling to get the desired behavior in both scenarios?
I'm currently trying to implement it so that the provisioning script:
1. Copies the application manifest to a backup location
2. Removes the default services from the real manifest
3. Updates the application type using the new manifest (i.e. without default services)
4. Runs the provisioning logic
5. Restores the real manifest using the backup from step 1
6. Updates the application type using the restored manifest (i.e. with default services)
It is specifically steps 3 and 6 that I do not know how to implement.
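The flow above can be sketched in PowerShell. The Service Fabric cmdlets below (Unregister-ServiceFabricApplicationType, Copy-ServiceFabricApplicationPackage, Register-ServiceFabricApplicationType) are the real ones, but the type name, version, and paths are placeholders, and one assumption is worth calling out: Service Fabric refuses to re-register an existing type/version, so "updating" the application type in steps 3 and 6 has to be an unregister-then-register, which in turn requires that no application instances of that type/version are running at that moment.

```powershell
# Sketch, not a drop-in script: helper functions for steps 2/3 and 5/6.
# Type name, version, and paths are placeholders for illustration.

function Remove-DefaultServicesFromManifest {
    param([Parameter(Mandatory)][string]$ManifestPath)
    $fullPath = (Resolve-Path $ManifestPath).Path
    $xml = [xml](Get-Content -Raw $fullPath)
    $defaults = $xml.ApplicationManifest['DefaultServices']
    if ($null -ne $defaults) {
        # Drop the whole <DefaultServices> element from the manifest
        [void]$xml.ApplicationManifest.RemoveChild($defaults)
    }
    $xml.Save($fullPath)
}

function Update-ApplicationType {
    param(
        [Parameter(Mandatory)][string]$PackagePath,   # folder containing ApplicationManifest.xml
        [Parameter(Mandatory)][string]$TypeName,
        [Parameter(Mandatory)][string]$TypeVersion
    )
    # SF will not re-register the same type/version, so unregister first.
    # Requires that no application instances of this type/version are running.
    Unregister-ServiceFabricApplicationType -ApplicationTypeName $TypeName `
        -ApplicationTypeVersion $TypeVersion -Force
    $storePath = Split-Path $PackagePath -Leaf
    Copy-ServiceFabricApplicationPackage -ApplicationPackagePath $PackagePath `
        -ImageStoreConnectionString 'fabric:ImageStore' `
        -ApplicationPackagePathInImageStore $storePath
    Register-ServiceFabricApplicationType -ApplicationPathInImageStore $storePath
}
```

The provisioning script would then Connect-ServiceFabricCluster, back up the manifest with Copy-Item, call Remove-DefaultServicesFromManifest and Update-ApplicationType, run its provisioning logic, restore the backup with Copy-Item -Force, and call Update-ApplicationType once more.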

Consider having two sfproj projects in the solution. One with default services, one without.
Also look into using a start-service.ps1 script instead of default services. This way the two projects can use the same application manifest.
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-debugging-your-application#running-a-script-as-part-of-debugging
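A minimal sketch of such a script, assuming a local cluster and placeholder application/service names (Connect-ServiceFabricCluster and New-ServiceFabricService are the real cmdlets; everything else is illustrative):

```powershell
# Hypothetical Start-Service.ps1 that Visual Studio runs after deploying,
# replacing what DefaultServices used to declare. Names are placeholders.
function Start-DebugServices {
    param([string]$ApplicationName = 'fabric:/MyApp')

    Connect-ServiceFabricCluster -ConnectionEndpoint 'localhost:19000' | Out-Null

    # One New-ServiceFabricService call per service wanted during F5 debugging
    New-ServiceFabricService -ApplicationName $ApplicationName `
        -ServiceName "$ApplicationName/MyStatelessService" `
        -ServiceTypeName 'MyStatelessServiceType' `
        -Stateless -PartitionSchemeSingleton -InstanceCount 1
}
```

Because the provisioning script never runs this, the services only exist when a developer starts a debug session, which gives both scenarios the desired behavior from one manifest.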

Related

Making Registry changes during startup of stateless service in Service Fabric

I am using a library that searches the registry for a DLL. That DLL can be installed by running an MSI in the Service Fabric cluster, which sets the path.
But I wanted to avoid installing the MSI in the cluster, so I provided the required DLLs in the package itself. During startup of the service, I create the registry entry and point it at the location of the DLL in my package. Everything is working as expected.
Is this approach ideal? Are we allowed to make changes to registry? If not, how do we solve this problem? Any pointers are appreciated.
If the library has to use the registry, there is nothing you can do about it other than register the values. If you could change the DLL to retrieve this information from a configuration file, that would be the ideal solution.
You can do it in SF; the right way is to use the SetupEntryPoint option of the ServiceManifest for these management tasks, and from the application manifest you can set policies to specify which user should run them. It is described here in more detail.
The main issue you have in SF with this approach is that your application might move around the cluster, so you have to register the values on every node, and perhaps also remove them when the application no longer runs there, to avoid leaving garbage in the registry.
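For reference, the wiring looks roughly like this. The element names and the EntryPointType="Setup" policy are from the Service Fabric manifest schema; the program names and user name are placeholders:

```xml
<!-- ServiceManifest.xml: run a setup program before the main entry point -->
<CodePackage Name="Code" Version="1.0.0">
  <SetupEntryPoint>
    <ExeHost>
      <Program>RegisterDll.bat</Program> <!-- writes the registry entries -->
    </ExeHost>
  </SetupEntryPoint>
  <EntryPoint>
    <ExeHost>
      <Program>MyService.exe</Program>
    </ExeHost>
  </EntryPoint>
</CodePackage>

<!-- ApplicationManifest.xml: run only the setup entry point elevated -->
<Principals>
  <Users>
    <User Name="SetupAdminUser">
      <MemberOf>
        <SystemGroup Name="Administrators" />
      </MemberOf>
    </User>
  </Users>
</Principals>
...
<Policies>
  <RunAsPolicy CodePackageRef="Code" UserRef="SetupAdminUser" EntryPointType="Setup" />
</Policies>
```

The RunAsPolicy sits inside the corresponding ServiceManifestImport, so the main entry point still runs as the normal low-privilege account.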

Overriding custom service fabric application parameters

We have now come to the point in our Service Fabric application development where we need to add a custom parameter that can be overridden at run time (as described in https://azure.microsoft.com/en-us/documentation/articles/service-fabric-manage-multiple-environment-app-configuration/). We're still in the development stage: no Azure production environment yet, so this question mainly concerns running the Service Fabric cluster from Visual Studio or through a PowerShell script in a VM. I know that Deploy-FabricApplication.ps1 is run during debug and, per the usage instructions in that file, I can override custom parameters. However, I can't seem to figure out where to do that in Visual Studio, so that when different developers start a debug session they can set the custom parameter value to whatever makes sense in their dev environment. Any ideas? We have a task to research how to better handle secrets storage, but we're not quite there yet.
You can add multiple publish profiles (optionally without checking them in). One for every developer if needed.
For secrets: you can encrypt settings and/or use Azure Key Vault combined with a Service Principal, similar to what is shown here.
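As a sketch, a per-developer publish profile pairs a cluster endpoint with an application parameter file that carries the override. The file names and the parameter are placeholders; the two XML shapes are the ones the Service Fabric tooling generates:

```xml
<!-- PublishProfiles\Local.Dev.xml (one per developer, optionally not checked in) -->
<PublishProfile xmlns="http://schemas.microsoft.com/2015/05/fabrictools">
  <ClusterConnectionParameters ConnectionEndpoint="localhost:19000" />
  <ApplicationParameterFile Path="..\ApplicationParameters\Local.Dev.xml" />
</PublishProfile>

<!-- ApplicationParameters\Local.Dev.xml: the actual override -->
<Application xmlns="http://schemas.microsoft.com/2011/01/fabric" Name="fabric:/MyApp">
  <Parameters>
    <Parameter Name="MyCustomSetting" Value="whatever-fits-this-dev-machine" />
  </Parameters>
</Application>
```

Each developer then selects their own profile in the Visual Studio publish dialog (or passes it to Deploy-FabricApplication.ps1).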

Updating Deployments SCCM

I'm super new to SCCM and trying out some stuff.
At the moment I create a lot of applications to deploy to around 50 clients.
Before I deploy them to all clients, I test them on a test client.
The problem is that if I change something in the deployment type, like the installation command, I have to delete the deployment every time afterwards and deploy it again, or the change won't reach the client when I install the application next time.
There's probably a far easier method that I can't figure out at the moment.
So how do I update the changes I made after the application is already deployed?
Greetings,
Paxz.
The application deployment command line will only be executed if the application is not detected - i.e. the Application Detection criteria evaluates to false. With this premise, it is possible to change the Application Detection criteria so it evaluates to false... perhaps add an additional rule such as "file1.txt exists"? This should work, but it is ugly and I would not recommend it.
A better approach
I prefer to test my application deployments on VMs in the first instance: prepare the destination machine, snapshot it, then deploy.
If you need to tweak your deployment you can then make the required changes, redistribute the content (if required), then restore the VM's snapshot for a fresh deployment.
I managed to get an answer from Microsoft's TechNet forum.
For deployments to pick up the updated command line, I just have to trigger the next policy polling cycle.
This will only be effective for clients that haven't executed the deployment type yet.
Other than that, there seems to be no way other than deleting the deployment and re-deploying it for the changes to be picked up by the deployment.
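If you want to script that polling cycle rather than clicking through the console, one common approach is the WMI TriggerSchedule method (this assumes the ConfigMgr client is installed on the target; the GUID is the well-known schedule ID for the Machine Policy Retrieval & Evaluation cycle):

```powershell
# Trigger the Machine Policy Retrieval & Evaluation cycle on a client.
function Invoke-MachinePolicyCycle {
    param([string]$ComputerName = '.')
    Invoke-WmiMethod -ComputerName $ComputerName -Namespace 'root\ccm' `
        -Class 'SMS_Client' -Name 'TriggerSchedule' `
        -ArgumentList '{00000000-0000-0000-0000-000000000021}'
}
```

Running this against the test client after changing the deployment type saves waiting for the next scheduled polling interval.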

Automated deployment of web site

I'm planning to do an automated deployment of a website, but I'm kind of stuck at the moment. I have looked at MSDeploy; it has all the functions for deploying a website. I have created a Web application package (.zip file) and tested it on my local machine; it deploys the website, i.e. it:
Creates a Web application under the Default Web Site
Publishes files to the c:\inetpub\wwwroot directory
Sets ACLs on directories, etc.
But I want to achieve a few extra steps, for example:
1. Check whether the Web application exists in the Default Web Site; if not, create it
2. Check whether the application pool exists; if not, create an app pool (with a given name) with specific credentials and assign it to the Web application
3. Before deploying, take a backup copy of the existing Web application (if it exists)
4. Publish an offline page (app_offline.htm)
5. Publish the files to the application directory
6. Replace the appSettings section (in the web.config file) with actual values
7. Encrypt the web.config connection string
8. If there is any error while installing the web application, roll back to its previous version
The question is whether I can achieve all these functions via MSDeploy, or do I need to write a script? Please suggest what scripting language I should use.
Please let me know if you need more information.
Thanks in advance
I'm not an expert on this topic but have been doing a bit of research on automated deployment with MSDeploy lately, and think I can offer the following:
1. This is default behaviour if you use the iisApp provider.
2. I know you can do this with the appPoolConfig provider, but I'm unsure how you would run this and #1 together as part of the same package. Perhaps as part of a pre- or post-sync command?
3. This is standard in v3, as long as it's set up on the server. I've not used it myself, but read this anyway.
4. Fiddly. Not supported in MSDeploy, but you can vote for it if you want. Also check out this SO answer (and PackageWeb, by the same answer's author).
5. Not sure I follow. This is done as part of a successful deployment, surely?
6. Use web.config transforms and optionally the aforementioned PackageWeb for a neat way to do this. Also check out Web Publish Profiles.
7. Difficult. My understanding is that the encryption is based on the machine.config, so you'd either have to run a post-sync script that runs some sort of remote PowerShell script on the remote server to encrypt the web.config using aspnet_regiis, or you'd have to encrypt the config as part of your build process and then muck about with custom keys and the RSA provider (some info here).
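As an illustration of that post-sync idea (the aspnet_regiis -pe/-app flags are real; the framework path, site path, and the choice to target connectionStrings are assumptions):

```powershell
# Hypothetical post-sync step: encrypt web.config's connectionStrings section
# on the target server itself, since the default protection providers key
# off that machine's keys. Paths are placeholders.
function Protect-ConnectionStrings {
    param([string]$AppVirtualPath = '/MyApp')
    $regiis = Join-Path $env:windir 'Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe'
    # -pe <section> -app <vpath>: encrypt a config section for an IIS app
    & $regiis -pe 'connectionStrings' -app $AppVirtualPath
}
```

Run on the remote box (e.g. via PowerShell remoting) after the sync completes, so the encrypted section is tied to that server's keys.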
I hope that helps. As I said, I'm no expert, so happy to be corrected by those more knowledgeable. Maybe also worth mentioning that MSDeploy is a lot more powerful if you use it via the command-line rather than creating packages from VS, although there is a bit of a learning curve to go with it.

Will copying dlls to Approot in VM work on VM restart?

We have WCF services deployed and running in the Azure cloud. We have changes in some DLLs and want to update them on the VM, but don't want to go through the regular deployment/redeployment process.
We are thinking of manually copying the DLLs to the approot and siteroot folders. Will it work?
Will it pick up the new DLLs when the VM restarts at any time in the future?
To answer your questions
Will manually copying dlls to approot and sitesroot folders work: Yes (make sure you do this on each instance if you have multiple instances running)
Will these dlls survive a reboot: Yes (see Reboot Role Instance: ... Any data that is written to the local disk is persisted across reboots. ...)
But I would suggest to only do this if you're planning to test some things while developing your service.
Do NOT plan to use this for production deployments, because if something goes wrong with your instance, the Fabric Controller might decide to destroy that instance and deploy a new one (same could apply for Windows Updates). This new instance would go back to the initial state of your deployment (the content of the cspkg you deployed).
To make your development deployments even easier you could also activate WebDeploy on your Web Role to deploy from Visual Studio: Enabling Web Deploy for Windows Azure Web Roles with Visual Studio (again, do not use this for real deployments, this is only for when you're testing out some things).
Note: Web Deploy will not work with multiple instances.
No.
And this is not the way to go. If you want to be more dynamic, you have to take the approach of the Windows Azure Accelerator for Web Roles. Although it is no longer a supported or developed project, it will give you a good foundation for dynamically loading assemblies (in this case entire sites) from Blob storage.