Updating deployments in SCCM

I'm super new to SCCM and trying out some stuff.
At the moment I'm creating a lot of applications to deploy to around 50 clients.
Before I deploy them to all clients, I test them on a test client.
The problem is that if I change something in the deployment type, such as the installation command, I have to delete the deployment every time afterwards and deploy it again, or the change won't reach the client the next time I install the application.
There's probably a much easier method that I just can't figure out at the moment.
So how do I push the changes I made once the application is already deployed?
Greetings,
Paxz.

The application deployment command line will only be executed if the application is not detected - i.e. the Application Detection criteria evaluate to false. With this premise, it is possible to change the Application Detection criteria so they evaluate to false... perhaps add an additional rule such as "file1.txt exists"? This should work, but it is ugly and I would not recommend it.
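If you did want to go that route anyway, a minimal sketch with the ConfigMgr PowerShell module might look like this (the site drive, application name, deployment type name, and file path are all hypothetical):

```powershell
# Load the ConfigMgr module and switch to the site drive (site code hypothetical).
Import-Module "$($env:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"
Set-Location "P01:"

# Add a file-existence clause so detection fails until the new install command runs.
$clause = New-CMDetectionClauseFile -FileName "file1.txt" `
    -Path "C:\ProgramData\MyApp" -Existence
Set-CMScriptDeploymentType -ApplicationName "MyApp" `
    -DeploymentTypeName "MyApp - Script Installer" `
    -AddDetectionClause $clause
```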
A better approach
I prefer to test my application deployments on VMs in the first instance: prepare the destination machine, snapshot it, then deploy.
If you need to tweak your deployment you can then make the required changes, redistribute the content (if required), then restore the VM's snapshot for a fresh deployment.
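On Hyper-V, for example, that snapshot loop can be scripted (VM and checkpoint names are hypothetical):

```powershell
# Take a checkpoint of the prepared test client before deploying.
Checkpoint-VM -Name "TestClient01" -SnapshotName "PreDeploy"

# ...deploy the application and test it...

# Roll the VM back for the next fresh deployment attempt.
Restore-VMSnapshot -VMName "TestClient01" -Name "PreDeploy" -Confirm:$false
```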

I managed to get an answer from Microsoft's TechNet forum.
For deployments to pick up the updated command line, I just have to trigger the next policy polling cycle.
This will only be effective for clients that haven't executed the deployment type yet.
Other than that, there seems to be no way around deleting the deployment and re-deploying it for the changes to take effect.
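The polling cycle in question is the Machine Policy Retrieval & Evaluation cycle, which can be triggered on a client remotely; a minimal example (computer name hypothetical):

```powershell
# Trigger the Machine Policy Retrieval & Evaluation cycle on a client.
# {00000000-0000-0000-0000-000000000021} is the schedule ID for that cycle.
Invoke-WmiMethod -ComputerName "TestClient01" -Namespace "root\ccm" `
    -Class "SMS_Client" -Name "TriggerSchedule" `
    -ArgumentList "{00000000-0000-0000-0000-000000000021}"
```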

Related

How to roll back a CodeDeploy deployment if the app fails to provision

I have a service in ECS that is being deployed via CodeDeploy (blue/green). I have configured its rollback parameters to roll back "when a deployment fails". The issue I'm having is that it will attempt to deploy the app, the app will fail to deploy because something was misconfigured in the new task definition (in taskdef.json), and it will keep trying to re-deploy it instead of just rolling back.
This doesn't seem right, and the only other thing I can think to do is create an alarm that looks for a failing deployment, but that also seems like something the "roll back when a deployment fails" option should do for me. Not to mention that creating that alarm doesn't seem straightforward either, as there would be a few edge cases to it.
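For reference, the option being described maps to the deployment group's auto-rollback configuration, which can be inspected or set with the AWS CLI; a hedged sketch (application and deployment group names hypothetical):

```powershell
# Ensure DEPLOYMENT_FAILURE is among the auto-rollback events for the group.
aws deploy update-deployment-group `
    --application-name "MyEcsApp" `
    --current-deployment-group-name "MyEcsDeploymentGroup" `
    --auto-rollback-configuration "enabled=true,events=DEPLOYMENT_FAILURE"
```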

Temporarily disabling default services in Service Fabric using PowerShell

The concrete question
For those who just want the direct questions:
Is there a way to temporarily disable default services on a Service Fabric application type so that a new application can be installed (using PowerShell) without automatically installing any default services?
A proposed solution here is to remove the default services from the manifest and later restore them. I am able to write a PowerShell script to adjust the application manifest accordingly, but how do I update the application type using PowerShell - assuming I have already altered the manifest?
Any solution that solves the contextual problem without requiring manual config meddling is acceptable - my proposed solution is probably not the only possible solution. We do explicitly want to avoid manual meddling.
When allowing meddling, we are already able to just comment out the default services when we need to. We're specifically looking for a solution that requires no meddling as this reduces bugs and debugging issues.
The context
I'm running into an issue with using the application manifest's default services during local development.
I am aware of the general "don't use default services" advice, and it is being followed. During CI build, the default services are removed and will not be relied upon for any of our clusters in Azure. The only exception here is local developer machines, which use default services to keep the developer F5 experience nicer by enabling all services when starting a debug session.
We have written specialized scripts that provision a new tenant (SF application) with its own set of services (SF services). Not every tenant should get every service; we want to opt in to services, which is what the script already does (based on a mapping that we manage elsewhere; that mapping is not part of the current question, as the provisioning script exists and works).
However, when default services are enabled, every tenant already gets every service and the actual opt-in provisioning is useless. This is the issue we're trying to fix.
This same script works in our production cluster, since there are no default services configured there. The question is solely focused on the local development environment.
Essentially, we're dealing with two scenarios during local development:
When debugging, we want the default services to be on because it allows us to run all of our services by pressing F5 (and not requiring any further action)
When testing our provisioning script, we don't want default services because it gets in the way of our selective provisioning behavior
I'm aware that commenting the default services out of the manifest solves the issue, but this requires developers constantly toggling the content of the manifest and reinstalling the application type, which we'd like to avoid.
Ideally, we want to have the default services in the manifest (as is currently the case) but then have the provisioning script "disable" the default services for its own runtime (and restore the default services before exiting), as this gets us the desired behavior in both cases.
What is the solution that requires the least manual developer meddling to get the desired behavior in both scenarios?
I'm currently trying to implement it so that the provisioning script:
1) Copies the application manifest to a backup location
2) Removes the default services from the real manifest
3) Updates the application type using the new manifest (i.e. without default services)
4) Runs the provisioning logic
5) Restores the real manifest using the backup manifest from step 1
6) Updates the application type using the restored manifest (i.e. with default services)
It is specifically steps 3 and 6 that I do not know how to implement.
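For what it's worth, a hedged sketch of what steps 3 and 6 might look like with the Service Fabric PowerShell cmdlets (the endpoint, package paths, type name, and version are all hypothetical). Note that the same type version cannot be registered twice, so the existing registration has to be removed first:

```powershell
$pkgPath  = "C:\MyApp\pkg"                          # local application package
$manifest = Join-Path $pkgPath "ApplicationManifest.xml"
$backup   = "C:\MyApp\ApplicationManifest.backup.xml"

Connect-ServiceFabricCluster -ConnectionEndpoint "localhost:19000"

# Steps 1-2: back up the manifest, then strip the DefaultServices element.
Copy-Item $manifest $backup -Force
[xml]$xml = Get-Content $manifest
$defaults = $xml.ApplicationManifest["DefaultServices"]
if ($defaults) { $xml.ApplicationManifest.RemoveChild($defaults) | Out-Null }
$xml.Save($manifest)

# Step 3: re-register the application type from the edited package.
# Running applications of this type must be removed before unregistering.
Unregister-ServiceFabricApplicationType -ApplicationTypeName "MyAppType" `
    -ApplicationTypeVersion "1.0.0" -Force
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath $pkgPath `
    -ImageStoreConnectionString "fabric:ImageStore" `
    -ApplicationPackagePathInImageStore "MyAppPkg"
Register-ServiceFabricApplicationType -ApplicationPathInImageStore "MyAppPkg"

# Steps 4-6: run the provisioning logic, restore the manifest from $backup,
# and repeat the unregister/copy/register sequence with the restored package.
```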
Consider having two sfproj projects in the solution. One with default services, one without.
Also look into using a start-service.ps1 script instead of default services. This way the two projects can use the same application manifest.
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-debugging-your-application#running-a-script-as-part-of-debugging
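A minimal sketch of such a start-service.ps1, assuming a single stateless service (all names hypothetical):

```powershell
# Create on demand what a default service entry would otherwise declare.
Connect-ServiceFabricCluster -ConnectionEndpoint "localhost:19000"

New-ServiceFabricService -ApplicationName "fabric:/MyApp" `
    -ServiceName "fabric:/MyApp/MyStatelessService" `
    -ServiceTypeName "MyStatelessServiceType" `
    -Stateless -PartitionSchemeSingleton -InstanceCount 1
```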

How do we control application type versions retention?

We want to execute external integration tests and manually call a rollback if something is wrong.
We're using the 'Service Fabric Application Deployment' task in Team Services (VSTS) and it seems to only keep the latest in the cluster.
Under Cluster --> Applications --> [Application], then under Essentials, only one row item is listed, which shows the latest version.
Also, attempting Start-ServiceFabricApplicationUpgrade results in 'Application type and version not found.'
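For context, the versions still registered in the cluster can be listed like this (type name hypothetical); after the VSTS task runs, only the latest version appears:

```powershell
# List the application type versions currently registered in the cluster.
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.example.com:19000"
Get-ServiceFabricApplicationType -ApplicationTypeName "MyAppType"
```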
How do we alter the behaviour of previous version retention of application types? (And what is the default?)
I don't have the answer to your question, but I do offer this thought:
While I understand there may be a valid use case out there for trying to do this, I think a more accepted approach is to set up a test environment that matches production very closely. Deploy to test and test the heck out of it before approving the deployment to production.
One of the main selling points of Service Fabric is its ability to be so redundant, yet with your proposed workflow you are deploying code you're not entirely confident in to that environment. I think that really goes against what Service Fabric offers you.
Since you will be testing it so thoroughly on the Test environment, hopefully anything you end up finding in Production is small enough to be fixed through a patch a few hours later or however fast you can fix it.

Redeploy Service Fabric application without version change

I've read about partial upgrades, but they always require changing some parts of the application package. I'd like to know if there's a way to redeploy a package without a version change - in a way, similar to what VS does when deploying to the dev cluster.
On your local dev cluster, VS simply deletes the application before it starts the re-deployment. You could do the same in your production cluster, however this results in downtime since the application is not accessible during that time.
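Roughly, that delete-and-redeploy cycle looks like this with the PowerShell cmdlets (names, version, and paths hypothetical):

```powershell
Connect-ServiceFabricCluster -ConnectionEndpoint "localhost:19000"

# Delete the running application and its type registration...
Remove-ServiceFabricApplication -ApplicationName "fabric:/MyApp" -Force
Unregister-ServiceFabricApplicationType -ApplicationTypeName "MyAppType" `
    -ApplicationTypeVersion "1.0.0" -Force

# ...then redeploy the exact same version from the package.
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath "C:\MyApp\pkg" `
    -ImageStoreConnectionString "fabric:ImageStore" `
    -ApplicationPackagePathInImageStore "MyAppPkg"
Register-ServiceFabricApplicationType -ApplicationPathInImageStore "MyAppPkg"
New-ServiceFabricApplication -ApplicationName "fabric:/MyApp" `
    -ApplicationTypeName "MyAppType" -ApplicationTypeVersion "1.0.0"
```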
What's the reason why you wouldn't want to use the regular monitored upgrade? It has many advantages, like automatic rollbacks and so on.

TFS Intranet Automated Deploy Strategy

I have introduced branching/merging to my team and have talked before about how it would be great to automatically build and deploy code checked into the staging/master branches, but I'm a junior dev, not very ops-y.
The trouble I'm having is that we create intranet applications and store them on our own VMs, which we have access to, but we also have load balancing, which is causing me grief!
I can get a build to automate (well, I haven't got all the bugs figured out but I'm working my way through them) - and I can even get the build to automatically create a zip file ready for deployment.
Is it possible to configure several servers for deployment?
i.e.
1) I check in some code to stage
***Automatically***
2) Code builds
3) Build completes, Unit tests run and they complete
4) Code is packaged into a .zip
5) The .zip is deployed across the three load-balancing servers (all with the same file path).
***
It may be worth noting that we currently have Visual Studio running on our TFS server, so the code is built on the same server where it is all stored, but this is not the server we run live code from.
Any help or tutorials specific to my setup would be GREATLY appreciated, I really want to turn this departments releasing strategies around!
I am going to address only the deployment aspect. There are a lot of different ways that this can be handled, such as:
Customizing the build template
Writing custom .Net code and inserting it into the build template (which would also involve customizing the template)
Creating a Batch or Powershell script set to run after the build completes
Using a separate tool such as Octopus Deploy or Release Management to handle the deployments
The first thing you need to do is separate the build and deployment steps in your head. While they are tightly coupled in your model, they are two totally different tasks that need to be handled different ways.
The second thing is to stop thinking like a developer when it comes to the deployment portion. While there will likely be a programmatic solution, you'll need to identify the manual steps first.
You stated that you're not very ops-y, by which I assume you mean you're more of a developer than a systems analyst. If that is the case, then the third thing you'll need to do is bring in someone who is, such as your current release team.
There are 3 major things that need to be done then:
EVERYTHING needs to be standardized. If you can't standardize something, then standardize the way that it's non-standard. Example: you have a bulk list of servers you need to deploy to, and you need to figure out which ones to deploy to based on their names, which can be anything. In that case, put a rule in place that all QA servers must have QA in their name, User Acceptance servers UAT, Production servers PROD, etc. (a sketch of this name-based selection follows this list).
Figure out how you're going to communicate from the build to the deployment: which builds are going to be deployed, to which servers, and where the code is going to be picked up from
You need to document every manual step, and every exception to those steps, and every exception to those exceptions.
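As a tiny illustration of the naming-convention rule from the first point, target selection can then be done mechanically (file name and patterns hypothetical):

```powershell
# Pick deployment targets out of a bulk server list by naming convention.
$allServers  = Get-Content ".\all-servers.txt"
$qaServers   = $allServers | Where-Object { $_ -match "QA" }
$uatServers  = $allServers | Where-Object { $_ -match "UAT" }
$prodServers = $allServers | Where-Object { $_ -match "PROD" }
```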
Once you have all those pieces in place, you need to then go through each manual step and automate it, whether that's through Batch, Powershell, or a custom-built application. Once you have all the steps automated, you'll have both the build and deploy pieces complete.
After you're able to execute a single "manual" automated deployment to a single environment, you're then ready to figure out how you want to run it for multiple environments. This can range from an XML file that is iterated through to simply calling the same command multiple times with different parameters.
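For example, the XML-file flavor of that could be as simple as the following (file layout, server names, and paths hypothetical):

```powershell
# servers.xml (hypothetical layout):
# <environment name="QA">
#   <server name="QAWEB01" path="\\QAWEB01\d$\Sites\Intranet" />
#   <server name="QAWEB02" path="\\QAWEB02\d$\Sites\Intranet" />
# </environment>
[xml]$config = Get-Content ".\servers.xml"
$zip = "\\buildserver\drops\Intranet\latest\Intranet.zip"

foreach ($server in $config.environment.server) {
    # Unpack the build output to the same path on every load-balanced node.
    Expand-Archive -Path $zip -DestinationPath $server.path -Force
}
```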
A quick summary of how I've done this at my current job (where using a third-party deployment tool was not an option):
Created a tool using .Net WinForms to allow us to "manually" run automated builds (We use the interface to determine the input parameters, and the custom classes under the hood do all the heavy lifting. These custom classes are in a separate project that builds to their own dll. This also allows us to test tweaks and changes to the process in a testing environment before we roll it out to our production build server)
Set up an XML file for each environment (QA, UAT, Prod, etc.) that contains all of the servers that need to be deployed to in that environment, including destination paths, scheduled tasks, and Windows services
Customize the TFS build template and include the custom classes created for the custom tool, which will read the XML file and iterate through each server entry to perform the deployments
I'm more than happy to help with more specific examples and assistance; I look at things a bit differently than most people, and it helps when it comes to release management.