In UrbanCode Deploy, how do I cause an application process to fail if not all component versions were specified?

Currently, when I run an application process that installs various components, if I don't specify a version for one of them, that component's deploy process doesn't run and reports "No Version Selected". However, the step doesn't fail and the process continues. Is there a way to configure the process to fail if not all components have a version? Or is there a way for me to interrogate the process manifest in a step at the top, figure it out myself, and fail accordingly? I currently can find no way to do either of these things. The version of UCD I am using is 6.1.1.3.

If your component process's Process Type is set to "Operational (With Version)", the job will fail when you don't select a version for that component.

Related

Application Packages with VM configuration

I'm trying to use application packages in the way they're described in https://learn.microsoft.com/en-us/azure/batch/batch-application-packages, but I keep getting an error saying the application path was not found.
Any ideas what could be wrong? Alternatively, how do application packages work in the background? That might help me debug the error.
EDIT: I am trying to add an application package specific to my job manager task. I added the package as a zip file through Azure portal under the name JobManagerTask and version 1.0. Here is the code I'm using to reference it:
string taskID = "tasktest1";
// Obtain application package that has executables for job manager task
ApplicationPackageReference jobManagerApp = new ApplicationPackageReference { ApplicationId = "JobManagerTask", Version = "1.0" };
// Command Line
string commandLine = @"cmd /c %AZ_BATCH_APP_PACKAGE_JOBMANAGERTASK#1.0%\JobManagerTask.exe";
// Create a CloudTask
CloudTask oneTask = new CloudTask(taskID, commandLine);
oneTask.ApplicationPackageReferences = new List<ApplicationPackageReference> { jobManagerApp };
// Provide elevated admin access to the task
oneTask.UserIdentity = new UserIdentity(new AutoUserSpecification(elevationLevel: ElevationLevel.Admin, scope: AutoUserScope.Task));
// Could add task resource files if needed here
await batchClient.JobOperations.AddTaskAsync(jobID, oneTask);
Cool, so I created a small barebones app. :) The rest of the details are below, and please feel free to ping me if I can help out further.
I tried with almost identical code to yours, minus a couple of flags like userIdentity, and my sample worked fine. I think the error only happens when the application package is not referenced correctly, for example if the *.exe resides in a different directory structure inside the package. :)
I thought it would be a good idea to create a vanilla application for you (i.e. from scratch, based on one of the existing samples), which might give you a chance to quickly take a look and see if you missed anything.
Please feel free to ping me and I will help you reach your goal; I think it's something very small, like a wrong path (which the error message also suggests).
The application resides here:
https://github.com/Tatsinnit/quick_sample_batchapppkgworking
Details:
The details are also in the readme for the git repo, but as it's good practice on SO to include everything here, I will copy what I wrote in the readme below.
quick_sample_batchapppkgworking
Readme: barebones quick app:
Please note that this app is nothing but a quick sample based on the existing DotNetTutorial sample.
The following code is just sample code for the end-to-end application package feature.
• https://learn.microsoft.com/en-us/azure/batch/
• https://learn.microsoft.com/en-us/azure/batch/batch-technical-overview
App Packages:
• https://learn.microsoft.com/en-us/azure/batch/batch-application-packages
• https://azure.microsoft.com/en-us/blog/application-packages-and-task-dependencies-now-available-on-azure-batch/
The overview of how it works is fairly simple: when a user adds an application package, the package becomes available within the node's working directory (wd). An environment variable is created to handle multiple versions of the app (the timestamp is automatically part of the populated environment variable; you don't need to do anything to handle it):
set AZ_BATCH_APP_PACKAGE_TEST1#1.0=C:\user\tasks\applications\wd\test1\1.0\2017-07-14T21.45.45.765Z
Hence, if the correct package and version are set up and the node has the application package, it can be invoked from wherever it is needed, something like this:
string taskCommandLine = String.Format("cmd /c %AZ_BATCH_APP_PACKAGE_TEST1#1.0%\\ImageTest\\TaskApplication.exe");
The internal implementation is fairly neat as well.
Please note that the reason the path is
%AZ_BATCH_APP_PACKAGE_TEST1#1.0%\ImageTest\TaskApplication.exe
is that my application package zip contains TaskApplication.exe under the following structure:
test1.zip ==> ImageTest ==> TaskApplication.exe
To add further: an application package is a .zip file that contains the application binaries and supporting files that are required for your tasks to run the application. Each application package represents a specific version of the application.
You can specify application packages at the pool and task levels. You can specify one or more of these packages and (optionally) a version when you create a pool or task.
• Pool application packages are deployed to every node in the pool. Applications are deployed when a node joins a pool, and when it is rebooted or reimaged.
Pool application packages are appropriate when all nodes in a pool execute a job's tasks. You can specify one or more application packages when you create a pool, and you can add or update an existing pool's packages. If you update an existing pool's application packages, you must restart its nodes to install the new package (a minimal pool-level sketch follows this list).
• Task application packages are deployed only to a compute node scheduled to run a task, just before running the task's command line. If the specified application package and version is already on the node, it is not redeployed and the existing package is used.
Task application packages are useful in shared-pool environments, where different jobs are run on one pool, and the pool is not deleted when a job is completed. If your job has fewer tasks than nodes in the pool, task application packages can minimize data transfer since your application is deployed only to the nodes that run tasks.
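To illustrate the pool-level case, here is a minimal sketch using the Batch .NET client. It assumes an authenticated batchClient and that a package named "test1", version "1.0", has already been uploaded to the Batch account; the pool id, VM size, and OS family are placeholder values, so adjust them for your setup.
// Create a pool and attach a pool-level application package reference.
CloudPool pool = batchClient.PoolOperations.CreatePool(
    "demo-pool",                          // pool id (placeholder)
    "small",                              // VM size (placeholder)
    new CloudServiceConfiguration("5"),   // OS family 5 = Windows Server 2016
    1);                                   // target dedicated nodes
pool.ApplicationPackageReferences = new List<ApplicationPackageReference>
{
    new ApplicationPackageReference { ApplicationId = "test1", Version = "1.0" }
};
// The package is installed on each node as it joins the pool (and after a reboot/reimage).
await pool.CommitAsync();
Task-level references work the same way as in the question's code above: set ApplicationPackageReferences on the CloudTask instead of on the pool.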
The attached sample contains both a pool-level and a task-level demo.
Steps:
First, add a new application package to your Batch account; you can do that via the portal. (The git project has test1.zip along with this sample console app.)
Then open your DotNetTutorial solution:
Fill in the Batch account and storage account credentials correctly:
Hit start; barebone.cs is set as the startup project. Please note that you might need to change your *.proj file, because on my machine all NuGet packages were being sourced from c:\cxcache.
Please also note that there will be a prompt to delete the job and pool. If you want to check the result of this app, keep the job and pool, then go inside a node in that pool and check the stdout.txt file for the printed text. (Note: you probably want to delete the job and pool from the portal once you are done.)
From my successful run, I was able to see "Test Success" printed in stdout.txt inside the node, from the TaskApplication.exe that was part of this application package.
The code used in this sample barebones app is reused from the sample here:
https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/DotNetTutorial

Pages taking too long to load after maven build

I am using the following command to deploy code to my AEM instance: mvn clean install -Daem.host=localhost -Daem.port=1202 -Dmaven.test.skip=true
After deployment, pages take too long to load, at least 7 minutes.
I found no errors or exceptions in the error log.
There could be a couple of factors causing this slowness:
The amount of memory allocated to the AEM instance. The default setting is CQ_JVM_OPTS='-server -Xmx1024m -XX:MaxPermSize=256M -Djava.awt.headless=true', which is not sufficient for optimal performance. I have been using double this configuration, and sometimes even more, as shown below.
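For example, simply doubling those defaults would look something like this (a rough starting point, not a tuned recommendation):
CQ_JVM_OPTS='-server -Xmx2048m -XX:MaxPermSize=512M -Djava.awt.headless=true'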
When you deploy your package with code, the bundles are processed and services are registered. Depending on the number of services/components being registered, the time can go up. Sometimes there are hooks within the code that cause a few system-level bundles to cycle as well; if that happens, all the other bundles that depend on the system bundle also cycle and re-register their services.
Your code deployment could be triggering a workflow that either consumes a lot of resources or delays activation of your bundle. The first scenario can happen if your deployment contains something like images, which on deployment trigger the OOTB image workflow (there could be others based on your code). The second scenario could be that you have a bundle activator waiting for another bundle that gets deployed later (and/or stays installed and not active), or that you are building some sort of cache that waits for pages to be deployed and processed. There are countless such scenarios that can cause this issue.
What you could do is check the status of the bundles in /system/console/bundles before and after deployment; you can identify bundle-related issues there. Another thing you could try is selective deployment of the code to figure out which module is causing the issue, and then dive deeper into that module.
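If it helps, here is a rough C# sketch for snapshotting the bundle list so you can diff the files taken before and after a deployment; the host/port, admin credentials, and the bundles.json path of the Felix web console are assumptions, so adjust them to your instance.
// Snapshot the Felix console's bundle list to a timestamped file for later diffing.
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class BundleSnapshot
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Basic auth for the local author instance (credentials are placeholders).
            var token = Convert.ToBase64String(Encoding.ASCII.GetBytes("admin:admin"));
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);
            string json = await client.GetStringAsync("http://localhost:1202/system/console/bundles.json");
            File.WriteAllText($"bundles-{DateTime.Now:yyyyMMdd-HHmmss}.json", json);
            Console.WriteLine("Bundle snapshot saved.");
        }
    }
}
Run it once before the maven build and once after, then diff the two files to spot bundles whose state changed.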
Also look at the recent request logs to trace the page-load flow and see whether there are services, filters, etc. involved that are causing delays.
Let me know if any of these approaches helps you identify the root cause; if you need further help, I will be here to assist.

Required variables at queue time

When running our Release build (which ultimately labels and versions a changeset), I want the variables to be supplied at queue time, for example 1.0.23:
Is there any way to set these variables as required in order to execute the build?
This new "vNext" build platform is incredibly difficult to Google for.
The best I have come up with thus far is to add a task as the first step in the first phase of the build that checks the required variables are set. If any are not, it fails the build.
I use PowerShell for this:
if ([string]::IsNullOrWhiteSpace($env:Major)) { throw "Major not set" }
This is not ideal, as the build still has to wait to get scheduled on an agent, sync sources, &c. before the validation code runs and fails the build. But, it's still better than building everything just to have, say, packaging (step 14/15) fail because the version wasn't set.
I've opened a feature request on the VSTS UserVoice page asking for "required queue variables".

How do I disable automatic updates for Azure VM extensions?

We have a few VMs in Azure and we rely on the PowerShell DSC extension to deploy our code to the machines. I want to make sure that this extension is not updated automatically, so that our code that uses functionality from this extension doesn't break without us knowing about it first.
The problem is that we have some deployment scripts that read the extension's status codes/messages and apply custom logic based on them. When the extension was updated from 1.4.0.0 (the version the plugin was on when we first started using it) to version 1.5.0.0, some of the status messages changed and our script stopped working. This completely broke our deployment process and we had to do an emergency update of our scripts to be compatible with v1.5. Now that version 1.7.0.0 has been released, the exact same problem has happened again. Some new status codes were added and I had to update our scripts, or we would not have had a working deployment pipeline.
Is it possible to specify a manual update process for these extensions? Their installation and update seem to be completely automated. Ideally, I'd like to be able to update them on a case-by-case basis after testing our scripts against the newer versions first, so that our deployment process is not halted because of that. Bonus points for anyone who manages to find up-to-date documentation or some kind of release notes document for this extension in particular, as I could find none... I was just surprised to see that version 1.7 was installed today when I got an error from our script, and was lucky to know exactly where to look for the status changes.
The default behavior for the DSC extension handler is to update to the latest version. If you want to pin to a specific version, you can do so with the following cmdlet (currently there is no way to do this from the UI):
Set-AzureVMDscExtension -Version <version>
Please note that we are also trying to ensure that updates do not cause issues. We are not there yet, but we would certainly like to get there so that everyone can be updated automatically.

In TeamCity, can I run a command-line application for the duration of a build?

I have a command-line application that I want to run in a build configuration for the duration of the build, then shut it down at the end when all other build steps have completed.
The application is best thought of as a stub server, which will have a client run against it, then report its results. After the tests, I shut down the server. That's the theory anyway.
What I'm finding is that running my stub server as a command line build step shuts down the stub server immediately before going to the next build step. Since the next build step depends on the server running, the whole thing fails.
I've also tried using the custom script option to run both tools one after another in the same step, but that results in the same thing: the server, launched on the first line, is shut down before invoking the second line of the script.
Is it possible to do what I'm asking in TeamCity? If so, how do I do it? Please list any possibilities, right up to creating a plugin (although the easier, the better).
Yes you can: you can do that in a NAnt script and have TeamCity run the script; look for spawn and the NAntContrib waitforexit.
However I think you would be much better off creating a mock class that the client uses only when running the tests. Instead of round tripping to the server during the build as that is can be a bit problematic, sometimes ports are closed, sometimes the server hangs from the last run, etc. That way you can run the tests, make sure the code is doing the right thing when the mock returns whatever it needs to return etc.