msdeploy - stop deploy in postSync if preSync fails

I am using msdeploy -preSync to back up the current deployment of a website in IIS before the -postSync deploys the new code. However, I recently had a situation where the -preSync failed (it raised a warning due to a missing DLL), yet the -postSync continued and overwrote the code.
Both the preSync and postSync run batch files.
Obviously this is bad: since the backup failed, there is no backout route if the deployment has bugs or fails.
Is there any way to stop the postSync if the preSync raises warnings?
Perhaps the issue here is that the preSync failure was raised as a warning, not an error.

Supply the successReturnCodes parameter, set to 0 (the conventional success return code), with the preSync option, for example:
-preSync:runCommand="your script",successReturnCodes=0
More info at: http://technet.microsoft.com/en-us/library/ee619740(v=ws.10).aspx
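Note that this only helps if the batch file itself exits with a non-zero code when the backup fails. A minimal sketch of such a preSync script (the site name and package path are hypothetical):
rem backup.cmd - back up the current site before the deployment runs
msdeploy -verb:sync -source:appHostConfig="MySite" -dest:package="C:\backups\MySite.zip"
rem Propagate msdeploy's exit code: with successReturnCodes=0, any non-zero
rem exit here stops the deployment before -postSync overwrites the code.
exit /b %ERRORLEVEL%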

Related

GitHub CI UITest gives flaky tests: 'Unable to monitor event loop'

I am running my UI tests on GitHub CI and the tests are flaky, and I don't understand how to fix it. Animations are disabled and I am running the tests on an iPhone 13 Pro simulator. A lot of tests run green, but some do not. Locally, everything works.
These are some logs:
2022-06-21T13:42:23.2627250Z t = 63.34s Tap Cell
2022-06-21T13:42:23.2707530Z t = 63.34s Wait for com.project.project to idle
2022-06-21T13:42:23.2733620Z t = 63.41s Unable to monitor event loop
2022-06-21T13:42:23.2734250Z t = 63.41s Unable to monitor animations
2022-06-21T13:42:23.2734800Z t = 63.42s Find the Cell
2022-06-21T13:42:24.1158670Z t = 64.45s Find the Cell (retry 1)
2022-06-21T13:42:24.1287900Z t = 64.45s Collecting extra data to assist test failure triage
2022-06-21T13:42:24.2022460Z /Users/runner/work/project/UITestCase.swift:665: error: -[project.UserTagTest testTapInTextView] : Failed to get matching snapshot: Lost connection to the application (pid 12676). (Underlying Error: Couldn’t communicate with a helper application. Try your operation again. If that fails, quit and relaunch the application and try again. The connection to service created from an endpoint was invalidated: failed to check-in, peer may have been unloaded: mach_error=10000003.)
It cannot find the cell because of these logs:
Unable to monitor event loop
Unable to monitor animations
I know this because I sometimes get errors different from the one above (which says the connection to the application was lost), but they always appear right below the Unable to monitor... log lines.
Is there anything I can try? I don't have a reproduction project. This is the command that is executed:
xcodebuild test -project project.xcodeproj -scheme project-iosUITests -destination 'platform=iOS Simulator,name=iPhone 13 Pro,OS=15.5'
The CI runs 35 tests and 5 fail randomly with the Unable to ... errors. Are there any suggestions to fix this problem?
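One thing that may be worth trying, assuming Xcode 13 or later (which the iOS 15.5 destination implies), is xcodebuild's built-in test retrying, so that a test hitting the transient Unable to monitor state gets another attempt:
xcodebuild test -project project.xcodeproj -scheme project-iosUITests \
  -destination 'platform=iOS Simulator,name=iPhone 13 Pro,OS=15.5' \
  -retry-tests-on-failure -test-iterations 3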

Azure DevOps - Release Pipeline shows failure status when re-running failed tests, even if the re-run succeeded

I use SpecFlow with SpecFlow+ Runner, and I am using the Default.srprofile to re-run failed tests 3 times. In Visual Studio it shows 2 passed, 1 failed, but the status of the test is a failure. The same goes for Azure DevOps: if a re-run test passes, the outcome of the run is still a failure. The failures are sometimes caused by locator timeouts or server timeouts; it doesn't happen often, but we saw it a few times, which is why we decided to implement a re-run.
Could anyone help with this?
2022-02-09T12:40:13.8607507Z Test Run Failed.
2022-02-09T12:40:13.8608607Z Total tests: 37
2022-02-09T12:40:13.8609271Z Passed: 36
2022-02-09T12:40:13.8609858Z Failed: 1
2022-02-09T12:40:13.8617476Z Total time: 7.4559 Minutes
2022-02-09T12:40:13.9226929Z ##[warning]Vstest failed with error. Check logs for failures. There might be failed tests.
2022-02-09T12:40:14.0075402Z ##[error]Error: The process 'D:\Microsoft_Visual_Studio\2019\Common7\IDE\Extensions\TestPlatform\vstest.console.exe' failed with exit code 1
2022-02-09T12:40:14.8164576Z ##[error]VsTest task failed.
But then the report states that it was retried 3 times, of which 2 retries were successful, yet the Azure DevOps run still has a failure status.
The behavior of the report is correct and sadly cannot be configured differently.
What you can do is adjust how the results are reported back to Azure DevOps.
You can configure this via the VSTest element in the .srprofile file.
This example means that at least one retry has to pass:
<VSTest testRetryResults="Unified" passRateAbsolute="1"/>
Docs: https://docs.specflow.org/projects/specflow-runner/en/latest/Profile/VSTest.html
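For context, this reporting setting sits alongside the retry configuration in the same profile. A sketch of the relevant part of the .srprofile (the Execution element's retryFor/retryCount attributes and the schema namespace are assumptions based on the same SpecFlow+ Runner documentation, not taken from the question):
<TestProfile xmlns="http://www.specrun.com/schemas/2011/09/TestProfile">
  <!-- Retry failing tests up to 3 times -->
  <Execution retryFor="Failing" retryCount="3" />
  <!-- Report the run as passed if at least one execution of a test passed -->
  <VSTest testRetryResults="Unified" passRateAbsolute="1" />
</TestProfile>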
Be aware that we have stopped the development of the SpecFlow+ Runner. More details here: https://specflow.org/using-specflow/the-retirement-of-specflow-runner/

SAM deployment failed with error: Waiter StackCreateComplete failed: Waiter encountered a terminal failure state

When I try to deploy the package with SAM, the very first status in the CloudFormation console is ROLLBACK_IN_PROGRESS, and after that it changes to ROLLBACK_COMPLETE.
I have tried deleting the stack and trying again, but the same issue occurs every time.
The error in the terminal looks like this:
Sourcing local options from ./SAMToolkit.devenv
SAM_PARAM_PKG environment variable not set
SAMToolkit will operate in legacy mode.
Please set SAM_PARAM_PKG in your .devenv file to run modern packaging.
Run 'sam help package' for more information
Runtime: java
Attempting to assume role from AWS Identity Broker using account 634668058279
Assumed role from AWS Identity Broker successfully.
Deploying stack sam-dev* from template: /home/***/1.0/runtime/sam/template.yml
sam-additional-artifacts-url.txt was not found, which is fine if there is no additional artifacts uploaded
Replacing BATS::SAM placeholders in template...
Uploading template build/private/tmp/sam-toolkit.yml to s3://***/sam-toolkit.yml
make_bucket failed: s3://sam-dev* An error occurred (BucketAlreadyOwnedByYou) when calling the CreateBucket operation: Your previous request to create the named bucket succeeded and you already own it.
upload: build/private/tmp/sam-toolkit.yml to s3://sam-dev*/sam-toolkit.yml
An error occurred (ValidationError) when calling the DescribeStacks operation: Stack with id sam-dev* does not exist
sam-dev* will be created.
Creating ChangeSet ChangeSet-2020-01-20T12-25-56Z
Deploying stack sam-dev*. Follow in console: https://aws-identity-broker.amazon.com/federation/634668058279/CloudFormation
ChangeSet ChangeSet-2020-01-20T12-25-56Z in sam-dev* succeeded
"StackStatus": "REVIEW_IN_PROGRESS",
sam-dev* reached REVIEW_IN_PROGRESS
Deploying stack sam-dev*. Follow in console: https://console.aws.amazon.com/cloudformation/home?region=us-west-2
Waiting for stack-create-complete
Waiter StackCreateComplete failed: Waiter encountered a terminal failure state
Command failed.
Please see the logs above.
I had set SQS as the event source for the Lambda, but hadn't provided permissions like these
- Effect: Allow
  Action:
    - sqs:ReceiveMessage
    - sqs:DeleteMessage
    - sqs:GetQueueAttributes
  Resource: "*"
in the Lambda's policies.
I found this error in the "Events" tab of the CloudFormation console.
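For illustration, the fix can also be expressed in a SAM template with one of SAM's built-in policy templates, which grants those same SQS actions. A sketch (MyFunction, MyQueue, and the handler are hypothetical names):
MyQueue:
  Type: AWS::SQS::Queue

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: com.example.Handler   # hypothetical
    Runtime: java11
    Policies:
      # Grants sqs:ReceiveMessage, sqs:DeleteMessage, sqs:GetQueueAttributes (among others)
      - SQSPollerPolicy:
          QueueName: !GetAtt MyQueue.QueueName
    Events:
      FromQueue:
        Type: SQS
        Properties:
          Queue: !GetAtt MyQueue.Arn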

How to stop a systemd unit from reporting FAILED

I have a systemd service unit with some runtime dependencies that get resolved during boot. It often reports the "FAILED" state during boot. The unit has "Restart=always", so after boot it ultimately starts successfully; but during boot it reports FAILED around 3-4 times, which I want to avoid.
Is there a way to ignore the "FAILED" state of the service unit being reported?
(As I know, it will succeed once the dependency is resolved, or will keep retrying.)
I found that a failing exit status of the service's command can be ignored by prepending a hyphen to the executable path when configuring ExecStart.
From the manual:
https://www.freedesktop.org/software/systemd/man/systemd.service.html#BusName=
"-" If the executable path is prefixed with "-", an exit code of the command normally considered a failure (i.e. non-zero exit status or abnormal exit due to signal) is recorded, but has no further effect and is considered equivalent to success.
ExecStart=-/sbin/getty
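Combined with the Restart=always already in place, a unit using this could look like the following sketch (/usr/bin/my-service is a hypothetical path):
[Unit]
Description=Service with runtime dependencies resolved during boot

[Service]
# The leading "-" makes a non-zero exit count as success, so the unit is
# not reported as FAILED while its dependencies are still being resolved.
ExecStart=-/usr/bin/my-service
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target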

VSTS Release Agent 'Failed' to deploy to IIS Site

I am in the process of enabling CI/CD in our NP environments. I have recently come across an issue while deploying to our staging environment, which consists of two servers.
The first server deploys with no problem, but the second server fails (or so it says). Looking at the IIS site directory, the files do actually get released, but the log indicates otherwise:
2019-04-07T21:07:22.1153713Z Total changes: 231 (229 added, 0 deleted, 2 updated, 0 parameters changed, 53453549 bytes copied)
2019-04-07T21:07:22.1309716Z ##[debug]rc:0
2019-04-07T21:07:22.1309716Z ##[debug]rc:0
2019-04-07T21:07:22.1309716Z ##[debug]success:false
2019-04-07T21:07:22.1309716Z ##[debug]success:false
2019-04-07T21:07:22.1465719Z Error: C:\azagent\A1\_work\_tasks\IISWebAppDeploymentOnMachineGroup_1b467810-6725-4b6d-accd-886174c09bba\0.0.51\node_modules\webdeployment-common\MSDeploy3.6\msdeploy.exe failed with return code: 0
2019-04-07T21:07:22.1465719Z at ChildProcess.<anonymous> (C:\azagent\A1\_work\_tasks\IISWebAppDeploymentOnMachineGroup_1b467810-6725-4b6d-accd-886174c09bba\0.0.51\node_modules\vsts-task-lib\toolrunner.js:569:30)
2019-04-07T21:07:22.1465719Z at emitTwo (events.js:106:13)
2019-04-07T21:07:22.1465719Z at ChildProcess.emit (events.js:191:7)
2019-04-07T21:07:22.1465719Z at maybeClose (internal/child_process.js:886:16)
2019-04-07T21:07:22.1465719Z at Process.ChildProcess._handle.onexit (internal/child_process.js:226:5)
2019-04-07T21:07:22.1465719Z Retrying to deploy the package.
I've tried a couple of things:
Making sure the user the release service is running under has the correct permissions to manage that directory.
I have reinstalled the Release Agent.
Not sure what else to try or check. Any suggestions?
So I have discovered what the issue was. Funnily enough, it appears the issue stemmed from the environment variable 'COR_PROFILING_ENABLED' being set to '0x01'. It looks like Dynatrace was configured incorrectly, so a Dynatrace-specific error was being thrown during the release process.
Since disabling it, the release completes successfully.
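To check for the same problem on an affected server, something like this PowerShell can inspect and clear the variable (an assumption that it was set at machine scope; adjust the scope if it was set per user or per service):
# Inspect the machine-scoped value (returns nothing if not set)
[Environment]::GetEnvironmentVariable('COR_PROFILING_ENABLED', 'Machine')

# Clear it from an elevated session, then restart the release agent service
[Environment]::SetEnvironmentVariable('COR_PROFILING_ENABLED', $null, 'Machine')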