We're using Capistrano/Webistrano (with Lee Hambley's railsless-deploy gem) to push our PHP application to production servers. I have some custom tasks that get run during various parts of the deploy process.
As an example, I have tasks that attempt to stop and restart a Jetty Solr instance. However, sometimes this bit fails during the deploy, so Capistrano rolls back the entire deploy and reverts to the previous revision. This is a pain. :-)
I'd like to tell Capistrano to ignore the return result of these tasks, so if they fail, Capistrano continues on its way and finishes the deploy anyway. It's much easier for me to ssh to the server after the fact and properly kill and restart the Solr instance than to have to do a complete deploy again.
Here are the relevant parts of the deploy script:
before "deploy:symlink", :solr_kill
after "deploy:symlink", :solr_start, :solr_index
task :solr_kill do
run "cd #{current_path}/Base ; #{sudo} phing solr-kill"
end
task :solr_start do
run "cd #{current_path}/Base ; #{sudo} phing solr-start"
run "sleep 10"
end
task :solr_index do
run "#{sudo} #{current_path}/Base/Bin/app.php cron run solr_index_cron"
end
From the Capistrano task docs, there is an option you can add to a task so that it continues if there is an error:
task :solr_start, :on_error => :continue do
  # your code here
end
Just add that to each task whose errors you want to ignore. That said, the best option is to figure out what is actually causing the failure and make the restart command robust enough to really restart Solr. I only say this because when you hand the script off to someone else, they might not know how to tell whether it restarted correctly.
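Applied to the tasks in the question, that would look something like this (same commands as above, just with the option added):

task :solr_kill, :on_error => :continue do
  run "cd #{current_path}/Base ; #{sudo} phing solr-kill"
end

task :solr_start, :on_error => :continue do
  run "cd #{current_path}/Base ; #{sudo} phing solr-start"
  run "sleep 10"
end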
Situation: Visual Studio Code (Browser library) runs a couple of .robot files (manually started).
Then it pauses because of an error...
At that point the process breaks and there is no final report.html.
If you stop the run, it doesn't generate a report.html, which is not what you want. You actually want the results up to that point (or, better described: you still want the output.xml, log.html and report.html links).
You should be able to generate log.html and report.html using the rebot command; however, you need output.xml for this. output.xml is created while the tests run, so if you break off the run you will probably not have all the resources you need.
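For example, assuming an output.xml was written before the run broke off, the reports can be regenerated with:

rebot --log log.html --report report.html output.xml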
I would suggest assigning a test timeout to the test that causes the pause. When the timeout is reached, the test is stopped automatically and you should get all the reports. You can also set it globally for all tests, e.g.:
*** Settings ***
Test Timeout    2 minutes
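Or on an individual test, using the [Timeout] setting (the test and keyword names below are just placeholders):

*** Test Cases ***
Problematic Test
    [Timeout]    2 minutes
    Some Keyword That Pauses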
I'm new to the whole Azure DevOps world and just got transferred to a new team that works on exactly that.
One of my assignments is to fix an issue with a pipeline where one of the steps runs a shell script that installs an application. Currently, the step seems to run without any issue shown in the log, but when we connect to the container's pod, the app is not there.
If we run the script directly inside the pod, the application is installed correctly. I'm not sure how to track this down. One of the things I've tried was checking the event log for errors during the installation: Get-EventLog -LogName "Windows PowerShell" -Newest 20. So far, no luck. Again, I'm kind of new at this and not sure what other tools are out there to figure out why the script does not install the app during the pipeline run.
To troubleshoot your pipeline run, you can configure your pipeline logs to be more verbose.
1. To configure verbose logs for a single run, start a new build by choosing Run pipeline, then select Enable system diagnostics and click Run.
2. To configure verbose logs for all runs, add a variable named system.debug and set its value to true.
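If the pipeline is defined in YAML, the same variable can be set in the pipeline file itself (a minimal sketch):

# azure-pipelines.yml
variables:
  system.debug: 'true'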
You can also try logging into your agent server and checking the event log. See this blog for how to view the event log on Windows.
The issue was related to how the task was awaited. Adding these piped parameters helped us solve the issue:
RUN powershell C:\dev\myprocess.ps1 -PassThru | Wait-Process;
I'm trying to deploy images via MDT that have been upgraded via the MDT "Standard Client Upgrade" task sequence. My images started as Win10 v1607 images and are updated to v1703 and then captured.
When I go to deploy the captured images, I get a popup on first login that c:\LTIBootstrap.vbs can't be found. Digging in, I discovered that after the OS is installed and the PC restarts, the MDT task sequence continues running as the SYSTEM account. This is bizarre, as it typically runs as the built-in Administrator account.
For some reason, even though the unattend.xml file contains the usual AutoAdminLogon entries, a registry key at
HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\SystemAutoLogon
is being created and set to 1 during the deployment. (I discovered this by comparing the registries at the end of deployment.) This key is not present in the captured image. This key does not get created if I deploy an image that is manually updated to v1703 (via Windows Update instead of MDT).
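For reference, a quick way to check for the value on a deployed machine is the built-in reg tool:

reg query "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" /v SystemAutoLogon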
Any ideas on why the unattend.xml could be ignored or what would cause SystemAutoLogon to get created and set?
I figured out what was going on.
The MDT upgrade task sequence invokes the upgrade with the command-line /postoobe option pointing to setupcomplete.cmd. This causes the file to be copied to c:\windows\setup\scripts\setupcomplete.cmd. When the Windows install is complete, if a file is present at that location, it is run under the SYSTEM account.
The problem is that this file remains even after the upgrade task sequence is totally complete. So if you then capture the image and deploy it to a real machine, it will see setupcomplete.cmd and run it after the deploy, instead of using the usual default Administrator account.
I imagine the presence of this file at c:\windows... is what causes the registry changes mentioned above. setupcomplete.cmd is only built to bootstrap an upgrade back into the MDT task sequence, and needs to be removed from c:\windows... when the task sequence is done running.
Knowing that the post-upgrade portion of the upgrade task sequence runs as SYSTEM instead of Administrator, via a very different mechanism than a standard deployment, is important, as there are limits to what you can do. By default the sequence lets you install applications, but they need to be apps that are OK with being installed by SYSTEM.
For now I've updated my local SetupComplete.cmd in my scripts directory to delete itself when it is done, by changing the last for loop to this (there was also a typo in the for loop before that prevented the exit echo):
for %%d in (c d e f g h i j k l m n o p q r s t u v w x y z) do if exist %%d:\Windows\Setup\Scripts\setupcomplete.cmd (
    del /q /f %%d:\Windows\Setup\Scripts\setupcomplete.cmd
    echo %DATE%-%TIME% Exiting SetupComplete.cmd >> %WINDIR%\Temp\setupcomplete.log
)
After thinking about this more and hitting issues due to running as the SYSTEM account, I started playing with avoiding the SYSTEM account entirely. (One big problem is that if you want to shut down at the end of the task sequence right after a reboot occurs, SYSTEM starts running too fast, and the call to shutdown in MDT fails.)
The idea is to instead use SetupComplete.cmd running as SYSTEM to simply bootstrap back into running the task sequence as the default Administrator.
There are a few wrinkles to implementing this. Namely, the synchronous commands that run from unattend.xml during a normal install do not run, so things like enabling the Administrator account, disabling UAC for Administrator, disabling the user account page, and disabling async run-once all have to be invoked manually. Beyond that, it is just a matter of setting the right registry entries via calls to PopulateAutoAdminLogon and SetStartMDT in a task-sequence step after the OS upgrade is complete, and then performing a restart. This seems to work pretty well. The ideal way to do this would be to have the same script that calls PopulateAutoAdminLogon/SetStartMDT also parse unattend.xml and run those commands.
For some reason shell hiding does not work even though everything is set for it. My best guess is that the task sequence runner is doing this because IsOSUpgrade is set, but I am not sure.
With this approach, SetupComplete.cmd is just responsible for a single bootstrap back into the task sequence, and the task sequence can delete it at the same time that it calls a script to do PopulateAutoAdminLogon/SetStartMDT.
There is enough work involved in fully polishing this approach that I'll just work around the one autologon issue for now, but it really does feel like a better way for MDT to work when doing upgrades. Hopefully they'll flesh it out in the future.
I have a Jenkins job where I want to
- build my application
- start JBoss via batch
- sleep some time to wait for JBoss
- run some JUnit tests
- stop JBoss
The problem I have is that the job does not proceed after the JBoss start. It shows the complete JBoss log and just keeps refreshing it.
So the sleep and JUnit tests are never executed.
Batch call I'm using:
cmd.exe /C F:\jboss-5.1.0.GA-jdk6\bin\run.bat -c Servername -Djboss.service.binding.set=ports-05 -Djboss.bind.address=0.0.0.0
I can't use the Jenkins JBoss management plugin because I have to set JAVA_OPTS for this specific job.
Any idea how to start JBoss without showing the log in the Jenkins console?
EDIT:
Thanks for your answer, but call/start didn't work for me either.
My working solution (not nice, but it works; I just thought I should share it):
I created a second Jenkins job which starts JBoss with the batch call from above.
Then I changed this job to be triggered remotely ("Trigger builds remotely").
Finally, I changed my first job to trigger the second one in an "Execute batch command" build step:
wget --spider build_trigger_url
So my job is doing this now:
- build my application
- trigger the JBoss Jenkins job via wget (this second job keeps running on Jenkins until it is manually shut down)
- sleep some time until JBoss is started
- execute the JUnit tests
- stop JBoss via the JBoss management plugin (this kills the second job)
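For reference, the build_trigger_url in the wget call follows Jenkins' remote-trigger API; with placeholder host, job name and token it looks something like:

wget --spider "http://JENKINS_HOST:8080/job/SECOND_JOB/build?token=TOKEN"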
You should change it to cmd.exe /C call F:\jboss-5.1.0.GA-jdk6\bin\run.bat <whatever params>
When you invoke a .bat directly, control passes to it and does not return until the .bat terminates. The call command at least returns control to your script when the .bat finishes; to spawn the server off as a genuinely separate process, use the start command instead.
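If you do want the server spawned as a separate process, here is a sketch using start, with the server log redirected to a file so it does not flood the Jenkins console (JBoss path and parameters taken from the question; the log path is a placeholder):

start "jboss" cmd.exe /C "F:\jboss-5.1.0.GA-jdk6\bin\run.bat -c Servername -Djboss.service.binding.set=ports-05 -Djboss.bind.address=0.0.0.0 > C:\temp\jboss.log 2>&1"

Note that Jenkins normally kills processes spawned by a build once the step finishes; setting BUILD_ID=dontKillMe in the environment before the start call is the usual workaround.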
I have two remote machines: A and B. Both have Jenkins installed.
A: This will build from trunk.
B: To trigger automation.
How can I configure a Jenkins job on Machine B to run when the build is successful on Machine A?
I had the same requirement, because one of the servers I was using belonged to a different company; while altering their Jenkins set-up would have been possible, it was clearly going to take a long time to get buy-in, even though I was allowed to monitor the instance and its outputs. If you don't have these restrictions, you should definitely follow the whole master-slave configuration to address this. That said, here is the solution I came up with; I explain the background only to show the requirement was genuine, and I hope to go down the master-slave route myself when possible.
Install the ScriptTrigger plug-in for Jenkins and you can then watch the remote Jenkins instance with a script similar to the following:
# Ask the remote Jenkins for the number of its last successful build
LAST_SUCCESSFUL_UPSTREAM_BUILD=`curl http://my.remote.jenkins.instance.com:8080/job/remoteJobName/lastSuccessfulBuild/buildNumber`
# Read the build number we saw on the previous poll (0 on the first run)
LAST_KNOWN_UPSTREAM_BUILD=`cat $WORKSPACE/../lastKnownUpstreamBuild || echo 0`
# Remember the current build number for the next poll
echo $LAST_SUCCESSFUL_UPSTREAM_BUILD > $WORKSPACE/../lastKnownUpstreamBuild
# Exit 1 (trigger a build) only when a newer successful build has appeared
exit $(( $LAST_SUCCESSFUL_UPSTREAM_BUILD > $LAST_KNOWN_UPSTREAM_BUILD ))
Get the ScriptTrigger to schedule a build whenever the exit code is '1'. Set up a suitable polling interval and there you have it.
This will obviously only schedule a build if the upstream job succeeds. Use "lastBuild" or "lastFailedBuild" instead of "lastSuccessfulBuild" in the URL above as your requirements dictate.
NOTE: Implemented using a BASH shell. May work in other UNIX shells, won't work in Windows.