How can I have Jenkins run a script on another server and watch over the result? - deployment

I want Jenkins to kick off a deployment process on another server, but I want a process on that server to actually perform the deployment (for security reasons, rather than having a user on the Jenkins server ssh to the remote server and perform the deployment) and report the status of the deployment back to Jenkins.
It looks like the way to do this is to leverage Jenkins' ability to have slave processes running on other servers, executing scripts and reporting console output, status, etc. back to the master server.
So the sequence would be:
jenkins (master)
1. build ->
2. create artifacts ->
3. kick off deployment (using ssh kick off of slave job)
jenkins (slave)
4. perform deployment ->
5. report status back to master jenkins server
Is this the appropriate way to achieve what I want to do?

This is basic "Client - Server" behavior in Jenkins -
see Managing Jenkins > Managing Nodes for that.
A) Install a Jenkins-agent on the remote machine.
B) Make this agent run "Tied Jobs" only.
C) Set step '4.' in your example to run on the remote Agent (or 'Node').
D) Step '3.' in your example should probably be a 'post-build-trigger' of step '2.' (and not a step of its own).
The result of step '4.' in your example will be available to the server-machine.
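The kick-off and status report in steps '3.' and '5.' can also be scripted against Jenkins' REST API. A minimal sketch, assuming a placeholder job name, credentials and server URL (the /build and /lastBuild/api/json endpoints are standard Jenkins REST endpoints; the agent route above is still the cleaner solution):

```shell
#!/bin/sh
# Minimal sketch - job name, credentials and server URL are placeholders.

# Pull the "result" field (SUCCESS, FAILURE, ...) out of a build's JSON;
# an in-progress build has "result":null, so this prints nothing for it.
build_result() {
  grep -o '"result":"[A-Z_]*"' | head -n 1 | cut -d '"' -f 4
}

# Only contact a real server when JENKINS_URL is set in the environment.
if [ -n "${JENKINS_URL:-}" ]; then
  JOB="deploy-on-remote"   # hypothetical job tied to the remote agent
  curl -s -X POST -u admin:apitoken "$JENKINS_URL/job/$JOB/build"
  while :; do
    RESULT=$(curl -s -u admin:apitoken \
      "$JENKINS_URL/job/$JOB/lastBuild/api/json" | build_result)
    [ -n "$RESULT" ] && break   # empty while the build is still running
    sleep 10
  done
  echo "deployment finished: $RESULT"
fi
```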

Agent version 2.173.0 fails to connect to Azure DevOps

Agent Version and Platform
2.173.0
on
centos-release-7-6.1810.2.el7.centos.x86_64
It's a release agent for a deployment pool.
Azure DevOps Type and Version
dev.azure.com (cloud)
What's not working?
# Running run once with agent version 2.160.1
./run.sh --once
Scanning for tool capabilities.
Connecting to the server.
2020-08-25 21:31:02Z: Listening for Jobs
Agent update in progress, do not shutdown agent.
Downloading 2.173.0 agent
Waiting for current job finish running.
Generate and execute update script.
Agent will exit shortly for update, should back online within 10 seconds.
‘/root/azagent/_diag/SelfUpdate-20200825-213148.log’ -> ‘/root/azagent/_diag/SelfUpdate-20200825-213148.log.succeed’
Scanning for tool capabilities.
Connecting to the server.
# this now runs indefinitely
Is there a way to stop the auto update? Multiple agents on production machines are offline and I have, as of now, no idea how to fix that.
agent.log
Edit: It is a Release Agent in a Deployment Group. Also, there is a Github issue now https://github.com/microsoft/azure-pipelines-agent/issues/3093
To resolve the Authentication failed with status code 401 error, you can try the steps below:
1. Create a new PAT with the Manage permission, then reconfigure the agent with the config.sh file.
2. If that doesn't work, try creating a new agent pool to register the new agents.
To stop the auto update, disable this option under Organization settings => Agent Pools => Settings.
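The reconfiguration itself can be scripted. A sketch with placeholder values for the organization URL, pool and token, run from the agent's install directory (e.g. /root/azagent); note that an agent in a deployment group is registered with --deploymentgroup, --projectname and --deploymentgroupname instead of --pool:

```shell
# Sketch only - all values below are placeholders, not real credentials.
# Remove the broken registration first, then re-register with the new PAT.
./config.sh remove --auth pat --token <PAT>

# Unattended re-registration against an agent pool:
./config.sh --unattended \
  --url https://dev.azure.com/<yourorg> \
  --auth pat --token <PAT> \
  --pool <pool-name> \
  --agent "$(hostname)" \
  --acceptTeeEula
```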

Workload Scheduler job won't enable

I'm trying to create a Workload Scheduler job that executes a curl command.
In Steps I've selected Start a program for the step and RP_CLOUD as the Agent(it's the only option). I pasted my curl command into Program.
Now when I try to enable the job I get a popup saying: AWSUI4177E Unable to update the Process. AWSUI4299E An internal error has occurred: AWSPRE001E The user "paul.carron#anaeko.com.5c81ed484ccf4c54aa9e348e" cannot create a job of type "executable" on the "RP_CLOUD" workstation. Download and install a Workload Automation Agent on a different machine.
The curl statement works when executed in my Terminal. What am I doing wrong?
There are some security constraints on running jobs on the agents provided by the infrastructure.
I see two options:
Use the restful job type (since you are invoking a curl command)
Install an agent

How to trigger a build within a build chain after x days?

I am currently using Teamcity to deploy a web application to Azure Cloud Services. We typically deploy using powershell scripts to the Staging Slot and thereafter do a manual swap (Staging to Production) on the Azure Portal.
After the swap, we typically leave the Staging slot active with the old production deployment for a few days (in the event we need to revert/backout of the deployment) and thereafter delete it - this is a manual process.
I am looking to automate this process using Teamcity. My intended solution is to have a Teamcity build kick off x days after the deployment build has succeeded (the details of the build steps are irrelevant, since I'd probably use powershell again to delete the staging slot).
This plan has pointed me to look into Teamcity build chains, snapshot dependencies etc.
What I have done so far is
correctly created the build chain by creating a snapshot dependency on the deployment build configuration and
created a Finish Build Trigger
At the moment, the current approach kicks off the dependent build 'Delete Azure Staging Web' (B) immediately after the deployment build has succeeded. However, I would like this to be a delayed build, run after x days.
Looking at the above build chain, I would like the build B to run on 13-Aug-2016 at 7.31am (if x=3)
I have looked into the Schedule Trigger option as well, but am slightly lost as to how I can use it to achieve this. As far as I understand, using a cron expression will result in the build continuously running which is not what I want - I would like for the build B to only execute once.
Yes, this can be done by making use of the REST API.
I've made a small sample which should convey the fundamental steps. This is a PowerShell script that will clear the triggers on another build configuration (determined by a parameter value in the script) and add a scheduled trigger with a start time X days on from the current time (also determined by a parameter value in the script).
1) Add a PowerShell step to the main build, at the end, and run add-scheduled-trigger as source code
2) Update the parameter values in the script
$BuildTypeId - This is the id of the configuration you want to add the trigger to
$NumberOfDays - This is the number of days ahead that you want to schedule the trigger for
The script has admin / admin embedded as the username / password for REST API authentication.
Once this is done you should see a scheduled trigger created / updated each time you build the first configuration.
Hope this helps
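The same idea can be sketched in shell against TeamCity's REST API if PowerShell isn't an option. Everything below is an assumption-laden sketch: the server URL, the admin / admin credentials and the build configuration id are placeholders, and the scheduling-trigger property names should be checked against an existing trigger on your own server (GET /app/rest/buildTypes/id:<id>/triggers) before relying on them:

```shell
#!/bin/sh
# Sketch only - URL, credentials, configuration id and the trigger property
# names are assumptions, not verified values.

# "minute hour day-of-month month" for N days from now (GNU date).
schedule_fields() {
  date -d "+$1 days" +"%M %H %d %m"
}

if [ -n "${TEAMCITY_URL:-}" ]; then
  NUMBER_OF_DAYS=3                       # x = 3
  BUILD_TYPE_ID="DeleteAzureStagingWeb"  # hypothetical id of build B
  FIELDS=$(schedule_fields "$NUMBER_OF_DAYS")
  # Add a cron-style scheduling trigger that fires at that date/time; the
  # main build re-runs this script on each deployment, replacing the trigger.
  curl -s -u admin:admin -X POST -H "Content-Type: application/xml" \
    -d "<trigger type=\"schedulingTrigger\"><properties>
          <property name=\"schedulingPolicy\" value=\"cron\"/>
          <property name=\"cronExpression\" value=\"0 $FIELDS ?\"/>
        </properties></trigger>" \
    "$TEAMCITY_URL/app/rest/buildTypes/id:$BUILD_TYPE_ID/triggers"
fi
```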

Jenkins-How to schedule a jenkins job when another job completes on remote machine

I have two remote machines: A and B. Both have Jenkins installed.
A: This will build from trunk.
B: To trigger automation.
How can I configure Jenkins job on Machine B, when the build is successful on Machine A?
I had the same requirement because one of the servers I was using belonged to a different company, so while it would have been possible, it was clearly going to take a long time to get buy-in for me to alter their jenkins set-up, even though I was allowed to monitor it and its outputs. If you don't have these restrictions, you should definitely follow the whole master-slave configuration to address this. That said, here is the solution I came up with; I hope to go down the master-slave route myself when possible.
Install the ScriptTrigger plug-in for Jenkins and you can then watch the remote jenkins instance with a script similar to the following:
# Ask the remote Jenkins for the number of its last successful build
LAST_SUCCESSFUL_UPSTREAM_BUILD=`curl http://my.remote.jenkins.instance.com:8080/job/remoteJobName/lastSuccessfulBuild/buildNumber`
# Read the build number we saw on the previous poll (0 on the first poll)
LAST_KNOWN_UPSTREAM_BUILD=`cat $WORKSPACE/../lastKnownUpstreamBuild || echo 0`
# Remember the current number for the next poll
echo $LAST_SUCCESSFUL_UPSTREAM_BUILD > $WORKSPACE/../lastKnownUpstreamBuild
# Exit 1 (i.e. trigger a build) only if a new successful build has appeared
exit $(( $LAST_SUCCESSFUL_UPSTREAM_BUILD > $LAST_KNOWN_UPSTREAM_BUILD ))
Get the ScriptTrigger to schedule a build whenever the exit code is '1'. Set-up a suitable polling interval and there you have it.
This will obviously only schedule a build if the upstream job succeeds. Use "lastBuild" or "lastFailedBuild" instead of "lastSuccessfulBuild" in the URL above as your requirements dictate.
NOTE: Implemented using a BASH shell. May work in other UNIX shells, won't work in Windows.

Interactive service Logged in as user

So we are trying to set up a Continuous Integration server at my company. What we need to do is svn update the working copy on the server, then build it, start the site using IIS Express, and then run Watin/SpecFlow tests on it. I'm using rake inside of CCNet to automate all of this. We are running CCNet as a service and logging in as a build agent, because svn uses our domain login credentials to authenticate; because of this I've been unable to call the command line "svn update --username user --password pass". Yet Watin needs to be run in an interactive mode, and the service won't let me. I'm able to get it to work if we manually log on to the server and run CCNet from the command line. Unfortunately, the Build Agent also logs out of that user account, closing any command lines with it (I don't know why they need it to do this, but they do). So is it possible to run a service in interactive mode if it's signed in as a user?
If you have access to two servers (this can also work from a computer to a server), you can build an automated remote desktop log-in in a Windows Form - see this post: http://www.codeproject.com/Articles/43705/Remote-Desktop-using-C-NET
From one server, log into the server you need to run the Watin tests on, and in the scheduled task have the tests come on after the log-in has happened. This then gives the impression that the service is interacting with the desktop.
If you need any more information let me know