How can I assign a forced build to a particular build slave? - buildbot

I have a builder to which several build slaves are assigned, since the machines are roughly homogeneous. I have a force scheduler set up on this builder as well. Sometimes I would like to force the build to run on a particular build slave. Is this possible?
E.g., can I use the name and value fields of the force build form on the builder's status page?

Use BuildslaveChoiceParameter() in your force build scheduler in master.cfg.
Below is an example from the docs at http://docs.buildbot.net/0.8.12/manual/cfg-schedulers.html#buildslavechoiceparameter:
from buildbot.plugins import util

# schedulers:
ForceScheduler(
    # ...
    properties=[
        BuildslaveChoiceParameter(),
    ]
)

# builders:
BuilderConfig(
    # ...
    canStartBuild=util.enforceChosenSlave,
)
This gives you a pulldown on the force build form in the web interface, with the available build slaves as choices.
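Under the hood, util.enforceChosenSlave is just a canStartBuild callback. A rough Python sketch of an equivalent hook, assuming the parameter's default property name, slavename (the sketch is only to show what it enforces; passing util.enforceChosenSlave itself, as in the docs example, is the supported way):

def enforce_chosen_slave(builder, slavebuilder, buildrequest):
    # The choice made on the force form is stored as the
    # 'slavename' build request property (the parameter's default).
    chosen = buildrequest.properties.getProperty('slavename')
    if chosen:
        return slavebuilder.slave.slavename == chosen
    # Nothing chosen: any slave may start the build.
    return True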

Related

How to pull code from different branches at runtime and pass parameter to NUnit.xml file?

We recently moved from Java (TestNG) to C#/.NET (NUnit), and at the same time migrated from Jenkins to TeamCity. Currently we are facing some challenges while configuring the new build pipeline in TeamCity.
Scenario 1: Our project has multiple branches; we generally pull code from different Git branches and then trigger the automation.
In Jenkins we used to create a build parameter (a list); when a user ran the job, they selected the branch name from the list of build parameters, Git pulled the code from the selected branch, and the execution was triggered.
Can you please help with how to implement a similar process in TeamCity?
How do we configure the default value in the list parameter?
Scenario 2: In Jenkins, build parameters (e.g. browser, environment) were passed into TestNG.xml. When the user selected a browser and environment from the build parameters, TestNG pulled those values at execution time and initiated the regression.
How should we create build parameters (browser, environment) and pass those values to the NUnit config file?
Thanks
Raghu

Tag resources when registering to the environment

I have a pipeline with multiple stages that deploys groups of virtual machines and registers one of them to an Azure Pipelines environment.
Then I want to target that registered VM in a deployment job.
I have a problem targeting that resource by name, as the resource does not exist in the environment at queue time, so I cannot even disable the stage before running.
So my next option is targeting by tags.
But I saw no option in the registration script to define tags at registration time.
So my pipeline flow has a manual step between stages: go to the environment and tag the resource.
Then I can trigger the deployment stage of the pipeline and it continues OK.
So my question is:
Is there any way of disabling the resource evaluation at queue time, or any way to tag resources in the environment programmatically?
Thanks
> But I saw no option in the registration script to define tags at registration time.
When running the registration script, there is a step: "Enter Environment Virtual Machine resource tags? (Y/N) (press enter for N)". At this point you need to enter Y, and then in the next step, "Enter Comma separated list of tags (e.g web, db)", define the tags for the resource.
Update:
You can add --addvirtualmachineresourcetags --virtualmachineresourcetags "<tag>" to the registration script.
You can refer to this case for details.

How can I make chef restart a service with additional parameters passed in?

I have a template for a Rails site's Sphinx configuration. There can be multiple different Sphinx services on the same machine running on different ports, one per app. Therefore, I only want to restart Sphinx for a site if its corresponding configuration template changes. I've created an /etc/init.d/sphinx script that restarts just one Sphinx instance based on a parameter, similar to:
/etc/init.d/sphinx restart /etc/sphinx/site1.conf
Where site1.conf is defined by a Chef template. I'd really love to use the notifies functionality for Chef Templates to pass in the correct site1.conf parameter if the template changes. Is this possible?
Alternatively, I suppose I could just register a different service for each site similar to:
/etc/init.d/sphinx_site1
However, I'd prefer to pass in the parameters to the script instead.
When defining a service resource, you can customize the start, stop, and restart commands that will be run. You can define a service resource for each site that you have using these customized commands and set up their corresponding notifications.
For example:
service "sphinx_site1" do
supports :restart => true
restart_command "/etc/init.d/sphinx restart /etc/sphinx/site1.conf"
action :nothing
end
template "/etc/sphinx/site1.conf" do
notifies :restart, "service[sphix_site1]"
end

Trigger another configuration and send current build status with Jenkins

In a certain Jenkins configuration, I wish to trigger another configuration as a post-build action.
I want to pass the current build status as one of the parameters,
i.e. a string/int that represents the status (SUCCESS/FAIL/UNSTABLE).
I have two options to create post-build triggers:
Using the Join plugin
Using the Trigger Parameterized Build post-build action
I wish there were some kind of accessible env variable at the end of the run...
Any ideas?
Thanks!
Here is a simple solution that will answer most cases:
Use 'Trigger Parameterized Build' plugin, and set two triggers -
'Stable or unstable but not fail'
'Fail'
Each of those triggers should run the same job - let's call it 'JOB_B'.
For each of the triggers, pass whatever parameters you like, and also pass a user-defined value:
for trigger '1' use: JOB_A_STATUS=SUCCESS
for trigger '2' use: JOB_A_STATUS=FAIL
Now, all you need to do is test the value of ${JOB_A_STATUS} from JOB_B, to see if it is set to 'SUCCESS' or 'FAIL'.
Note this solution does not distinguish between 'stable' and 'unstable', but only knows the difference between 'fail' and 'success'.
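For illustration, a build step in JOB_B could branch on the parameter. A minimal Python sketch (the Parameterized Trigger plugin exposes passed parameters to the downstream build as environment variables; the messages are placeholders):

import os

# JOB_A_STATUS is the user-defined value passed by the trigger above.
status = os.environ.get('JOB_A_STATUS', 'UNKNOWN')
if status == 'SUCCESS':
    print('Upstream job succeeded; running the full suite')
else:
    print('Upstream job failed; running diagnostics only')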
Good luck!
You can check the status using a Groovy script in a post-build step via the Groovy Postbuild plugin, which can access Jenkins internals via the Jenkins Java API. The plugin provides the script with a manager variable that can be used to access important parts of the API (see the Usage section in the plugin documentation).
For example, this is how you can output the build result to the build console:
def result = manager.build.result
manager.listener.logger.println "And the result is: ${result}"
// Sketch: write the value to a properties file in the workspace
manager.build.workspace.child("status.properties").write("JOB_A_STATUS=${result}", "UTF-8")
Now, you can use that properties file with the Parameterized Trigger post-build step (it has an option to read parameters from a properties file).
One caveat: I am not sure whether post-build steps can be arranged to execute in a particular order, to ensure that the Groovy post-build step runs before the Parameterized Trigger step.

Support for multiple repositories using Buildbot

Currently Buildbot does not support multiple repositories. If one desires this, separate instances of Buildbot need to be run.
Still I'm curious if anyone has come up with a creative workaround to get this feature working anyway.
Update
This answer received a few downvotes recently; please note that it applies to the releases of Buildbot published/used around the end of 2012/beginning of 2013, and may not be applicable to later versions.
Original Answer
As @Macke said, buildbot (>= 0.8.x) supports multiple projects/repositories. This is done with configuration like the following:
from buildbot.changes.gitpoller import GitPoller
from buildbot.changes.filter import ChangeFilter
from buildbot.schedulers.basic import SingleBranchScheduler

# Set configuration to watch the Git repository for possible
# changes. When a change does occur the schedulers will be
# notified with the project data (TestProj).
c['change_source'] = []
c['change_source'].append(
    GitPoller(
        repourl='git://github.com/SO/my_test_project.git',
        project='TestProj',
        branch='master',
        workdir='/home/buildmaster/repos/TestProj'
    )
)

# Set the scheduler to run on each change, but only for the project
# specified above via the project information.
c['schedulers'] = []
c['schedulers'].append(
    SingleBranchScheduler(
        name="TestProj-master",
        builderNames=['TestProj-master-builder'],
        change_filter=ChangeFilter(
            project='TestProj',
            branch='master'
        )
    )
)
You can see that the project parameter in the change source is then used again in the scheduler's change_filter property to ensure that the scheduler only responds to that particular change source. This allows you to configure multiple change sources and multiple schedulers responding to explicitly chosen change sources.
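For example, a second repository just repeats the pattern (the names and URLs below are hypothetical):

c['change_source'].append(
    GitPoller(
        repourl='git://github.com/SO/my_other_project.git',
        project='OtherProj',
        branch='master',
        workdir='/home/buildmaster/repos/OtherProj'
    )
)
c['schedulers'].append(
    SingleBranchScheduler(
        name="OtherProj-master",
        builderNames=['OtherProj-master-builder'],
        change_filter=ChangeFilter(project='OtherProj', branch='master')
    )
)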
Since the 0.8.7p1 release, buildbot supports multiple codebases.
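A rough sketch of a multi-codebase configuration (the repository URLs and codebase names here are made up; see the codebases section of the buildbot manual for the authoritative version):

# Map repository URLs to codebase names so that incoming
# changes are attributed to the right codebase.
all_repositories = {
    'git://github.com/SO/my_test_project.git': 'main',
    'git://github.com/SO/my_test_libs.git': 'libs',
}

def codebaseGenerator(chdict):
    return all_repositories[chdict['repository']]

c['codebaseGenerator'] = codebaseGenerator

# A scheduler then declares the codebases it deals with:
SingleBranchScheduler(
    name="multi-codebase",
    builderNames=['multi-codebase-builder'],
    codebases={
        'main': {'repository': 'git://github.com/SO/my_test_project.git',
                 'branch': 'master', 'revision': None},
        'libs': {'repository': 'git://github.com/SO/my_test_libs.git',
                 'branch': 'master', 'revision': None},
    }
)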
Indeed, I don't see why you say that it does not support multiple repositories: you can create a poller for each repository and multiple schedulers that watch the different pollers and trigger the builds for many different repositories (either on the same machine where the master runs, or with a dedicated slave on a different box).
You want to avoid multiple instances, but note that, for example, a master and a slave can coexist on the same machine, even if it is a pain to start and stop them in the right order, otherwise you get conflict errors :)
> Currently Buildbot does not support multiple repositories.
I don't really understand the question, sorry. Do you mean that you have to run multiple master servers? That is actually what the buildbot devs advise, but the contrary works for me: you can have in the same master.cfg multiple slaves (columns in the waterfall) and, for each of them, a BuildFactory with different first steps of the type Git(repourl=...) and/or Mercurial(repourl=...) etc.
Each will clone/pull from a different repository, and you can even add further checkouts needed in subsequent steps (using maven or your scm client directly). The only issue with having a single master.cfg file is that all builders share one method for getting notified of changes; we have for example PBChangeSource() (the master is notified by remote code and does nothing itself). If, for instance, you have one SCM with good PBChangeSource support (e.g. svn, hg, git) and another with bad support (e.g. MKS), then you would need two master server instances to cope with that.
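As a sketch, two builders in one master.cfg, each checking out a different repository (the URLs and slave names are placeholders):

from buildbot.process.factory import BuildFactory
from buildbot.steps.source.git import Git
from buildbot.config import BuilderConfig

# One factory per repository; each builder shows up as its
# own column in the waterfall.
f1 = BuildFactory()
f1.addStep(Git(repourl='git://github.com/SO/project_one.git'))

f2 = BuildFactory()
f2.addStep(Git(repourl='git://github.com/SO/project_two.git'))

c['builders'] = [
    BuilderConfig(name='project-one', slavenames=['slave1'], factory=f1),
    BuilderConfig(name='project-two', slavenames=['slave1'], factory=f2),
]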
Hope it'll help.