Support for multiple repositories using Buildbot

Currently Buildbot does not support multiple repositories. If you want this, you need to run separate instances of Buildbot.
Still, I'm curious if anyone has come up with a creative workaround to get this feature working anyway.

Update
This answer received a few downvotes recently; please note that it applies to the releases of buildbot that were published/used around the end of 2012/beginning of 2013 and may not be applicable to later versions.
Original Answer
As @Macke said, buildbot (>= 0.8.x) supports multiple projects/repositories. This is done with configuration like the following:
# Imports for a typical 0.8.x master.cfg
from buildbot.changes.gitpoller import GitPoller
from buildbot.schedulers.basic import SingleBranchScheduler
from buildbot.changes.filter import ChangeFilter

# Set configuration to watch the Git repository for possible
# changes. When a change does occur the schedulers will be
# notified with the project data (TestProj).
c['change_source'] = []
c['change_source'].append(
    GitPoller(
        repourl='git://github.com/SO/my_test_project.git',
        project='TestProj',
        branch='master',
        workdir='/home/buildmaster/repos/TestProj'
    )
)

# Set the scheduler to run on each change, but only for the project
# specified above via the project information.
c['schedulers'] = []
c['schedulers'].append(
    SingleBranchScheduler(
        name='TestProj-master',
        builderNames=['TestProj-master-builder'],
        change_filter=ChangeFilter(
            project='TestProj',
            branch='master'
        )
    )
)
You can see that the project parameter in the change source is then used again in the scheduler's change_filter property to ensure that the scheduler only responds to that particular change source. This allows you to configure multiple change sources and multiple schedulers responding to explicitly chosen change sources.

Since the 0.8.7p1 release, buildbot supports multiple codebases.
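As a rough sketch of the shape that codebase support takes (the repository URLs and codebase names here are hypothetical; c['codebaseGenerator'] maps each incoming change to a codebase name, and schedulers then track a revision per codebase):
all_repositories = {
    'git://github.com/SO/my_test_project.git': 'main_app',
    'git://github.com/SO/my_test_libs.git': 'shared_libs',
}

def codebase_generator(chdict):
    # Assign each incoming change to a codebase, keyed by repository URL.
    return all_repositories[chdict['repository']]

c['codebaseGenerator'] = codebase_generator

c['schedulers'].append(
    SingleBranchScheduler(
        name='TestProj-all-codebases',
        builderNames=['TestProj-master-builder'],
        change_filter=ChangeFilter(branch='master'),
        codebases={
            'main_app': {'repository': 'git://github.com/SO/my_test_project.git',
                         'branch': 'master'},
            'shared_libs': {'repository': 'git://github.com/SO/my_test_libs.git',
                            'branch': 'master'},
        },
    )
)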

Indeed, I don't get why you say that it does not support multiple repositories: you can create a poller for each repository and multiple schedulers that watch the different pollers and trigger builds for many different repositories (either on the same machine where the master runs, or on a dedicated slave on a different box).
You want to avoid having multiple instances, but note that, for example, a master and a slave can coexist on the same machine, even if it is a pain to start and stop them in the right order; otherwise you get conflict errors :)

> Currently Buildbot does not support multiple repositories.
I don't really understand the question, sorry. Do you mean that you have to run multiple master servers? The buildbot devs actually advise doing so, but the contrary works for me: you can have, in the same master.cfg, multiple slaves (columns in the waterfall) and, for each of them, a BuildFactory with different first steps of the type Git(repourl=...) and/or Mercurial(repourl=...) etc.
Each will clone/pull from a different repository, and you can even add further checkouts that are needed in subsequent steps (using Maven or directly your SCM client). The only issue with having a single master.cfg file is that all builders share one method of getting notifications of changes; we have, for example, PBChangeSource() (the master is notified by remote code and has nothing to do itself). If, for instance, you have an SCM with good PBChangeSource support (e.g., svn, hg, git) and another with bad support (e.g., MKS), then you should run two master server instances in order to cope with that. A sketch of such a master.cfg is below.
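A minimal sketch of that setup, assuming a 0.8.x-era master.cfg; the repository URLs and the make commands are placeholders:
from buildbot.process.factory import BuildFactory
from buildbot.steps.source.git import Git
from buildbot.steps.source.mercurial import Mercurial
from buildbot.steps.shell import ShellCommand

# One factory per repository; each builder gets its own checkout step.
git_factory = BuildFactory()
git_factory.addStep(Git(repourl='git://example.com/proj_a.git', mode='incremental'))
git_factory.addStep(ShellCommand(command=['make', 'test']))

hg_factory = BuildFactory()
hg_factory.addStep(Mercurial(repourl='https://example.com/hg/proj_b', mode='incremental'))
hg_factory.addStep(ShellCommand(command=['make', 'test']))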
Hope it'll help.

Related

Jenkinsfile - how to access other github files?

I'm performing an API call in my Jenkinsfile that requires specifying a path to file 'A'. Assuming file A is located in the same repo, I am not sure how to refer to file A when running the Jenkinsfile.
I feel like this has been done before, but I can't find any resource. Any help is appreciated.
You don't say whether you are using a scripted or declarative Jenkinsfile, as the details differ; however, the principle is the same. Basically, to do anything with a file you need to be within a node clause - essentially the controller opens a session on one of the agents and performs actions there. You need to check out your repo on that node.
The scripted Jenkinsfile would look something like (assuming you are not bothered about which node you are running on):
node("") {
checkout scm // "scm" equates to the configuration that the job was run with
// the whole repo will be now available
}
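Once checked out, file A can be referenced by its workspace-relative path. A minimal sketch (the file path here is hypothetical; readFile is the standard Pipeline step for reading a file from the workspace):
node("") {
    checkout scm
    // After checkout, paths are relative to the workspace (repo) root.
    def fileA = readFile 'config/fileA.json'
    echo "Read ${fileA.length()} characters from file A"
}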

Is there a way in Terraform Enterprise to read the payload from VCS?

I have configured a webhook between GitHub and Terraform Enterprise correctly, so each time I push a commit, the Terraform module gets executed. What I want to achieve is to use part of the branch name where the push was made and pass it as a variable to the Terraform module.
I have read that the value of a variable can be HCL code, but I am unable to find the correct object to access the payload (or at least the branch name), so at this moment I think it is not possible to get that value directly from the workspace configuration.
If you have a workaround for this, it may also work for me.
At this point the only idea I have is to call the Terraform webhook using an API call.
Thanks in advance
OK, after several rounds of trial and error I found out that it is not possible to get any of this information in the Terraform module if you are using the VCS mode. So, in order to get the branch, I have these options:
Use several workspaces
You can configure a workspace for each branch, and then create a variable and select that branch in each workspace. The problem is that you will be repeating yourself with this option.
Use the Terraform CLI and a GitHub Action
I used this fine tutorial from HashiCorp for creating a GitHub Action that uses Terraform Cloud. It gets 99% of the job done. For passing a variable, be aware that there are two methods: using a file or using an environment variable (check that information on the HashiCorp site here). So using:
terraform apply -var="branch=value"
won't work. In my case I used the tfvars approach, so in my Github Action I put this snippet:
- name: Setup Terraform variables
  id: vars
  run: |-
    cat > terraform.auto.tfvars <<EOF
    branch = "${GITHUB_REF#refs/*/}"
    EOF
Having defined a variable called branch within Terraform, I was able to read and work with this value.
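On the Terraform side, the matching declaration might look like this (a minimal sketch; the description text is illustrative):
variable "branch" {
  type        = string
  description = "Branch name written to terraform.auto.tfvars by the GitHub Action"
}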

How to read multiple config file from Spring Cloud Config Server

Spring Cloud Config Server supports reading property files named ${spring.application.name}.properties. However, I have two properties files in my application:
a.properties
b.properties
Can I get the config server to read both these properties files?
Rename your properties files in the Git repository or file system that your config server points at:
a.properties -> <your_application_name>.properties
b.properties -> <your_application_name>-<profile-name>.properties
For example, if your application name is test and you are running your application with the dev profile, the two properties files below will be used together:
test.properties
test-dev.properties
Also, you can specify additional profiles in the bootstrap.yml of your config client to retrieve more properties files, like below:
spring:
  profiles: dev
  cloud:
    config:
      uri: http://yourconfigserver.com:8888
      profile: dev,dev-db,dev-mq
If you specify the above, all the files below will be used together:
test.properties
test-dev.properties
test-dev-db.properties
test-dev-mq.properties
Note that the provided answer assumes your property files address different execution profiles. If they don't, i.e., your properties are split into different files for some other reason (e.g., maintenance purposes, division by business/functional domain, or anything else that suits your needs), then, by defining a profile for each such file, you are just "abusing" the profile feature to achieve your goal (multiple property files per app).
You could then ask, "OK, so what is the problem with that?". The problem is that you restrain yourself from various possibilities that you would otherwise have. If you actually want to customize your application configuration by profile, you will have to create pseudo sub-profiles, since the file name is already a profile. Example:
Your application configuration could be customized by different profiles, which you use inside your Spring Boot application (e.g. in the @Profile() annotation); let them be dev, uat, prod. You can boot your application with different profiles active, e.g. 'dev' vs 'uat', and get the group of properties you desire. For your a.properties, b.properties and c.properties files, if different file names were supported, you would have a-dev.properties, b-dev.properties and c-dev.properties files vs a-uat.properties, b-uat.properties and c-uat.properties files for the 'dev' and 'uat' profiles.
Nevertheless, with the provided solution, you have already defined three profiles, one for each file (appname-a.properties, appname-b.properties and appname-c.properties): a, b, and c. Now imagine you have to create a different profile for each... profile (it already shows that something goes wrong here)! You would end up with a lot of profile permutations (which would get worse as files increase): the files would be appname-a-dev.properties, appname-b-dev.properties, appname-c-dev.properties vs appname-a-uat.properties, appname-b-uat.properties, appname-c-uat.properties, but the profiles would have grown from ['dev', 'uat'] to ['a-dev', 'b-dev', 'c-dev', 'a-uat', 'b-uat', 'c-uat']!
Even worse, how are you going to cope with all these profiles inside your code, and more specifically in your @Profile() annotations? Will you clutter the code with "artificial" profiles just because you want to add one or two more property files? It should have been sufficient to define your dev or uat profiles where applicable, and define somewhere else the applicable property file names (which could then be further customized per profile, without any other configuration action), just as happens with the externalized properties configuration for individual Spring Boot apps.
For argument's completeness, I will just add that if you want to switch to .yml property files one day, with the provided profile-based naming solution you also lose the ability to define different "YAML document sections per profile" inside the same .yml file. (Yes, in .yml you can have one property file yet define multiple logical YAML documents inside it, which is usually done to customize the properties for different profiles while keeping all related properties in one place.) You lose that ability because you have already used the profile in the file name (appname-profile.yml).
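For illustration, such a multi-document .yml file might look like this (a minimal sketch; the property name is a placeholder, and the spring.profiles key marks the profile-specific section):
# application.yml: one file, two logical YAML documents
some.feature.enabled: false
---
spring:
  profiles: dev
some.feature.enabled: true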
I have issued a pull request with a minor fix for spring-cloud-config-server 1.4.x, which allows defining additionally supported file names (apart from "application[-profile]" and "{appname}[-profile]", which are currently supported) by providing a spring.cloud.config.server.searchNames environment property - analogous to spring.config.name for Spring Boot apps. I hope it gets reviewed and accepted.
I came across the same requirement lately, with the additional constraint that I am not allowed to play around with the environment profiles, so I couldn't do as the accepted answer suggests. I'm sharing how I did it as an alternative for those who might have the same case as me.
In my application, I have properties such as:
appxyz-data-soures.properties
appxyz-data-soures-staging.properties
appxyz-data-soures-production.properties
appxyz-interfaces.properties
appxyz-interfaces-staging.properties
appxyz-interfaces-production.properties
appxyz-feature.properties
appxyz-feature-staging.properties
appxyz-feature-production.properties
application.properties // for my use, contains local properties only
bootstrap.properties // for my use, contains management properties only
In my application, I have the following properties set, which allow me to achieve what I needed. Note that I have the rest of the needed config as well (enabling cloud config, actuator refresh, Eureka service discovery and so on); these two are just highlighted for emphasis:
spring.application.name=appxyz
spring.cloud.config.name=appxyz-data-soures,appxyz-interfaces,appxyz-feature
You can observe that I didn't want to play around with my application name; instead, I used it as a prefix for my config property files.
In my configuration server, I configured application.yml to capture the pattern 'appxyz-*':
spring:
  cloud:
    config:
      server:
        git:
          uri: <git repo default>
          repos:
            appxyz:
              pattern: 'appxyz-*'
              uri: <another git repo if you have 1 repo per app>
              private-key: ${git.appxyz.pk}
              strict-host-key-checking: false
              ignore-local-ssh-settings: true
          private-key: ${git.default.pk}
In my Git repository I have the following. There is no application.properties or bootstrap.properties because I didn't want those to be published and overridden/refreshed externally, but you can include them if you want.
appxyz-data-soures.properties
appxyz-data-soures-staging.properties
appxyz-data-soures-production.properties
appxyz-interfaces.properties
appxyz-interfaces-staging.properties
appxyz-interfaces-production.properties
appxyz-feature.properties
appxyz-feature-staging.properties
appxyz-feature-production.properties
The pattern 'appxyz-*' is what captures and returns the matching files from my Git repository. The profile will also apply and fetch the correct property file accordingly. The prioritization of values is also preserved.
Furthermore, if you wish to add one more file to your application (say appxyz-circuit-breaker.properties), you only need to:
Add the name to the list in spring.cloud.config.name=...,appxyz-circuit-breaker
Add copies of the file locally and also externally (in the Git repo).
There is no need to add/modify anything more or restart your configuration server later on. For a new application, it's a one-time registration to add an entry under the repos section of application.yml.
Hope it helps in one way or another!
In your application's bootstrap.properties, you have to specify it like below:
spring.application.name=a,b

Serving multiple repos with hg serve. How?

The wiki mentions it's possible to do this under hg serve, but there aren't any examples (such as a sample webdir-conf file). Yes, I know it would be better to do this all under Apache, but this is a local machine and hg serve just makes sense for us.
As you've hinted, you use the hg serve --webdir-conf FILE invocation, and the webdir-conf format is the same as it is for hgweb.cgi, so those examples apply to you too:
https://www.mercurial-scm.org/wiki/HgWebDirStepByStep#Preparing_the_config
so at your most basic you can do:
[paths]
/repos = /webdata/hg_repos/*
where /webdata/hg_repos/ is the directory on your local system containing the repositories.
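Serving it is then just (the config file name and port here are illustrative):
hg serve --webdir-conf webdir.conf --port 8000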
(and you're right it would be much better to take the time to do this under Apache).
use this in your webdir config (for example):
cat > foo.config <<EOL
[paths]
power = power/Repo
billable = /path/to/billable/Repo
EOL
hg serve --webdir-conf foo.config
Assuming your repos live in different places...
As an alternative you can use RhodeCode, a standalone app written in Pylons.
"RhodeCode is a Pylons framework based Mercurial repository browser/management with built-in push/pull server, full text search and a permissions system."
A demo can be viewed at:
http://demo.rhodecode.org
Regards

Merging 2 clearcase views on different Servers?

I'm in a bit of a pickle...
I work on a project that is multi-site. Unfortunately, the VOB sync between the two sites is not working properly right now, and our Clearcase Admins are too busy doing other work to get it fixed.
I need to take code from a Dynamic View on one server and merge it to a Dynamic View on another server.
Usually we check everything in, label it, and then, once the VOB syncs, merge from the label on the other side.
Any tips or tricks on how to do this merge?
Ok, here's what I've got so far:
- I made sure that my source view and my target view were based on the same (slightly older) label that had synced properly.
Running the following command tells me what files have changed in my branch on the 1st server:
ct find . -version 'version (.../branch-name/LATEST)' -nxn -print
Running this command will give me a GNU style diff against the labeled version:
ct diff -diff FILENAME `cleartool find FILENAME -version 'lbtype(LABEL)' -print`
Now I need to chain these together to create a patchset file that I can then merge into the 2nd view (based on the same label) with GNU merge.
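A hedged sketch of that chaining in a Unix shell (untested; branch-name, LABEL and the output file name are placeholders):
# For every file changed on the branch, diff it against its labeled
# version and append the result to one patchset file.
cleartool find . -version 'version(.../branch-name/LATEST)' -nxn -print |
while read f; do
    cleartool diff -diff "$f" \
        "$(cleartool find "$f" -version 'lbtype(LABEL)' -print)" >> branch.patchset
done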
You need to get the data back somehow from the other site of the replicated environment.
If the mkreplica did work but the ship process failed, you could try asking for a shared file replica, which could then be imported (see the mkreplica help, section Imports):
multitool mkreplica -export -workdir /tmp/ms_workdir -c "make a new replica for sanfran_hub" -out /tmp/sanfran_hub_packet
multitool mkreplica -import -workdir /tmp/ms_workdir -tag /vobs/dev -vob /net/goldengate/vobstg/dev.vbs -preserve -c "create sanfran_hub replica" /tmp/sanfran_hub_packet
But if your CC admins are that busy, all that is left is the "poor man's replica":
some kind of zip, and a merge with a third-party tool between your local view and said zip.
I am not sure you could extract any relevant data from a source dynamic view, which would not be up to date anyway.
The admins finally got around to cleaning it up before I could finish my solution, so I don't need this anymore. Hopefully they will keep it up and running.