Serving multiple repos with hg serve. How?

The wiki mentions it's possible to do this under hg serve, but there aren't any examples (such as a sample webdir-conf file). Yes I know it would be better to do this all under Apache, but this is a local machine and hg serve just makes sense for us.

As you've hinted, you use the hg serve --webdir-conf FILE invocation, and the webdir-conf format is the same as for hgweb.cgi, so those examples apply to you too:
https://www.mercurial-scm.org/wiki/HgWebDirStepByStep#Preparing_the_config
So at its most basic you can do:
[paths]
/repos = /webdata/hg_repos/*
where /webdata/hg_repos/ is the directory on your local system containing the repositories, and /repos is the URL prefix under which they will be served.
(and you're right it would be much better to take the time to do this under Apache).
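Putting it together, a session might look like this (the port and file name are just examples):
hg serve --webdir-conf webdir.conf --port 8000
# then browse to http://localhost:8000/ for the index of served repositories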

Use this in your webdir config (for example):
cat > foo.config <<EOL
[paths]
power = power/Repo
billable = /path/to/billable/Repo
EOL
hg serve --webdir-conf foo.config
Assuming your repos live in different places...

As an alternative, you can use RhodeCode, a standalone app written in Pylons.
"RhodeCode is a Pylons framework based Mercurial repository browser/management with built-in push/pull server, full text search, and a permissions system."
A demo can be viewed at http://demo.rhodecode.org

Related

Jenkinsfile - how to access other GitHub files?

I'm performing an API call in my Jenkinsfile that requires specifying a path to file 'A'. Assuming file A is located in the same repo, I am not sure how to refer to file A when running the Jenkinsfile.
I feel like this has been done before, but I can't find any resource. Any help is appreciated.
You don't say whether you are using a scripted or declarative Jenkinsfile, as the details differ; the principle, however, is the same in both cases. Basically, to do anything with a file you will need to be within a node block: essentially the controller opens a session on one of the agents and performs its actions there. You need to check out your repo on that node:
The scripted Jenkinsfile would look something like this (assuming you are not bothered about which node you run on):
node("") {
checkout scm // "scm" equates to the configuration that the job was run with
// the whole repo will be now available
}
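Once the repo is checked out, paths resolve relative to the workspace root, so file A can be referenced by its in-repo path. A minimal sketch, assuming a hypothetical location for file A:
node {
    checkout scm
    // 'data/fileA.json' is a stand-in for wherever file A lives in your repo
    def contents = readFile 'data/fileA.json'
    echo "file A is ${contents.length()} characters long"
}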

Version Control a file with per machine dependent stuff

I have a config file in my project that includes some info that is per machine dependent (db username, password, path). I understand that in this particular case, I could enforce everybody to use the same username, db path, and password to keep this simple, but there must be another way to deal with this problem.
I use mercurial, if you care, but I am ok with just a theoretical answer if you are unfamiliar with hg specifics.
A common way to handle this is to put a config.example or similar under version control and force the user to copy it and make any necessary changes. That way the user can pull down the overall structure of the file from your repository without overwriting local changes.
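That convention might look like this (the file names are just examples):
# the template is tracked; the real config file is not
cp config/dbconfig.example config/dbconfig.ext
$EDITOR config/dbconfig.ext   # fill in this machine's credentials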
Alternatively, you could make your config file provide only defaults, with the option to source a subset of variables from a higher-priority custom config file (in the same format) which the user may or may not provide.
You'll want to use the .hgignore file to not include the config file in the repository.
This will allow everyone to have their own version of the config file.
Basically, you just add the relative path of the config file and Mercurial commands will ignore it. Using glob syntax, the file would look like this:
syntax: glob
config/dbconfig.ext
Edit
I just realized you still want to be able to version control the config file (I misunderstood the question). So I suggest moving the machine-dependent parts of the config file into their own config file and then applying the fix above. That way, you can still keep the general config information under version control while the per-machine part stays separate on each person's machine.
I have per machine databases for my PHP projects. What I do is check the hostname at runtime. If it is one host, I feed it certain credentials. If another, feed it different credentials.
On some systems I create a list of credentials and then just go down the line trying them until one of the connections works. If the list is exhausted, the connection cannot be made.
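A sketch of that try-each-credential approach (hostnames and credentials are made up):
<?php
// candidate credential sets, tried in order until one connects
$candidates = [
    ['host' => 'localhost',   'user' => 'dev',  'pass' => 'dev'],
    ['host' => 'db.internal', 'user' => 'live', 'pass' => 'secure'],
];
$db = null;
foreach ($candidates as $c) {
    $db = @mysqli_connect($c['host'], $c['user'], $c['pass']);
    if ($db) break; // first working connection wins
}
if (!$db) die('No database credentials worked on ' . gethostname());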
I've never found a solid method for handling this type of configuration files. My final solution was to just maintain a version of each file and use symbolic links. That way each server has the same file path, but different root file.
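For example (the file names are made up), every checkout keeps the same path while each machine links it to a different tracked file:
cd config
ln -s dbconfig.live.ext dbconfig.ext    # on the live server
ln -sf dbconfig.dev.ext dbconfig.ext    # on a dev box instead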
Without knowing exactly what is in your config file, I'm going to assume your file has some stuff that is machine-dependent (e.g., db password, paths) and other stuff that is not (db hostname, maybe some paths relative to a path that is configured on a per-machine basis, etc.)
If that's the case, what you want to do is refactor your config file so that you have two config files: one for the common stuff, one for the machine-specific stuff. Check the common one in, and add the machine-specific configuration to the ignore file.

Support for multiple repositories using Buildbot

Currently Buildbot does not support multiple repositories. If one desires to have this then separate instances of Buildbot need to be run.
Still I'm curious if anyone has come up with a creative workaround to get this feature working anyway.
Update
This answer received a few downvotes recently, please note that this answer applies to the releases of buildbot that were published/used around the end of 2012/beginning of 2013 and may not be applicable for future versions.
Original Answer
As @Macke said, buildbot (>= 0.8.x) supports multiple projects/repositories. This is done with configuration like the following:
# The pollers and schedulers used below live in these modules
# (module paths for the buildbot 0.8.x series).
from buildbot.changes.gitpoller import GitPoller
from buildbot.changes.filter import ChangeFilter
from buildbot.schedulers.basic import SingleBranchScheduler

# Set configuration to watch the Git repository for possible
# changes. When a change does occur the schedulers will be
# notified with the project data (TestProj).
c['change_source'] = []
c['change_source'].append(
    GitPoller(
        repourl='git://github.com/SO/my_test_project.git',
        project='TestProj',
        branch='master',
        workdir='/home/buildmaster/repos/TestProj'
    )
)

# Set the scheduler to run on each change, but only for the project
# specified above via the project information.
c['schedulers'] = []
c['schedulers'].append(
    SingleBranchScheduler(
        name="TestProj-master",
        builderNames=['TestProj-master-builder'],
        change_filter=ChangeFilter(
            project='TestProj',
            branch='master'
        )
    )
)
You can see that the project parameter in the change source is then used again in the scheduler's change_filter property to ensure that the scheduler only responds to that particular change source. This allows you to configure multiple change sources and multiple schedulers responding to explicitly chosen change sources.
Since the 0.8.7p1 release, buildbot supports multiple codebases
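That release lets the schedulers track several repositories by mapping each change onto a named codebase; a minimal sketch in the style of that era's configuration (the repository URLs are examples):
# map repository URLs to codebase names (buildbot >= 0.8.7)
all_repositories = {
    'git://github.com/SO/my_test_project.git': 'TestProj',
    'git://github.com/SO/my_other_project.git': 'OtherProj',
}

def codebaseGenerator(chdict):
    return all_repositories[chdict['repository']]

c['codebaseGenerator'] = codebaseGenerator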
Indeed, I don't see why you say it does not support multiple repositories: you can create a poller for each repository and multiple schedulers that watch the different pollers and trigger builds for many different repositories (either on the same machine where the master runs, or on a dedicated slave on a different box).
You want to avoid multiple instances, but, for example, master and slave can coexist on the same machine, even if it is a pain to start and stop them in the right order; otherwise you get conflict errors :)
> Currently Buildbot does not support multiple repositories.
I don't really understand the question, sorry. Do you mean that you have to run multiple master servers? The buildbot devs actually advise doing so, but the contrary works for me: you can have in the same master.cfg multiple slaves (columns in the waterfall) and, for each of them, a BuildFactory with different first steps of the type Git(repourl=...) and/or Mercurial(repourl=...) etc.
Each will clone/pull from a different repository, and you can even add further checkouts that are needed in subsequent steps (using maven or your scm client directly). The only issue with having a single master.cfg file is that all builders share one method for getting change notifications; we have, for example, PBChangeSource() (the master is notified by remote code and has nothing to do itself). If, for instance, you have one SCM with good PBChangeSource support (e.g., svn, hg, git) and another with poor support (e.g., MKS), then you would need two master server instances to cope with that.
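A sketch of that single-master.cfg layout under the 0.8.x APIs (repository URLs and names are examples):
from buildbot.process.factory import BuildFactory
from buildbot.steps.source.git import Git
from buildbot.steps.source.mercurial import Mercurial
from buildbot.config import BuilderConfig

# one factory per repository, each starting with its own checkout step
f_projA = BuildFactory()
f_projA.addStep(Git(repourl='git://example.com/projA.git', mode='incremental'))

f_projB = BuildFactory()
f_projB.addStep(Mercurial(repourl='https://example.com/hg/projB', mode='incremental'))

c['builders'] = [
    BuilderConfig(name='projA-builder', slavenames=['slave1'], factory=f_projA),
    BuilderConfig(name='projB-builder', slavenames=['slave1'], factory=f_projB),
]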
Hope it'll help.

Merging 2 clearcase views on different Servers?

I'm in a bit of a pickle...
I work on a project that is multi-site. Unfortunately, the VOB sync between the two sites is not working properly right now, and our Clearcase Admins are too busy doing other work to get it fixed.
I need to take code from a Dynamic View on one server and merge it to a Dynamic View on another server.
Usually we check everything in, label it, and then once the VOB syncs merge from the label on the other side.
Any tips or tricks on how to do this merge?
Ok, here's what I've got so far:
- I made sure that my source view & my target view were based on the same (slightly older) label that had synced properly.
Running the following command tells me what files have changed in my branch on the 1st server:
ct find . -version 'version(.../branch-name/LATEST)' -nxn -print
Running this command will give me a GNU style diff against the labeled version:
ct diff -diff FILENAME `cleartool find FILENAME -version 'lbtype(LABEL)' -print`
Now I need to chain these together to create a patchset file that I can then use GNU merge to apply to the 2nd view, which is based on the same label.
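A rough, untested sketch of that chaining (branch and label names are placeholders), concatenating the per-file diffs into one patch file:
# for every file changed on the branch, diff it against its labeled version
cleartool find . -version 'version(.../branch-name/LATEST)' -nxn -print |
while read -r f; do
    cleartool diff -diff "$f" \
        "$(cleartool find "$f" -version 'lbtype(LABEL)' -print)" >> branch.patch
done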
You need to get the data back somehow from the other site of the replicated environment.
If the mkreplica did work but the ship process failed, you could try asking for a shared file replica, which could then be imported (see mkreplica help, section Imports):
multitool mkreplica -export -workdir /tmp/ms_workdir -c "make a new replica for sanfran_hub" -out /tmp/sanfran_hub_packet
multitool mkreplica -import -workdir /tmp/ms_workdir -tag /vobs/dev -vob /net/goldengate/vobstg/dev.vbs -preserve -c "create sanfran_hub replica" /tmp/sanfran_hub_packet
But if your CC admins are that busy, all that is left is the "poor man's replica":
some kind of zip, and a merge with a third-party tool between your local view and said zip.
I am sure you could extract the relevant data from a source dynamic view, which would not be fully up-to-date anyway.
The admins finally got around to cleaning it up before I could finish my solution, so I don't need this anymore. Hopefully they will keep it up and running.

What's the best Perl module for hierarchical and inheritable configuration?

If I have a greenfield project, what is the best practice Perl based configuration module to use?
There will be a Catalyst app and some command line scripts. They should share the same configuration.
Some features I think I want ...
Hierarchical Configurations to cleanly maintain different development and live settings.
I'd like to define "global" configurations once (eg, results_per_page => 20), have those inherited but override-able by my dev/live configs.
Global:
    results_per_page: 20
    db_dsn: DBI:mysql;
    db_name: my_app

Dev:
    inherit_from: Global
    db_user: dev
    db_pass: dev

Dev_New_Feature_Branch:
    inherit_from: Dev
    db_name: my_app_new_feature

Live:
    inherit_from: Global
    db_user: live
    db_pass: secure
When I deploy a project to a new server, or branch/fork/copy it somewhere new (eg, a new development instance), I want to (one time only) set which configuration set/file to use, and then all future updates are automatic.
I'd envisage this could be achieved with a symlink:
git clone example.com:/var/git/my_project . # or any equiv vcs
cd my_project/etc
ln -s live.config to_use.config
Then in the future
git pull # or any equiv vcs
I'd also like something akin to FindBin, so that my configs can use either absolute paths or paths relative to the current deployment. Given:
/home/me/development/project/
    bin
    lib
    etc/config
where /home/me/development/project/etc/config contains:
tmpl_dir: templates/
when my Perl code looks up the tmpl_dir configuration, it'll get:
/home/me/development/project/templates/
But on the live deployment:
/var/www/project/
    bin
    lib
    etc/config
The same code would magically return
/var/www/project/templates/
Absolute values in the config should be honoured, so that:
apache_config: /etc/apache2/httpd.conf
would return "/etc/apache2/httpd.conf" in all cases.
Rather than a FindBin style approach, an alternative might be to allow configuration values to be defined in terms of other configuration values?
tmpl_dir: $base_dir/templates
I'd also like a pony ;)
Catalyst::Plugin::ConfigLoader supports multiple overriding config files. If your Catalyst app is called MyApp, then it has three levels of override: 1) MyApp.pm can have a __PACKAGE__->config(...) directive; 2) it next looks for MyApp.yml in the main directory of the app; 3) it looks for MyApp_local.yml. Each level may override settings from the previous levels.
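The first level might look like this in MyApp.pm (a minimal sketch reusing a value from the question):
__PACKAGE__->config(
    name             => 'MyApp',
    results_per_page => 20,   # an immutable "global" default
);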
In a Catalyst app I built, I put all of my immutable settings in MyApp.pm, my debug settings in MyApp.yml, and my production settings in MyApp_<servertype>.yml and then symlinked MyApp_local.yml to point at MyApp_<servertype>.yml on each deployed server (they were all a little different...).
That way, all of my config was in SVN and I just needed one ln -s step to manually config a server.
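That one-time step was just (the production file name here is hypothetical):
cd MyApp
ln -s MyApp_production.yml MyApp_local.yml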
Perl Best Practices warns against exactly what you want. It states that config files should be simple and avoid the sort of baroque features you desire. It goes on to recommend three modules (none of which are Core Perl): Config::General, Config::Std, and Config::Tiny.
The general rationale behind this is that config files tend to be edited by non-programmers, and the more complicated you make your config files, the more likely they are to screw them up.
All of that said, you might take a look at YAML. It provides a full-featured, human-readable* serialization format. I believe the currently recommended parser in Perl is YAML::XS. If you do go this route, I would suggest writing a configuration tool for end users to use instead of having them edit the files directly.
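Reading such a file is then trivial; a minimal sketch using the question's sample structure:
use strict;
use warnings;
use YAML::XS qw(LoadFile);

# LoadFile parses the YAML document into a nested Perl hashref
my $config = LoadFile('MyApp.yml');
print $config->{Global}{results_per_page}, "\n";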
ETA: Based on Chris Dolan's answer it sounds like YAML is the way to go for you since Catalyst is already using it (.yml is the de facto extension for YAML files).
* I have heard complaints that blind people may have difficulty with it
YAML is hateful for config: it's not non-programmer friendly, partly because YAML inside POD is by definition broken, since both are whitespace-dependent in different ways. That's the main problem Config::General avoids. I've written some quite complicated config files with C::G in the past, and it really keeps out of your way in terms of syntax requirements. Other than that, Chris's advice seems on the money.