Best way to fingerprint a Scala application

I'm developing a Web application in Scala that we deploy in several testing environments. In order to control which software snapshot is installed, I'd like to include a version fingerprint in the generated .war so I can query it using a REST interface.
My plan would be to set up an SBT task that retrieves the Mercurial repository version and the current project version from the project definition, and composes a static string to be read by the aforementioned service. Is this the right approach?
What are common patterns for getting this functionality?
Regards.

The idea is to generate a file with the right information, and then have an SBT task take care of including that file in the generated war.
For the file, you can see the right Mercurial command in "How to display current working copy version of an hg repository on a PHP page", used as a post-update hook:
[hooks]
post-update = hg id -n > VERSION ; hg id -i >> VERSION
That means you won't have to call any Mercurial command from SBT: updating the Mercurial repo will be enough to trigger the generation of that file.
The comments on that linked answer also mention this possible hg command:
hg log -r . --template "v{latesttag}-{latesttagdistance}-{node|short}\n"
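If you prefer composing the fingerprint yourself (project version plus Mercurial id, as described in the question), a minimal shell sketch could look like this; the version value and the output path are illustrative assumptions, so adapt them to your build:
#!/bin/sh
# Compose a version fingerprint from the project version and the current
# Mercurial changeset id, and write it where the build will package it
# into the .war (path and version value are placeholders).
PROJECT_VERSION="1.0-SNAPSHOT"
HG_ID=$(hg id -i)
printf '%s-%s\n' "$PROJECT_VERSION" "$HG_ID" > src/main/webapp/version.txt
Your REST service then only has to return the contents of version.txt.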

Related

Creating a new repo from a directory of old repo

I am very new to Bazaar and I am exploring its features (and those of version control systems in general).
I have a Bazaar repo; let's call it 'foo'. Under the foo repo I have a directory; let's call it 'projects'.
So I want to create a separate Bazaar repo with only the projects directory, and I want to retain the log too. That is, everything in the log that relates to the projects folder should be available in this new repo.
I tried the export command, but I just got the directory without any log.
Any pointers on where I should look?
You can do this using the fastimport plugin:
bzr fast-export /path/to/orig/project | \
bzr fast-import-filter -i project1/ | \
bzr fast-import - /path/to/new/project1
(I broke the command across lines for readability.)
The first command dumps the revisions of the branch at the specified path to standard output.
The second command filters the revisions, selecting only the ones that affect the project1/ directory. The trailing / is important.
The third command imports the revisions from the standard input to the specified branch. If the branch does not exist, bzr will create a shared repository with a branch named trunk in it.
For more details, see the help pages:
bzr help fast-export
bzr help fast-import-filter
bzr help fast-import
The fastimport plugin is included in the default installation on Windows and Mac OS X. If you have a more exotic setup, I recommend installing it with pip. I don't remember the package name 100%; it's probably bzr-fastimport. You will also need the fastimport Python library.
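If you go the pip route, the installation would look roughly like this; the package names follow the answer's recollection, so verify them before installing:
pip install fastimport bzr-fastimport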

Should I include configure and makefile in a github repository?

We recently moved from subversion to git, and then to Github, for several open source projects. Github was nice in that it provided a lot of functionality. One of the things I particularly like is the ability to download tags as zip or .tar.gz files.
Unfortunately GitHub recently discontinued downloads. That shouldn't be a problem given the ability to download tags. However, in the past we have not put a Makefile, configure script, or any other autoconf-generated files into the repo, because they cause lots of conflicts when people merge.
What's the proper way to handle this?
Should I put autoconf and automake-generated files in the repo so people can download tags directly?
Or should there be a bootstrap.sh file that people are told to run?
Or should I just do a make dist and put that into the repo?
Thanks
Publish the output of make dist via GitHub Releases
Your first option—putting the Autoconf- and Automake-generated files into the repository—is not a good idea. It's almost never beneficial to store generated files in source control. In this case, it's going to pollute your history with a lot of unnecessary and potentially conflicting commits, particularly if not all your contributors are using the same version of Autotools. Your third option—checking in the output of make dist—is a bad idea for exactly the same reasons as the first option.
Your second option—adding a "bootstrap" script that calls Autoconf and Automake to generate the configure scripts—is also a bad idea. This defeats the entire purpose of Autotools, which is to make your source portable across systems—including those for which Autotools is not available! (Consider what would happen if someone wanted to build and install your software on a machine on which they don't have root access, and where the GNU Build System is not installed. A bootstrap script is not going to help them because they'd first need to make a local installation of Autotools and possibly all its dependencies.)
The proper way of releasing code that uses Autotools is to produce a tarball with make dist (or better yet, make distcheck, since this will also run tests and do other sanity checks), and then publish this tarball somewhere other than the source repository.
Your original question, from April 2013, states that GitHub discontinued download pages. However, in July 2013, GitHub added a "Releases" feature that not only pre-packages your source tags, but also allows you to attach arbitrary files to each release. So on GitHub, the Releases page is where you should publish your make dist tarballs (and preferably also the detached GnuPG signatures of them).
Basic steps
1. When you are ready to make a release, tag it and push the tag to GitHub:
$ git tag 1.0 # Also use -s if desired
$ git push --tags
2. Use your Makefile to produce a tarball:
$ make dist # Alternatively, 'make distcheck'
3. Visit the GitHub page for your project and follow the "releases" link.
You will be taken to the Releases page for your project. The first time you visit, all you will see is a list of tags and automatically produced tarballs from the source tree.
Press the "Draft a new release" button.
You will then be presented with a form in which you should fill in the Git tag associated with the release and an optional title and description. Below this there is also a file selector labelled "Attach binaries by dropping them here or selecting them". Use this to upload the tarball you created in Step 2 (and maybe also a detached GnuPG signature of it).
When you're done, press the "Publish release" button.
Your project's Releases page will now display the release, including prominent download links for the attached files.
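If you also want to attach the detached GnuPG signature mentioned in Step 5, it can be produced like this (the tarball name is illustrative):
gpg --armor --detach-sign foo-1.0.tar.gz
This writes an ASCII-armored signature to foo-1.0.tar.gz.asc, which you can upload alongside the tarball.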
If you don't want to use GitHub Releases, then as pointed out in a previous answer, you should upload the tarballs somewhere else, such as your own website or FTP site. Add a link to this repository from your project's README.md so that users can find it.
The second option is better: you want any user of your repo to be up and running as fast as possible, regenerating whatever they need in order to build your program.
Since Git is very much a version control system for text (as opposed to an artifact repository like Nexus), providing a way to generate the final binary is the way to go.
When you cut a release, upload the result of make distcheck to your project's download page: it's a Makefile target that builds the tarball and verifies that it installs, uninstalls, passes tests, and passes other sanity checks. GitHub being wrong-headed isn't an excuse: create a tree like this in your repo:
/
/source
/source/configure.ac
/source/Makefile.am
/source/...
/releases
/releases/foo-0.1.tar.gz
/releases/...
For developers, you should not have generated files in source control. Many modern autotooled projects bootstrap fine off an invocation of autoreconf -i.

How to get revision number from Mercurial repository and paste it to NetBeans resource bundle?

I have a Java project in NetBeans and I'm using Mercurial for version control.
I want to see my project's version number in the about box, and I want it to be updated according to the Mercurial revision number.
Any ideas how to do it? :)
Following 'Version numbering for auto builds with Mercurial', you can record in a VERSION.TXT file (which your about dialog would display) the result of:
hg log -r . --template '{latesttag}-{latesttagdistance}-{node|short}'
Lazy Badger comments:
log will be a lot better (and correct) with:
hg log -r tip --template "{latesttag}.{latesttagdistance}"
You have more options in "How good is my method of embedding version numbers into my application using Mercurial hooks?"
version_gen.sh with:
hg parent --template "r{node|short}_{date|shortdate}" > version.num
In the makefile, make sure version_gen.sh is run before version.num is used to set the version parameter.
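As a sketch of how version_gen.sh could feed the NetBeans resource bundle directly, it might write a generated properties file that the about box loads; the file path and key name here are illustrative assumptions:
#!/bin/sh
# version_gen.sh (sketch): write the Mercurial revision into a generated
# properties file that the about box can read as a resource bundle.
REV=$(hg parent --template "r{node|short}_{date|shortdate}")
printf 'application.version=%s\n' "$REV" > src/myapp/Version.properties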
On Windows, MercurialRev ("SubWCRev for Mercurial") may also be useful. It replaces revision information in a tagged text file:
MercurialRev <SourceFile> <DestinationFile> <RepositoryPath>
Tags:
<$HG:REV_NUM$>
<$HG:REV_LMOD_N$>
<$HG:REV_LMOD_P$>
<$HG:REV_ID$>
<$HG:BRANCH$>
<$HG:TAG$>

Is there a way to keep Hudson / Jenkins configuration files in source control?

I am new to Hudson / Jenkins and was wondering if there is a way to check in Hudson's configuration files to source control.
Ideally I want to be able to click some button in the UI that says 'save configuration' and have the Hudson configuration files checked in to source control.
Most helpful Answer
There is a plugin called SCM Sync configuration plugin.
Original Answer
Have a look at my answer to a similar question. The basic idea is to use the filesystem-scm-plugin to detect changes to the XML files. The second part would be committing the changes to SVN.
EDIT: If you find a way to determine the user for a change, let us know.
EDIT 2011-01-10: Meanwhile there is a new plugin: SCM Sync configuration plugin. Currently it only works with Subversion and Git, but support for more repositories is planned. I have been using it since version 0.0.3 and it has worked well so far.
Note that Vogella has a recent (January 2014, compared to the OP's January 2010 question) and different take on this.
Consider that the SCM Sync configuration plugin can generate a lot of commits.
So, instead of relying on a plugin and an automated process, he manages the same feature manually:
Storing the Job information of Jenkins in Git
I found the number of commits a bit overwhelming, so I decided to control the commits manually and to save only the job information, not the whole Jenkins configuration.
For this, switch into your Jenkins jobs directory (Ubuntu: /var/lib/jenkins/jobs) and run the "git init" command.
I created the following .gitignore file to store only the job information:
builds/
workspace/
lastStable
lastSuccessful
nextBuildNumber
modules/
*.log
Now you can add and commit changes at your own will.
And if you add another remote to your Git repository you can push your configuration to another server.
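A hedged example of that manual flow (the remote URL is a placeholder):
cd /var/lib/jenkins/jobs
git init
# Create the .gitignore shown above before adding files.
git add .
git commit -m "Import Jenkins job configuration"
# Optionally push to another server:
git remote add origin git@example.com:jenkins-jobs.git
git push -u origin master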
Alberto actually recommends also adding (in $JENKINS_HOME):
Jenkins' own config (config.xml),
the Jenkins plugin configs (hudson*.xml), and
the user configs (users/*/config.xml).
To manually manage your configuration with Git, the following .gitignore file may be helpful.
# Miscellaneous Hudson litter
*.log
*.tmp
*.old
*.bak
*.jar
*.json
# Generated Hudson state
/.owner
/secret.key
/queue.xml
/fingerprints/
/shelvedProjects/
/updates/
# Tools that Hudson manages
/tools/
# Extracted plugins
/plugins/*/
# Job state
builds/
workspace/
lastStable
lastSuccessful
nextBuildNumber
See this GitHub Gist and this blog post for more details.
There is a new SCM Sync Configuration plug-in which does exactly what you are looking for.
The SCM Sync Configuration Hudson plugin is aimed at two main features:
keeping your Hudson config.xml (and other resource) files in sync with an SCM repository, and
tracking the changes (and their author) made to every file, with commit messages.
I haven't actually tried this yet, but it looks promising.
You can find configuration files in Jenkins home folder (e.g. /var/lib/jenkins).
To keep them in a VCS, first log in as Jenkins (sudo su - jenkins) and set its Git identity:
git config --global user.name "Jenkins"
git config --global user.email "jenkins@example.com"
Then initialize, add and commit the basic files such as:
git init
git add config.xml jobs/ .gitconfig
git commit -m'Adds Jenkins config files' -a
Also consider creating a .gitignore file with the following patterns to ignore (customize as needed):
# Git untracked files to ignore.
# Cache.
.cache/
# Fingerprint records.
fingerprints/
# Working directories.
workspace/
# Secret files.
secrets/
secret.*
*.enc
*.key
users/
id_rsa
# Plugins.
plugins/
# State files.
*.state
# Job state files.
builds/
lastStable
lastSuccessful
nextBuildNumber
# Updates.
updates/
# Hidden files.
.*
# Except git config files.
!.git*
!.ssh/
# User content.
userContent/
# Log files.
logs/
*.log
# Miscellaneous litter
*.tmp
*.old
*.bak
*.jar
*.json
*.lastExecVersion
Then add it: git add .gitignore.
When done, you can add job config files, e.g.
shopt -s globstar
git add **/config.xml
git commit -m'Added job config files' -a
Finally add and commit any other files if needed, then push it to the remote repository where you want to keep the config files.
When Jenkins files are updated, you need to reload them (Reload Configuration from Disk) or run reload-configuration from Jenkins CLI.
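For instance, the reload can be scripted with the Jenkins CLI (the server URL is illustrative):
# Reload the on-disk configuration without restarting Jenkins.
java -jar jenkins-cli.jar -s http://localhost:8080/ reload-configuration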
A more accurate .gitignore, inspired by the reply from nepa:
*
!.gitignore
!/jobs/
!/jobs/*/
/jobs/*/*
!/jobs/*/config.xml
!/users/
!/users/*/
/users/*/*
!/users/*/config.xml
!/*.xml
It ignores everything except for .xml config files and .gitignore itself.
(The difference from nepa's .gitignore is that it doesn't "unignore" all top-level directories (!*/) such as logs/, cache/, etc.)
The way I prefer is to exclude everything in the Jenkins home folder except the configuration files you really want to be in your VCS. Here is the .gitignore file I use:
*
!.gitignore
!/jobs/*/*.xml
!/*.xml
!/users/*/config.xml
!*/
This ignores everything (*) except (!) .gitignore itself, the jobs/projects, the plugin configs, and the other important user configuration files.
It's also worth considering whether to include the plugins folder; annoyingly, updated plugins would need to be included as well...
Basically this solution makes future Jenkins/Hudson updates easier because new files aren't automatically in scope. You get just what you really want on the screen.
The answer from Mark (https://stackoverflow.com/a/4066654/142207) should work for SVN and Git (although the Git configuration did not work for me).
But if you need it to work with a Mercurial repo, create a job with the following script:
hg remove -A || true
hg add ../../config.xml
hg add ../../*/config.xml
if [ ! -z "`hg status -admrn`" ]; then
hg commit -m "Scheduled commit" -u fill_in_the@blank.com
hg push
fi
I've written a plugin that lets you check your Jenkins instructions into source control. Just add a .jenkins.yml file with the contents:
script:
- make
- make test
and Jenkins will do it.
I checked in Hudson entirely; you could use this as a starting point: https://github.com/morkeleb/continuous-delivery-with-hudson
There are benefits to keeping the entire Hudson setup in Git. All config changes are logged, and you can test the setup quite easily on one machine and then update the other machine(s) using git pull.
We used this as a boilerplate for our Hudson continuous delivery setup at work.
Regards
Morten

How can I get a complete version from TortoiseHg

I am using TortoiseHg as my source control for developing a CMS project written in .NET/C#. I don't know how I can get a whole, complete version from my source repository. Is it possible to get a version for a specified date?
Thank you.
The command hg update will update your working directory to any prior version of your choice.
The command hg archive will provide you with a zipfile or tarball representing any point in history.
For both commands you can specify your exact revision using the -r argument.
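For example, to get a snapshot for a specified date or revision (the revision name and output path are illustrative):
# Update the working copy to the tipmost revision on or before a date:
hg update --date "<2013-04-30"
# Or export a tarball of a specific revision without touching the working copy:
hg archive -r 1.0 ../myproject-1.0.tar.gz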