Gradle task to write hg revision to file - version-control

Is there a simple way to write the Mercurial revision (or the output of a similar external command) to a file in a Gradle task?
I'm not yet Groovy/Gradle conversant, but my current effort looks like this:
task versionInfo(type: Exec) {
    commandLine 'hg id -i -b -t'
    ext.versionfile = new File('bin/$baseName-buildinfo.properties')
    doLast {
        versionfile.text = 'build.revision=' + standardOutput.toString()
    }
}

There are two issues with this build script:
the command line needs to be split; Gradle is trying to execute a single binary named hg id -i -b -t instead of hg with the arguments id, -i, -b and -t
the standard output needs to be captured; you can make it a ByteArrayOutputStream to be read later
Try this:
task versionInfo(type: Exec) {
    commandLine 'hg id -i -b -t'.split()
    // double-quoted so Groovy interpolates $baseName
    ext.versionfile = new File("bin/$baseName-buildinfo.properties")
    standardOutput = new ByteArrayOutputStream()
    doLast {
        versionfile.text = 'build.revision=' + standardOutput.toString()
    }
}
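One gotcha worth noting: hg's output ends with a trailing newline, so you may want to trim it before writing the property. An illustrative tweak of the doLast block:
doLast {
    // strip the trailing newline that hg id prints
    versionfile.text = 'build.revision=' + standardOutput.toString().trim()
}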

Here is a slightly different approach, which uses javahg to get the revision and adds a writeRevisionToFile task.
I wrote a brief post about it on my blog: Gradle - Get Hg Mercurial revision.
import com.aragost.javahg.Changeset
import com.aragost.javahg.Repository
import com.aragost.javahg.commands.ParentsCommand

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'com.aragost.javahg:javahg:0.4'
    }
}

task writeRevisionToFile << {
    new File(projectDir, "file-with-revision.txt").text = scmRevision
}

String getHgRevision() {
    def repo = Repository.open(projectDir)
    def parentsCommand = new ParentsCommand(repo)
    List<Changeset> changesets = parentsCommand.execute()
    if (changesets == null || changesets.size() != 1) {
        def message = "Exactly one parent was expected. " + changesets
        throw new Exception(message)
    }
    return changesets[0].node
}

ext {
    scmRevision = getHgRevision()
}
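On newer Gradle versions (5+), where the deprecated << task operator has been removed, the same task can be declared with doLast; a minimal sketch, assuming the same scmRevision property:
task writeRevisionToFile {
    doLast {
        new File(projectDir, "file-with-revision.txt").text = scmRevision
    }
}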

Related

Output file not created when running an R command in a Nextflow file?

I am trying to run a nextflow pipeline but the output file is not created.
The main.nf file looks like this:
#!/usr/bin/env nextflow

nextflow.enable.dsl=2

process my_script {
    """
    Rscript script.R
    """
}

workflow {
    my_script
}
In my nextflow.config I have:
process {
    executor = 'k8s'
    container = 'rocker/r-ver:4.1.3'
}
The script.R looks like this:
FUN <- readRDS("function.rds");
input = readRDS("input.rds");
output = FUN(
    singleCell_data_input = input[[1]], savePath = input[[2]], tmpDirGC = input[[3]]
);
saveRDS(output, "output.rds")
After running nextflow run main.nf the output.rds is not created
Nextflow processes run independently and are isolated from each other, each inside its own working directory. For your script to be able to find the required input files, these must be localized inside the process working directory. This should be done by defining an input block and declaring the files using the path qualifier, for example:
params.function_rds = './function.rds'
params.input_rds = './input.rds'

process my_script {

    input:
    path my_function_rds
    path my_input_rds

    output:
    path "output.rds"

    """
    #!/usr/bin/env Rscript

    FUN <- readRDS("${my_function_rds}");
    input = readRDS("${my_input_rds}");
    output = FUN(
        singleCell_data_input=input[[1]], savePath=input[[2]], tmpDirGC=input[[3]]
    );
    saveRDS(output, "output.rds")
    """
}

workflow {
    function_rds = file( params.function_rds )
    input_rds = file( params.input_rds )

    my_script( function_rds, input_rds )

    my_script.out.view()
}
In the same way, the script itself needs to be localized inside the process working directory. To avoid specifying an absolute path to your R script (which would make your workflow not portable at all), it's possible to simply embed your code, making sure to specify the Rscript shebang. This works because process scripts are not limited to Bash.
Another way would be to make your R script executable and move it into a directory called bin in the root directory of your project repository (i.e. the same directory as your main.nf Nextflow script). Nextflow automatically adds this folder to the $PATH environment variable, so your script becomes automatically accessible to each of your pipeline processes. For this to work, you'd need some way to pass in the input files as command-line arguments. For example:
params.function_rds = './function.rds'
params.input_rds = './input.rds'

process my_script {

    input:
    path my_function_rds
    path my_input_rds

    output:
    path "output.rds"

    """
    script.R "${my_function_rds}" "${my_input_rds}" output.rds
    """
}

workflow {
    function_rds = file( params.function_rds )
    input_rds = file( params.input_rds )

    my_script( function_rds, input_rds )

    my_script.out.view()
}
And your R script might look like:
#!/usr/bin/env Rscript
args <- commandArgs(trailingOnly = TRUE)
FUN <- readRDS(args[1]);
input = readRDS(args[2]);
output = FUN(
    singleCell_data_input=input[[1]], savePath=input[[2]], tmpDirGC=input[[3]]
);
saveRDS(output, args[3])

Examples of using SCons with knitr

Are there minimal, or even larger, working examples of using SCons and knitr to generate reports from .Rmd files?
Knitting a cleaning_session.Rmd file from the command line (bash shell) to derive an .html file may be done via:
Rscript -e "library(knitr); knit('cleaning_session.Rmd')"
In this example, Rscript and its instructions are fed to a Makefile:

RMDFILE=test
html :
	Rscript -e "require(knitr); require(markdown); knit('$(RMDFILE).rmd', '$(RMDFILE).md'); markdownToHTML('$(RMDFILE).md', '$(RMDFILE).html', options=c('use_xhtml', 'base64_images')); browseURL(paste('file://', file.path(getwd(),'$(RMDFILE).html'), sep=''))"
In this answer https://stackoverflow.com/a/10945832/1172302, there is reportedly a solution using SCons. Yet I did not test it enough to make it work for me. Essentially, it would be awesome to have something like the example presented at https://tex.stackexchange.com/a/26573/8272.
[Updated] One working example is an SConstruct file:
import os
environment = Environment(ENV=os.environ)

# define a `knitr` builder
builder = Builder(action = '/usr/local/bin/knit $SOURCE -o $TARGET',
                  src_suffix='Rmd')
# add builders as "Knit", "RMD"
environment.Append( BUILDERS = {'Knit' : builder} )

# define an `rmarkdown::render()` builder
builder = Builder(action = '/usr/bin/Rscript -e "rmarkdown::render(input=\'$SOURCE\', output_file=\'$TARGET\')"',
                  src_suffix='Rmd')
environment.Append( BUILDERS = {'RMD' : builder} )

# define source (and target files -- currently useless, since not defined above!)
# main cleaning session code
environment.RMD(source='cleaning_session.Rmd', target='cleaning_session.html')
# documentation of the Cleaning Process
environment.Knit(source='Cleaning_Process.Rmd', target='Cleaning_Process.html')
# documentation of data
environment.Knit(source='Code_Book.Rmd', target='Code_Book.html')
The first builder calls the custom script called knit, which in turn takes care of the target file/extension, here being cleaning_session.html. The src_suffix parameter is likely not needed at all in this example.
The second builder calls Rscript -e "rmarkdown::render(input='$SOURCE', output_file='$TARGET')".
The existence of $TARGETs (as in the example at Command wrapper) ensures SCons won't repeat work if a target file already exists.
The custom script (whose source I can't retrieve currently) is:
#!/usr/bin/env Rscript
local({
    p = commandArgs(TRUE)
    if (length(p) == 0L || any(c('-h', '--help') %in% p)) {
        message('usage: knit input [input2 input3] [-n] [-o output output2 output3]
-h, --help to print help messages
-n, --no-convert do not convert tex to pdf, markdown to html, etc
-o output filename(s) for knit()')
        q('no')
    }
    library(knitr)
    o = match('-o', p)
    if (is.na(o)) output = NA else {
        output = tail(p, length(p) - o)
        p = head(p, o - 1L)
    }
    nc = c('-n', '--no-convert')
    knit_fun = if (any(nc %in% p)) {
        p = setdiff(p, nc)
        knit
    } else {
        if (length(p) == 0L) stop('no input file provided')
        if (grepl('\\.(R|S)(nw|tex)$', p[1])) {
            function(x, ...) knit2pdf(x, ..., clean = TRUE)
        } else {
            if (grepl('\\.R(md|markdown)$', p[1])) knit2html else knit
        }
    }
    mapply(knit_fun, p, output = output, MoreArgs = list(envir = globalenv()))
})
All that is necessary now is to run scons.

How can I export Jira issues to BitBucket

I've just moved my project's code from java.net to BitBucket, but my Jira issue tracking is still hosted on java.net. Although BitBucket does have some options for linking to an external issue tracker, I don't think I can use it for java.net, not least because I don't have the admin privileges needed to install the DVCS connector.
So I thought an alternative option would be to export and then import the issues into the BitBucket issue tracker. Is that possible?
Progress so far
So I tried following the steps in both informative answers below using OSX, but I hit a problem: I was rather confused about what the script would actually be called, because the answers talk about export.py but no script exists with that name, so I renamed the one I downloaded.
sudo easy_install pip (OSX)
pip install jira
pip install configparser
easy_install -U setuptools
Go to https://bitbucket.org/reece/rcore, select the downloads tab, download the zip and unzip it, and rename the folder to reece (for some reason git clone https://bitbucket.org/reece/rcore fails with an error)
cd reece/rcore
Save script as export.py in rcore subfolder
Replace iteritems with items in export.py
Replace iteritems with items in types/immutabledict.py
Create .config in rcore folder
Create .config/jira-issues-move-to-bitbucket.conf containing
jira-username=paultaylor
jira-hostname=https://java.net/jira/browse/JAUDIOTAGGER
jira-password=password
Run python export.py --jira-project jaudiotagger
gives:
macbook:rcore paul$ python export.py --jira-project jaudiotagger
Traceback (most recent call last):
  File "export.py", line 24, in <module>
    import configparser
ImportError: No module named configparser
I needed to run pip install as root, so I did
sudo pip install configparser
and that worked, but now
python export.py --jira-project jaudiotagger
gives
  File "export.py", line 35, in <module>
    from jira.client import JIRA
ImportError: No module named jira.client
You can import issues into BitBucket, they just need to be in the appropriate format. Fortunately, Reece Hart has already written a Python script to connect to a Jira instance and export the issues.
To get the script to run I had to install the Jira Python package as well as the latest version of rcore (if you use pip you get an incompatible previous version, so you have to get the source). I also had to replace all instances of iteritems with items in the script and in rcore/types/immutabledict.py to make it work with Python 3. You will also need to fill in the dictionaries (priority_map, person_map, etc) with the values your project uses. Finally, you need a config file to exist with the connection info (see comments at the top of the script).
The basic command line usage is export.py --jira-project <project>
Once you've got the data exported, see the instructions for importing issues to BitBucket
#!/usr/bin/env python
"""extract issues from JIRA and export to a bitbucket archive
See:
https://confluence.atlassian.com/pages/viewpage.action?pageId=330796872
https://confluence.atlassian.com/display/BITBUCKET/Mark+up+comments
https://bitbucket.org/tutorials/markdowndemo/overview
2014-04-12 08:26 Reece Hart <reecehart@gmail.com>
Requires a file ~/.config/jira-issues-move-to-bitbucket.conf
with content like
[default]
jira-username=some.user
jira-hostname=somewhere.jira.com
jira-password=ur$pass
"""
import argparse
import collections
import configparser
import glob
import itertools
import json
import logging
import os
import pprint
import re
import sys
import zipfile
from jira.client import JIRA
from rcore.types.immutabledict import ImmutableDict
priority_map = {
    'Critical (P1)': 'critical',
    'Major (P2)': 'major',
    'Minor (P3)': 'minor',
    'Nice (P4)': 'trivial',
}
person_map = {
    'reece.hart': 'reece',
    # etc
}
issuetype_map = {
    'Improvement': 'enhancement',
    'New Feature': 'enhancement',
    'Bug': 'bug',
    'Technical task': 'task',
    'Task': 'task',
}
status_map = {
    'Closed': 'resolved',
    'Duplicate': 'duplicate',
    'In Progress': 'open',
    'Open': 'new',
    'Reopened': 'open',
    'Resolved': 'resolved',
}
def parse_args(argv):
    def sep_and_flatten(l):
        # split comma-sep elements and flatten list
        # e.g., ['a','b','c,d'] -> set('a','b','c','d')
        return list( itertools.chain.from_iterable(e.split(',') for e in l) )

    cf = configparser.ConfigParser()
    cf.readfp(open(os.path.expanduser('~/.config/jira-issues-move-to-bitbucket.conf'),'r'))

    ap = argparse.ArgumentParser(
        description = __doc__
    )
    ap.add_argument(
        '--jira-hostname', '-H',
        default = cf.get('default','jira-hostname',fallback=None),
        help = 'host name of Jira instances (used for url like https://hostname/, e.g., "instancename.jira.com")',
    )
    ap.add_argument(
        '--jira-username', '-u',
        default = cf.get('default','jira-username',fallback=None),
    )
    ap.add_argument(
        '--jira-password', '-p',
        default = cf.get('default','jira-password',fallback=None),
    )
    ap.add_argument(
        '--jira-project', '-j',
        required = True,
        help = 'project key (e.g., JRA)',
    )
    ap.add_argument(
        '--jira-issues', '-i',
        action = 'append',
        default = [],
        help = 'issue id (e.g., JRA-9); multiple and comma-separated okay; default = all in project',
    )
    ap.add_argument(
        '--jira-issues-file', '-I',
        help = 'file containing issue ids (e.g., JRA-9)'
    )
    ap.add_argument(
        '--jira-components', '-c',
        action = 'append',
        default = [],
        help = 'components criterion; multiple and comma-separated okay; default = all in project',
    )
    ap.add_argument(
        '--existing', '-e',
        action = 'store_true',
        default = False,
        help = 'read existing archive (from export) and merge new issues'
    )
    opts = ap.parse_args(argv)
    opts.jira_components = sep_and_flatten(opts.jira_components)
    opts.jira_issues = sep_and_flatten(opts.jira_issues)
    return opts
def link(url,text=None):
    return "[{text}]({url})".format(url=url,text=url if text is None else text)

def reformat_to_markdown(desc):
    def _indent4(mo):
        i = "\n    "
        return i + mo.group(1).replace("\n",i)
    def _repl_mention(mo):
        return "@" + person_map[mo.group(1)]
    #desc = desc.replace("\r","")
    desc = re.sub("{noformat}(.+?){noformat}",_indent4,desc,flags=re.DOTALL+re.MULTILINE)
    desc = re.sub(opts.jira_project+r"-(\d+)",r"issue #\1",desc)
    desc = re.sub(r"\[~([^]]+)\]",_repl_mention,desc)
    return desc

def fetch_issues(opts,jcl):
    jql = [ 'project = ' + opts.jira_project ]
    if opts.jira_components:
        jql += [ ' OR '.join([ 'component = '+c for c in opts.jira_components ]) ]
    if opts.jira_issues:
        jql += [ ' OR '.join([ 'issue = '+i for i in opts.jira_issues ]) ]
    jql_str = ' AND '.join(["("+q+")" for q in jql])
    logging.info('executing query ' + jql_str)
    return jcl.search_issues(jql_str,maxResults=500)
def jira_issue_to_bb_issue(opts,jcl,ji):
    """convert a jira issue to a dictionary with values appropriate for
    POSTing as a bitbucket issue"""
    logger = logging.getLogger(__name__)
    content = reformat_to_markdown(ji.fields.description) if ji.fields.description else ''
    if ji.fields.assignee is None:
        resp = None
    else:
        resp = person_map[ji.fields.assignee.name]
    reporter = person_map[ji.fields.reporter.name]
    jiw = jcl.watchers(ji.key)
    watchers = [ person_map[u.name] for u in jiw.watchers ] if jiw else []
    milestone = None
    if ji.fields.fixVersions:
        vnames = [ v.name for v in ji.fields.fixVersions ]
        milestone = vnames[0]
        if len(vnames) > 1:
            logger.warn("{ji.key}: bitbucket issues may have only 1 milestone (JIRA fixVersion); using only first ({f}) and ignoring rest ({r})".format(
                ji=ji, f=milestone, r=",".join(vnames[1:])))
    issue_id = extract_issue_number(ji.key)
    bbi = {
        'status': status_map[ji.fields.status.name],
        'priority': priority_map[ji.fields.priority.name],
        'kind': issuetype_map[ji.fields.issuetype.name],
        'content_updated_on': ji.fields.created,
        'voters': [],
        'title': ji.fields.summary,
        'reporter': reporter,
        'component': None,
        'watchers': watchers,
        'content': content,
        'assignee': resp,
        'created_on': ji.fields.created,
        'version': None, # ?
        'edited_on': None,
        'milestone': milestone,
        'updated_on': ji.fields.updated,
        'id': issue_id,
    }
    return bbi

def jira_comment_to_bb_comment(opts,jcl,jc):
    bbc = {
        'content': reformat_to_markdown(jc.body),
        'created_on': jc.created,
        'id': int(jc.id),
        'updated_on': jc.updated,
        'user': person_map[jc.author.name],
    }
    return bbc

def extract_issue_number(jira_issue_key):
    return int(jira_issue_key.split('-')[-1])

def jira_key_to_bb_issue_tag(jira_issue_key):
    return 'issue #' + str(extract_issue_number(jira_issue_key))

def jira_link_text(jk):
    return link("https://invitae.jira.com/browse/"+jk,jk) + " (Invitae access required)"
if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)

    opts = parse_args(sys.argv[1:])
    dir_name = opts.jira_project
    if opts.jira_components:
        dir_name += '-' + ','.join(opts.jira_components)

    if opts.jira_issues_file:
        issues = [i.strip() for i in open(opts.jira_issues_file,'r')]
        logger.info("added {n} issues from {opts.jira_issues_file} to issues list".format(n=len(issues),opts=opts))
        opts.jira_issues += issues

    opts.dir = os.path.join('/','tmp',dir_name)
    opts.att_rel_dir = 'attachments'
    opts.att_abs_dir = os.path.join(opts.dir,opts.att_rel_dir)
    opts.json_fn = os.path.join(opts.dir,'db-1.0.json')
    if not os.path.isdir(opts.att_abs_dir):
        os.makedirs(opts.att_abs_dir)

    opts.jira_issues = list(set(opts.jira_issues)) # distinctify

    jcl = JIRA({'server': 'https://{opts.jira_hostname}/'.format(opts=opts)},
               basic_auth=(opts.jira_username,opts.jira_password))

    if opts.existing:
        issues_db = json.load(open(opts.json_fn,'r'))
        existing_ids = [ i['id'] for i in issues_db['issues'] ]
        logger.info("read {n} issues from {fn}".format(n=len(existing_ids),fn=opts.json_fn))
    else:
        issues_db = dict()
        issues_db['meta'] = {
            'default_milestone': None,
            'default_assignee': None,
            'default_kind': "bug",
            'default_component': None,
            'default_version': None,
        }
        issues_db['attachments'] = []
        issues_db['comments'] = []
        issues_db['issues'] = []
        issues_db['logs'] = []

    issues_db['components'] = [ {'name':v.name} for v in jcl.project_components(opts.jira_project) ]
    issues_db['milestones'] = [ {'name':v.name} for v in jcl.project_versions(opts.jira_project) ]
    issues_db['versions'] = issues_db['milestones']

    # bb_issue_map: bb issue # -> bitbucket issue
    bb_issue_map = ImmutableDict( (i['id'],i) for i in issues_db['issues'] )

    # jk_issue_map: jira key -> bitbucket issue
    # contains only items migrated from JIRA (i.e., not preexisting issues with --existing)
    jk_issue_map = ImmutableDict()

    # issue_links is a dict of dicts of lists, using JIRA keys
    # e.g., links['CORE-135']['depends on'] = ['CORE-137']
    issue_links = collections.defaultdict(lambda: collections.defaultdict(lambda: []))

    issues = fetch_issues(opts,jcl)
    logger.info("fetch {n} issues from JIRA".format(n=len(issues)))

    for ji in issues:
        # Pfft. Need to fetch the issue again due to bug in JIRA.
        # See https://bitbucket.org/bspeakmon/jira-python/issue/47/, comment on 2013-10-01 by ssonic
        ji = jcl.issue(ji.key,expand="attachments,comments")

        # create the issue
        bbi = jira_issue_to_bb_issue(opts,jcl,ji)
        issues_db['issues'] += [bbi]

        bb_issue_map[bbi['id']] = bbi
        jk_issue_map[ji.key] = bbi
        issue_links[ji.key]['imported from'] = [jira_link_text(ji.key)]

        # add comments
        for jc in ji.fields.comment.comments:
            bbc = jira_comment_to_bb_comment(opts,jcl,jc)
            bbc['issue'] = bbi['id']
            issues_db['comments'] += [bbc]

        # add attachments
        for ja in ji.fields.attachment:
            att_rel_path = os.path.join(opts.att_rel_dir,ja.id)
            att_abs_path = os.path.join(opts.att_abs_dir,ja.id)
            if not os.path.exists(att_abs_path):
                open(att_abs_path,'w').write(ja.get())
                logger.info("Wrote {att_abs_path}".format(att_abs_path=att_abs_path))
            bba = {
                "path": att_rel_path,
                "issue": bbi['id'],
                "user": person_map[ja.author.name],
                "filename": ja.filename,
            }
            issues_db['attachments'] += [bba]

        # parent-child is task-subtask
        if hasattr(ji.fields,'parent'):
            issue_links[ji.fields.parent.key]['subtasks'].append(jira_key_to_bb_issue_tag(ji.key))
            issue_links[ji.key]['parent task'].append(jira_key_to_bb_issue_tag(ji.fields.parent.key))

        # add links
        for il in ji.fields.issuelinks:
            if hasattr(il,'outwardIssue'):
                issue_links[ji.key][il.type.outward].append(jira_key_to_bb_issue_tag(il.outwardIssue.key))
            elif hasattr(il,'inwardIssue'):
                issue_links[ji.key][il.type.inward].append(jira_key_to_bb_issue_tag(il.inwardIssue.key))

        logger.info("migrated issue {ji.key}: {ji.fields.summary} ({components})".format(
            ji=ji,components=','.join(c.name for c in ji.fields.components)))

    # append links section to content
    # this section shows both task-subtask and "issue link" relationships
    for src,dstlinks in issue_links.iteritems():
        if src not in jk_issue_map:
            logger.warn("issue {src}, with issue_links, not in jk_issue_map; skipping".format(src=src))
            continue
        links_block = "Links\n=====\n"
        for desc,dsts in sorted(dstlinks.iteritems()):
            links_block += "* **{desc}**: {links} \n".format(desc=desc,links=", ".join(dsts))
        if jk_issue_map[src]['content']:
            jk_issue_map[src]['content'] += "\n\n" + links_block
        else:
            jk_issue_map[src]['content'] = links_block

    id_counts = collections.Counter(i['id'] for i in issues_db['issues'])
    dupes = [ k for k,cnt in id_counts.iteritems() if cnt>1 ]
    if dupes:
        raise RuntimeError("{n} issue ids appear more than once from existing {opts.json_fn}".format(
            n=len(dupes),opts=opts))

    json.dump(issues_db,open(opts.json_fn,'w'))
    logger.info("wrote {n} issues to {opts.json_fn}".format(n=len(id_counts),opts=opts))

    # write zipfile
    os.chdir(opts.dir)
    with zipfile.ZipFile(opts.dir + '.zip','w') as zf:
        for fn in ['db-1.0.json']+glob.glob('attachments/*'):
            zf.write(fn)
            logger.info("added {fn} to archive".format(fn=fn))
NOTE: I'm writing a new answer because writing this in a comment would be horrible, but most of the credit goes to @Turch's answer.
My steps (in OSX and Debian machines, both worked fine):
apt-get install python-pip (Debian) or sudo easy_install pip (OSX)
pip install jira
pip install configparser
easy_install -U setuptools (not sure if really needed)
Download or clone the source code from https://bitbucket.org/reece/rcore/ in your home folder, for example. Note: don't download using pip, it will get the 0.0.2 version and you need the 0.0.3.
Download the Python script created by Reece, mentioned by @Turch, and place it inside the rcore folder.
Follow the instructions by @Turch: I also had to replace all instances of iteritems with items in the script and in rcore/types/immutabledict.py to make it work with Python 3. You will also need to fill in the dictionaries (priority_map, person_map, etc) with the values your project uses. Finally, you need a config file to exist with the connection info (see comments at the top of the script). Note: I used a hostname like jira.domain.com (no http or https).
(This change did the trick for me) I had to change part of the line 250 from 'https://{opts.jira_hostname}/' to 'http://{opts.jira_hostname}/'
To finish, run the script as @Turch mentioned: The basic command line usage is export.py --jira-project <project>
The file was placed in /tmp/<project>.zip for me.
The file was perfectly accepted in the BitBucket importer today.
Hooray for Reece and Turch! Thanks guys!

Gradle plugin copy file from plugin jar

I'm creating my first gradle plugin. I'm trying to copy a file from the distribution jar into a directory I've created at the project. Although the file exists inside the jar, I can't copy it to the directory.
This is my task code:
import org.gradle.api.DefaultTask;
import org.gradle.api.tasks.TaskAction;

class InitTask extends DefaultTask {
    File baseDir;

    private void copyEnvironment(File environments) {
        String resource = getClass().getResource("/environments/development.properties").getFile();
        File input = new File(resource);
        File output = new File(environments, "development.properties");
        try {
            copyFile(input, output);
        }
        catch (IOException e) {
            e.printStackTrace();
        }
    }

    void copyFile(File sourceFile, File destFile) {
        destFile << sourceFile.text
    }

    @TaskAction
    void createDirectories() {
        logger.info "Creating directory."
        File environments = new File(baseDir, "environments");
        File scripts = new File(baseDir, "scripts");
        File drivers = new File(baseDir, "drivers");
        [environments, scripts, drivers].each {
            it.mkdirs();
        }
        copyEnvironment(environments);
        logger.info "Directory created at '${baseDir.absolutePath}'."
    }
}
And this is the error I'm getting:
:init
java.io.FileNotFoundException: file:/path-to-jar/MyJar.jar!/environments/development.properties (No such file or directory)
    at java.io.FileInputStream.open(Native Method)
    at java.io.FileInputStream.<init>(FileInputStream.java:120)
    at groovy.util.CharsetToolkit.<init>(CharsetToolkit.java:69)
    at org.codehaus.groovy.runtime.DefaultGroovyMethods.newReader(DefaultGroovyMethods.java:15706)
    at org.codehaus.groovy.runtime.DefaultGroovyMethods.getText(DefaultGroovyMethods.java:14754)
    at org.codehaus.groovy.runtime.dgm$352.doMethodInvoke(Unknown Source)
    at org.codehaus.groovy.reflection.GeneratedMetaMethod$Proxy.doMethodInvoke(GeneratedMetaMethod.java:70)
    at groovy.lang.MetaClassImpl$GetBeanMethodMetaProperty.getProperty(MetaClassImpl.java:3465)
    at org.codehaus.groovy.runtime.callsite.GetEffectivePojoPropertySite.getProperty(GetEffectivePojoPropertySite.java:61)
    at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGetProperty(AbstractCallSite.java:227)
    at br.com.smartcoders.migration.tasks.InitTask.copyFile(InitTask.groovy:29)
Just to emphasize: development.properties is inside the environments directory inside MyJar.jar.
getClass().getResource() returns a URL. To access that URL, you'll have to read it directly (e.g. with url.text) rather than first converting it to a String/File. Or you can use getClass().getResourceAsStream().text, which is probably more accurate. In both cases you can optionally specify the file encoding.
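For example, a minimal sketch of the copyEnvironment method along those lines (same resource path as above, reading from the stream instead of going through a File) could look like:
private void copyEnvironment(File environments) {
    File output = new File(environments, "development.properties")
    // read the resource straight out of the plugin jar; no File conversion involved
    getClass().getResourceAsStream("/environments/development.properties").withStream { input ->
        output.text = input.text
    }
}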
Kotlin DSL answer!
For cases like this it is good to have extensions:
fun Any.getResource(filename: String): File? {
    val input = this::class.java.classLoader.getResourceAsStream(filename) ?: return null
    val tempFile = File.createTempFile(
        filename.substringBeforeLast('.'),
        "." + filename.substringAfterLast('.')
    )
    tempFile.deleteOnExit()
    tempFile.writer().use { output ->
        input.bufferedReader().use { input ->
            output.write(input.readText())
        }
    }
    return tempFile
}
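A hypothetical usage sketch, assuming environments is the target directory created by the task above:
// copy a packaged default from the plugin jar into the project
val defaults = getResource("environments/development.properties")
    ?: error("resource not found in plugin jar")
defaults.copyTo(File(environments, "development.properties"), overwrite = true)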

How do you detect which files were manually merged in mercurial's history?

I'm part of a team newly using mercurial and we've identified that when merges occur there are many more errors in files that are manually merged. Is it possible from the mercurial logs (i.e. after someone has done the merge and pushed the merge changeset to the central repository) to detect which files were manually merged?
Note that I have no idea if this is foolproof. Also, it requires a copy of my as-of-yet-unfinished Mercurial library for .NET, probably only runs on Windows, and is kinda rough around the edges.
Note: I'm assuming that by "were manually merged", you mean "files Mercurial didn't automatically merge for us"
Such a file could still be merged somewhat or completely automatic by an external tool, but if the above assumption is good enough, read on.
However, what I did was effectively to run through all the merges in a test repository, re-doing each merge and asking Mercurial to use only its internal merge tool (which leaves files unresolved when it cannot merge them automatically), then reporting all unresolved files, cleaning up the merge, and moving on to the next merge changeset.
The library you need (only in source code form at the moment, I told you it was unfinished):
Mercurial.Net (my open source project, hosted on CodePlex)
I've attached a zip file at the bottom with everything, test repository, script, and binary copy of library.
The script (I used LINQPad to write and test this, output from a test-repository follows):
void Main()
{
    var repo = new Repository(@"c:\temp\repo");
    var mergeChangesets = repo.Log(new LogCommand()
        .WithAdditionalArgument("-r")
        .WithAdditionalArgument("merge()")).Reverse().ToArray();
    foreach (var merge in mergeChangesets)
    {
        Debug.WriteLine("analyzing merge #" + merge.RevisionNumber +
            " between revisions #" + merge.LeftParentRevision +
            " and #" + merge.RightParentRevision);

        // update to left parent
        repo.Update(merge.LeftParentHash);
        try
        {
            // perform merge with right parent
            var mergeCmd = new MergeCommand();
            mergeCmd.WithRevision = merge.RightParentHash;
            repo.Execute(mergeCmd);

            // get list of unresolved files
            var resolveCmd = new ResolveCommand();
            repo.Execute(resolveCmd);
            var unresolvedFiles = new List<string>();
            using (var reader = new StringReader(resolveCmd.RawStandardOutput))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                    if (line.StartsWith("U "))
                        unresolvedFiles.Add(line.Substring(2));
            }

            // report
            if (unresolvedFiles.Count > 0)
            {
                Debug.WriteLine("merge changeset #" + merge.RevisionNumber +
                    " between revisions #" + merge.LeftParentRevision +
                    " and #" + merge.RightParentRevision + " had " +
                    unresolvedFiles.Count + " unresolved file(s)");
                foreach (string filename in unresolvedFiles)
                {
                    Debug.WriteLine("    " + filename);
                }
            }
        }
        finally
        {
            // get repository back to proper state
            repo.Update(merge.LeftParentHash, new UpdateCommand().WithClean());
        }
    }
}
public class MergeCommand : MercurialCommandBase<MergeCommand>
{
    public MergeCommand()
        : base("merge")
    {
    }

    [NullableArgumentAttribute(NonNullOption = "--rev")]
    public RevSpec WithRevision
    {
        get;
        set;
    }

    public override IEnumerable<string> Arguments
    {
        get
        {
            foreach (var arg in base.Arguments)
                yield return arg;
            yield return "--config";
            yield return "ui.merge=internal:merge";
        }
    }

    protected override void ThrowOnUnsuccessfulExecution(int exitCode,
        string standardOutput, string standardErrorOutput)
    {
        if (exitCode != 0 && exitCode != 1)
            base.ThrowOnUnsuccessfulExecution(exitCode, standardOutput,
                standardErrorOutput);
    }
}
public class ResolveCommand : MercurialCommandBase<ResolveCommand>
{
    public ResolveCommand()
        : base("resolve")
    {
    }

    public override IEnumerable<string> Arguments
    {
        get
        {
            foreach (var arg in base.Arguments)
                yield return arg;
            yield return "--list";
        }
    }
}
The sample output:
analyzing merge #7 between revisions #5 and #6
analyzing merge #10 between revisions #9 and #8
merge changeset #10 between revisions #9 and #8 had 1 unresolved file(s)
test1.txt
Zip-file with everything (LINQPad script, Mercurial.Net assembly, go compile it if you don't trust me, and the test repository I executed it against above):
here: zip-file
BIG CAVEAT: Run this only in a clone of your repository! I won't be held responsible if this corrupts your repository, however unlikely I think that would be.
This is the shell-script variant of Lasse's answer:
#!/bin/bash
test "$1" = "--destroy-my-working-copy" || exit
hg log -r 'merge()' --template '{parents} {rev}\n' | sed -e 's/[0-9]*://g' | while read p1 p2 commit
do
    echo "-------------------------"
    echo examine $commit
    LC_ALL=C hg up -C -r $p1 >/dev/null
    LC_ALL=C hg merge -r $p2 --config ui.merge=internal:merge 2>&1 | grep failed
done
hg up -C -r tip > /dev/null
Example output:
> mergetest.sh --destroy-my-working-copy
-------------------------
examine 7
-------------------------
examine 11
-------------------------
examine 23
-------------------------
examine 31
-------------------------
examine 37
merging test.py failed!