Cake Task output log to file - cakebuild

I have a set of Tasks inside a build.cake file and I would like to capture the log output from the console into a log file. I know it's possible to use the OnError() function to write errors to a file, but I would like to output everything to a log file, not just errors.
Below is an example of the build.cake file.
#load "SomeTask.cake"
#load "SomeOtherTask.cake"
var target = Argument("target", "Default");
var someTask = Task("SomeTask")
    .Does(() =>
{
    SomeMethodInsideSomeTask();
});

var someOtherTask = Task("SomeOtherTask")
    .Does(() =>
{
    SomeOtherMethodInsideSomeOtherTask();
});

Task("Default")
    .IsDependentOn(someTask)
    .IsDependentOn(someOtherTask);
RunTarget(target);
N.B. The Tasks are not running any sort of MSBuild commands, so it's not possible to use MSBuildFileLogger.

How about piping stdout to a file, i.e.
./build.ps1 > log.txt

Have you heard about tee?
It reads standard input and writes it to both standard output and one or more files.
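For example (a minimal sketch, assuming the standard Cake bootstrappers; adjust script names and arguments to your setup):
./build.ps1 2>&1 | Tee-Object -FilePath build.log
or, with the bash bootstrapper:
./build.sh 2>&1 | tee build.log
Both variants write the full console output, including errors thanks to the 2>&1 redirection, to build.log while still printing it to the terminal.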

Related

Output file not created when running an R command in a Nextflow file?

I am trying to run a Nextflow pipeline, but the output file is not created.
The main.nf file looks like this:
#!/usr/bin/env nextflow

nextflow.enable.dsl=2

process my_script {

    """
    Rscript script.R
    """
}

workflow {
    my_script
}
In my nextflow.config I have:
process {
    executor = 'k8s'
    container = 'rocker/r-ver:4.1.3'
}
The script.R looks like this:
FUN <- readRDS("function.rds");
input = readRDS("input.rds");
output = FUN(
singleCell_data_input = input[[1]], savePath = input[[2]], tmpDirGC = input[[3]]
);
saveRDS(output, "output.rds")
After running nextflow run main.nf, the output.rds file is not created.
Nextflow processes run independently and in isolation from each other, each inside its own working directory. For your script to be able to find the required input files, these must be localized inside the process working directory. This should be done by defining an input block and declaring the files using the path qualifier, for example:
params.function_rds = './function.rds'
params.input_rds = './input.rds'

process my_script {

    input:
    path my_function_rds
    path my_input_rds

    output:
    path "output.rds"

    """
    #!/usr/bin/env Rscript

    FUN <- readRDS("${my_function_rds}");
    input = readRDS("${my_input_rds}");
    output = FUN(
        singleCell_data_input=input[[1]], savePath=input[[2]], tmpDirGC=input[[3]]
    );
    saveRDS(output, "output.rds")
    """
}

workflow {
    function_rds = file( params.function_rds )
    input_rds = file( params.input_rds )
    my_script( function_rds, input_rds )
    my_script.out.view()
}
In the same way, the script itself would need to be localized inside the process working directory. To avoid specifying an absolute path to your R script (which would make your workflow not portable at all), it's possible to simply embed your code, making sure to specify the Rscript shebang. This works because process scripts are not limited to Bash.
Another way would be to make your R script executable and move it into a directory called bin in the root directory of your project repository (i.e. the same directory as your main.nf Nextflow script). Nextflow automatically adds this folder to the $PATH environment variable, so your script becomes accessible to each of your pipeline processes. For this to work, you'd need some way to pass in the input files as command-line arguments. For example:
params.function_rds = './function.rds'
params.input_rds = './input.rds'

process my_script {

    input:
    path my_function_rds
    path my_input_rds

    output:
    path "output.rds"

    """
    script.R "${my_function_rds}" "${my_input_rds}" output.rds
    """
}

workflow {
    function_rds = file( params.function_rds )
    input_rds = file( params.input_rds )
    my_script( function_rds, input_rds )
    my_script.out.view()
}
And your R script might look like:
#!/usr/bin/env Rscript
args <- commandArgs(trailingOnly = TRUE)
FUN <- readRDS(args[1]);
input = readRDS(args[2]);
output = FUN(
singleCell_data_input=input[[1]], savePath=input[[2]], tmpDirGC=input[[3]]
);
saveRDS(output, args[3])
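With the params shown above, the pipeline could then be run with, for example (the paths are just the declared defaults and can be overridden on the command line):
nextflow run main.nf
nextflow run main.nf --function_rds /path/to/function.rds --input_rds /path/to/input.rds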

Powershell script not working when included in Jenkins Pipeline Script

What I am trying to achieve with Powershell is as follows:
Increment the Build Number in the AssemblyInfo.cs file on the Build Server. My script looks like the below right now; after over 100 iterations of different variations I am still unable to get it to work. The script works well in the PowerShell console, but when included in the Jenkins Pipeline script I get various errors that are proving hard to fix...
def getVersion (file) {
def result = powershell(script:"""Get-Content '${file}' |
Select-String '[0-9]+\\.[0-9]+\\.[0-9]+\\.' |
foreach-object{$_.Matches.Value}.${BUILD_NUMBER}""", returnStdout: true)
echo result
return result
}
...
powershell "(Get-Content ${files[0].path}).replace('[0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+',
${getVersion(files[0].path)})) } | Set-Content ${files[0].path}"
...
How about a Groovy approach (with Jenkins pipeline steps) instead of PowerShell:
def updateAssemblyVersion() {
    def files = findFiles(glob: '**/AssemblyInfo.cs')
    files.each {
        // read the file, bump only the last (build) component of the version, write it back
        def content = readFile file: it.path
        def modifiedContent = content.replaceAll(/([0-9]+\.[0-9]+\.[0-9]+\.)([0-9]+)/, "\$1${BUILD_NUMBER}")
        writeFile file: it.path, text: modifiedContent
    }
}
It will read all relevant files and replace only the build section of the version for every occurrence that matches the version regex.
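For illustration, the function could be called from a pipeline stage roughly like this (a sketch only; the stage name and surrounding pipeline are placeholders, and findFiles requires the Pipeline Utility Steps plugin):
pipeline {
    agent any
    stages {
        stage('Update assembly version') {
            steps {
                script {
                    updateAssemblyVersion()
                }
            }
        }
    }
}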

Setting output variable using newman / postman is getting cut off

I have output variables siteToDeploy and siteToStop. I am using Postman to run a test script against the IIS Administration API. In the test portion of one of the requests I am trying to set the Azure DevOps output variables. It's sort of working, but the variable value is getting cut off for some reason.
Here is the test script in postman:
console.log(pm.globals.get("siteName"))
var response = pm.response.json();
var startedSite = _.find(response.websites, function(o) { return o.name.indexOf(pm.globals.get("siteName")) > -1 && pm.globals.get("siteName") && o.status == 'started'});
var stoppedSite = _.find(response.websites, function(o) { return o.name.indexOf(pm.globals.get("siteName")) > -1 && o.status == 'stopped'});
if(stoppedSite && startedSite){
console.log('sites found');
console.log(stoppedSite.id)
console.log('##vso[task.setvariable variable=siteToDeploy;]' + stoppedSite.id);
console.log('##vso[task.setvariable variable=siteToStop;]' + startedSite.id);
}
Here is the output from Newman:
Here is the output from a command line task echoing the $(siteToDeploy) variable. It's getting set, but not the entire value.
I've tried escaping it, but that had no effect. I also created a static command line echo where the variable is set, and that worked fine. So I am not sure if it is a Newman issue or Azure having trouble picking up the variable.
The issue turned out to be how Azure tries to parse the Newman console log output. I had to add an extra PowerShell task to strip the ' characters coming back from the Newman output.
This is what it looks like:
##This task is only here because of how Newman is writing out the console.log
Param(
[string]$_siteToDeploy = (("$(siteToDeploy)") -replace "'",""),
[string]$_siteToStop = (("$(siteToStop)") -replace "'","")
)
Write-Host ("##vso[task.setvariable variable=siteToDeploy;]{0}" -f ($_siteToDeploy))
Write-Host ("##vso[task.setvariable variable=siteToStop;]{0}" -f ($_siteToStop))

How to read a particular line from a log file using Logstash

I have to read 3 different lines from log files based on some text and then output the fields to a CSV file.
Sample log data:
20110607 095826 [.] !! Begin test. Script filename/text.txt
20110607 095826 [.] Full path: filename/test/text.txt
20110607 095828 [.] FAILED: Test Failed()..
I have to read the file name after '!! Begin test. Script'. This is my conf file:
filter {
    grok {
        match => { "message" => "%{BASE10NUM:Date}%{SPACE:pat}{BASE10NUM:Number}%{SPACE:pat}[.]%{SPACE:pat}%{SPACE:pat}!! Begin test. Script%{SPACE:pat}%{GREEDYDATA:file}" }
        overwrite => ["message"]
    }
    if "_grokparserfailure" in [tags] {
        drop {}
    }
}
But it's not giving me a single record; it's parsing the full log file in JSON format with no parsed fields.
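For what it's worth, a rough, untested sketch of a grok filter that keeps only the '!! Begin test. Script' line and extracts the file name might look like this (note the % added before {BASE10NUM}, the escaped [.] literal, and the _grokparsefailure tag spelling):
filter {
    grok {
        match => { "message" => "%{BASE10NUM:date}%{SPACE}%{BASE10NUM:time}%{SPACE}\[\.\]%{SPACE}!! Begin test\. Script%{SPACE}%{GREEDYDATA:file}" }
    }
    # keep only the events that actually matched
    if "_grokparsefailure" in [tags] {
        drop {}
    }
}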

Examples of using SCons with knitr

Are there minimal, or even larger, working examples of using SCons and knitr to generate reports from .Rmd files?
Knitting a cleaning_session.Rmd file from the command line (bash shell) to derive an .html file may be done via:
Rscript -e "library(knitr); knit('cleaning_session.Rmd')"
In this example, Rscript and instructions are fed to a Makefile:
RMDFILE=test

html :
	Rscript -e "require(knitr); require(markdown); knit('$(RMDFILE).rmd', '$(RMDFILE).md'); markdownToHTML('$(RMDFILE).md', '$(RMDFILE).html', options=c('use_xhtml', 'base64_images')); browseURL(paste('file://', file.path(getwd(),'$(RMDFILE).html'), sep=''))"
In this answer https://stackoverflow.com/a/10945832/1172302, there is reportedly a solution using SCons. Yet, I did not test enough to make it work for me. Essentially, it would be awesome to have something like the example presented at https://tex.stackexchange.com/a/26573/8272.
[Updated] One working example is an Sconstruct file:
import os

environment = Environment(ENV=os.environ)

# define a `knitr` builder
builder = Builder(action = '/usr/local/bin/knit $SOURCE -o $TARGET',
                  src_suffix='Rmd')

# add builders as "Knit", "RMD"
environment.Append( BUILDERS = {'Knit' : builder} )

# define an `rmarkdown::render()` builder
builder = Builder(action = '/usr/bin/Rscript -e "rmarkdown::render(input=\'$SOURCE\', output_file=\'$TARGET\')"',
                  src_suffix='Rmd')
environment.Append( BUILDERS = {'RMD' : builder} )

# define source (and target files -- currently useless, since not defined above!)
# main cleaning session code
environment.RMD(source='cleaning_session.Rmd', target='cleaning_session.html')

# documentation of the Cleaning Process
environment.Knit(source='Cleaning_Process.Rmd', target='Cleaning_Process.html')

# documentation of data
environment.Knit(source='Code_Book.Rmd', target='Code_Book.html')
The first builder calls the custom script called knit, which in turn takes care of the target file/extension, here being cleaning_session.html. The src_suffix parameter is likely not needed at all in this example.
The second builder calls Rscript -e "rmarkdown::render(input='$SOURCE', output_file='$TARGET')".
The existence of $TARGETs (as in the Command wrapper example) ensures SCons won't repeat work if a target file is already up to date.
The custom script (whose source I can't retrieve currently) is:
#!/usr/bin/env Rscript
local({
    p = commandArgs(TRUE)
    if (length(p) == 0L || any(c('-h', '--help') %in% p)) {
        message('usage: knit input [input2 input3] [-n] [-o output output2 output3]
-h, --help to print help messages
-n, --no-convert do not convert tex to pdf, markdown to html, etc
-o output filename(s) for knit()')
        q('no')
    }
    library(knitr)
    o = match('-o', p)
    if (is.na(o)) output = NA else {
        output = tail(p, length(p) - o)
        p = head(p, o - 1L)
    }
    nc = c('-n', '--no-convert')
    knit_fun = if (any(nc %in% p)) {
        p = setdiff(p, nc)
        knit
    } else {
        if (length(p) == 0L) stop('no input file provided')
        if (grepl('\\.(R|S)(nw|tex)$', p[1])) {
            function(x, ...) knit2pdf(x, ..., clean = TRUE)
        } else {
            if (grepl('\\.R(md|markdown)$', p[1])) knit2html else knit
        }
    }
    mapply(knit_fun, p, output = output, MoreArgs = list(envir = globalenv()))
})
The only thing necessary now is to run scons.
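For reference, from the project root (where the Sconstruct file lives), a typical invocation might be:
scons       # build any out-of-date targets
scons -c    # remove the generated .html files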