Jenkins Pipeline - Create file in workspace (Windows Slave) - powershell

For a number of reasons, it would be really useful if I could create a file from a Jenkins pipeline and put it in my workspace. If I could do this, I would avoid pulling in some repositories that I currently clone for just one or two files, keep those files in a maintainable place, and also be able to create temporary PowerShell scripts, working around a limitation of the solution described in https://stackoverflow.com/a/42576572
This might be possible through a pipeline utility, although https://jenkins.io/doc/pipeline/steps/pipeline-utility-steps/ doesn't list any such utility; or it might be possible using a batch script, as long as that can be passed in as a string.

You can do something like this:
node('') {
    stage('test') {
        // create a file in the workspace from a batch step
        bat """
        echo something > file.txt
        """
        // read it back into a Groovy variable
        String out = readFile('file.txt').trim()
        print out            // prints the variable Groovy-style
        bat "echo ${out}"    // later batch steps can interpolate the Groovy variable
        // (to run Groovy functions defined in a workspace file, use the load step instead)
    }
}
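If the goal is just to drop known content into the workspace (for example a temporary PowerShell script), the built-in writeFile step avoids the batch quoting issues entirely. A minimal sketch, assuming a Pipeline job on a Windows agent; the file name do-stuff.ps1 is only an illustration:

node('') {
    stage('write') {
        // writeFile is a core Pipeline step and works the same on Windows and Linux agents
        writeFile file: 'do-stuff.ps1', text: 'Write-Output "hello from a generated script"'
        // the generated script can then be run with the powershell step
        powershell './do-stuff.ps1'
    }
}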

PowerShell call a script from another script as if in a different folder

I use the video transcoding tools made by Don Melton over on GitHub to compress self-filmed videos. Now I would like to automate this task with a PowerShell script that loops over the contents of a folder, passes each file as an input argument to the tool, and puts the output into a separate folder. My problem is that the tool has no option to specify an output location; it always places the output files in the directory it is called from. So when I cd into an "output" directory "next to" the one containing my input files, I can then call
other-transcode ../input/file.mp4
and the output file of the same name as the input file will be placed in the output directory.
Now, when I want to use the command in a script, how do I tell PowerShell to run it as if it had been typed manually into a shell that was in the output directory at that moment?
For context, this is my end goal, but I think it is easier to split the complicated question into multiple smaller ones.
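One way to achieve this, sketched below under the assumption that other-transcode is on the PATH and the folder layout matches the description above, is to temporarily change the working directory with Push-Location/Pop-Location inside the loop:

# assumed paths; adjust to the real layout
$inputDir  = 'C:\videos\input'
$outputDir = 'C:\videos\output'

Get-ChildItem -Path $inputDir -Filter *.mp4 | ForEach-Object {
    Push-Location $outputDir      # behave as if the shell were sitting in the output folder
    other-transcode $_.FullName   # the tool writes its output into the current directory
    Pop-Location                  # restore the previous location
}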

How to address files in a GitHub Action without using environment variables

I made a simple Python script that accepts the paths of text files as input arguments, appends them to each other, and creates a single file.
My question is: how do I address those files in a GitHub Action without using the predefined environment variables?
Is there any way for the action script to browse (tree) those files and feed them to the Python script?
First, your GitHub Action can define and take a parameter, as seen in actions/cat-for-github-actions: that does not use an environment variable.
Second, you can use a path filter in order to trigger your GitHub Action on any txt file change.
But if you want to list files, you need to use the predefined environment variable ${{ github.workspace }}, as shown here.
You can then call a Python script, which will list/filter files from the checked-out Git repository commit.
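On the script side, a minimal sketch of such a concatenation script (the names concat.py and combined.txt are placeholders, not part of the original question):

# concat.py - append the given text files into a single output file
import sys

def main(paths, out_path="combined.txt"):
    with open(out_path, "w", encoding="utf-8") as out:
        for path in paths:
            with open(path, encoding="utf-8") as f:
                out.write(f.read())

if __name__ == "__main__":
    main(sys.argv[1:])

A workflow run step could then invoke it as python concat.py *.txt, since the step's shell runs in ${{ github.workspace }} by default and expands the glob there.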

Running a job with a different file without reloading the file

I created a job that should be reusable for new files. All the activities in the job, the maps, and everything else will remain the same except for the file name. I already tried it once, but it seems that I need to re-"load" the file and remap everything again, which is inefficient. Is there any way for me to pass a different file to the job without remapping, reconfiguring, and reloading anything?
You have multiple options for allowing a DataStage parallel job to use a different filename for input on each job run:
When using either the Sequential File stage or the File Connector stage, instead of typing the actual filename, you can enter the name of a job parameter that has been defined on the Parameters tab of the job properties dialog. For example, if you define a string parameter myFile, then in the filename field of the input stage you would enter #myFile#, and at job run time that would be replaced by whatever the current value of the myFile parameter is. If you run the job manually from the Director/Designer clients, the job run dialog lets you specify a value for each job parameter. If you start the job via the dsjob command, there are options to pass job parameters on the command line (see the dsjob example after these options). You also have the option to use parameter set files that you can modify prior to the job run.
Another option would be to use a file location and pattern instead of a specific file name. Both the Sequential File stage and the File Connector stage let you specify a pattern, for example: /data/my_input_files/*.txt
Then, each time you run the job, it will read any files at that location matching the above pattern, so it can process multiple files. However, to prevent re-processing files from prior job runs, you will want to clean up any files at that location after the job completes. When you have new files to process, just put them in that directory and re-run the job.
If all the files share a similar data structure, you only need to implement one parallel job. And if the file names follow a similar pattern, such as 1234ab.xls, 1234vd.xls, 1234gd.xls, ..., you could pass the file name as 1234??.xls in the file name parameter of the sequence job that contains the parallel job to be executed (and use that parameter as the file name in the parallel job).
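As a concrete illustration of the first option, the parameter value can be supplied when the job is started from the command line with dsjob (project, job, and file names below are placeholders):

dsjob -run -param myFile=/data/my_input_files/todays_extract.txt MyProject MyParallelJob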

Jenkins Powershell Output

I would like to capture the output of some variables to be used elsewhere in the job, using the Jenkins PowerShell plugin.
Is this possible?
My goal is to build the latest tag somehow, and the PowerShell script was meant to achieve that. Outputting to a text file would not help, and environment variables can't be used because the process is seemingly forked, unfortunately.
Besides EnvInject, another common approach for sharing data between build steps is to store results in files located in the job workspace.
The idea is to skip using environment variables altogether and just write/read files.
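A minimal sketch of that idea in a Pipeline job (the file name version.txt and the git describe command are assumptions, not part of the original question): the PowerShell step writes the value into the workspace and a later step reads it back:

node {
    stage('produce') {
        // write the latest tag into a workspace file
        powershell 'git describe --tags --abbrev=0 | Out-File -Encoding ASCII version.txt'
    }
    stage('consume') {
        // read it back into a Groovy variable for later steps
        def latestTag = readFile('version.txt').trim()
        echo "Latest tag is ${latestTag}"
    }
}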
It seems that the only solution is to combine this with the EnvInject plugin. You can create a text file with key=value pairs from PowerShell and then export them into the build using the EnvInject plugin.
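For that route, the PowerShell build step can emit a properties file of key=value pairs which the 'Inject environment variables' (EnvInject) step then reads; the file and variable names here are illustrative:

# write key=value pairs for EnvInject to pick up
$latestTag = git describe --tags --abbrev=0
"LATEST_TAG=$latestTag" | Out-File -FilePath build.properties -Encoding ASCII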
You should make the workspace persistent for this job; then you can save the data you need to a file. Other jobs can then access this persistent workspace, or use it as their own, as long as they are on the same node.
Another option would be to use Jenkins' built-in artifact retention: at the end of the job's configure page there is an option to retain files specified by a match (e.g. *.xml or last_build_number). These are then given a specific address that can be used by other jobs regardless of which node they are on; the address can be on the master or the node, IIRC.
For the simple case of wanting to read a single object from Powershell you can convert it to a JSON string in Powershell and then convert it back in Groovy. Here's an example:
def pathsJSON = powershell(returnStdout: true, script: "ConvertTo-Json ((Get-ChildItem -Path *.txt) | select -Property Name)")
def paths = []
if (pathsJSON != '') {
    paths = readJSON text: pathsJSON
}

Log4Perl: How do I change the logger file used from running code? (After a fork)

I have an ETL process set up in Perl to process a number of files and load them into a database.
Recently, for performance reasons, I set up the code to run in parallel, through use of a fork() call and a call to system("perl someOtherPerlProcess.pl $arg1 $arg2").
I end up with about 12 instances of someOtherPerlProcess.pl running with different arguments, and each of these processes works through one directory's worth of files (corresponding to a single table in our database).
The application's main functions work, but I am having trouble figuring out how to configure my logging.
Ideally, I would like all the someOtherPerlProcess.pl instances to share the same $log_config value to initialize their loggers, but have each of them create a log file in the directory it is working on.
I haven't been able to figure out how to do that. I also noticed that in the directory I call these Perl scripts from, I see several files (ARRAY(0x260eec), ARRAY(0x313f8), etc.) that contain all my logging messages!
Is there a simple way to change the log4perl.appender.A1.filename value from running code?
Or to otherwise dynamically configure the file name we use, but use all other values from a config file?
I came up with a less than ideal solution for this, which is to configure my logger from someOtherPerlProcess.pl directly.
use Log::Log4perl qw(get_logger);

my $FORKED_LOG_CONF = "log4perl.appender.A1.filename=$directory_to_load/log.txt
log4perl.rootLogger=WARN, A1
log4perl.appender.A1=Log::Log4perl::Appender::File
log4perl.appender.A1.mode=append
log4perl.appender.A1.autoflush=1
log4perl.appender.A1.layout=PatternLayout
log4perl.appender.A1.layout.ConversionPattern=[%p] %d{yyyy-MM-dd HH:mm:ss}: %m%n";

# Logger start-up
Log::Log4perl::init( \$FORKED_LOG_CONF );
my $logger = get_logger();
$directory_to_load is the process-specific part of the configuration; it works because the running Perl process has a (local) value for that variable, but the same interpolation would fail if used in an external config file.
I would be happy to hear of any alternative solutions.
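One alternative worth mentioning (hedged: it relies on the file_switch method documented for Log::Log4perl::Appender::File) is to initialize from the shared $log_config and then retarget the A1 appender's file from running code after the fork:

use Log::Log4perl qw(get_logger);

# initialize from the shared configuration first
Log::Log4perl::init($log_config);

# then point appender A1 at a per-process log file
my $file_appender = Log::Log4perl->appender_by_name('A1');
$file_appender->file_switch("$directory_to_load/log.txt");

my $logger = get_logger();
$logger->warn("now logging to the per-directory file");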
In your config file:
log4perl.appender.A1.filename=__LOGFILE__
In your script:
use File::Slurp;
use Log::Log4perl;

my $log_cfg = read_file( $log_cfgfile );
my $logfile = "$directory_to_load/log.txt";
$log_cfg =~ s/__LOGFILE__/$logfile/;
Log::Log4perl::init( \$log_cfg );