I made a simple Python script that accepts the paths of text files as input arguments, appends them to each other, and creates a single file.
My question is: how do I address those files in a GitHub Action without using predefined environment variables?
Is there any way for the action script to browse (walk) the tree of those files and feed them to the Python script?
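For reference, a minimal sketch of the kind of script described (call it concat.py; the convention that the last argument names the output file is an assumption, not from the question):

#!/usr/bin/env python3
"""Append the given text files to each other, producing a single file."""
import sys

def main() -> None:
    *inputs, output = sys.argv[1:]  # assumption: last argument is the output path
    with open(output, "w") as out:
        for path in inputs:
            with open(path) as f:
                out.write(f.read())

if __name__ == "__main__":
    main()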
First, your GitHub Action can define and take a parameter, as seen in actions/cat-for-github-actions: that does not use an environment variable.
Second, you can use a path filter in order to trigger your GitHub Action on any .txt file change.
But if you want to list files, you need the predefined ${{ github.workspace }} context (surfaced to scripts as the GITHUB_WORKSPACE environment variable), as shown here.
You can then call a Python script, which will list/filter files from the checked-out Git repository commit.
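As a sketch, the two pieces combined in a minimal workflow (the glob, script name, and output name are assumptions, not from the question):

on:
  push:
    paths:
      - '**.txt'          # path filter: run only when a .txt file changes

jobs:
  concat:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Feed every tracked .txt file to the script
        run: |
          # Enumerate the files with git instead of a predefined variable.
          python concat.py $(git ls-files '*.txt') combined.txt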
I would like to know if it is possible to give inputs to a Concourse pipeline from the UI.
I know we can add input details to a git repo and read them from the repo, but that means a code commit for every tiny input.
Is Jenkins better than Concourse for this scenario?
I searched the internet for a way to give inputs to a Concourse pipeline, but I did not find a solution.
Manual inputs via the UI are not a thing in Concourse.
FWIW: when I need frequent inputs and want to avoid git commits for that purpose, I use an s3 resource (versioned file) in my pipeline as an input, for example with a send_input.sh script like this:
#!/bin/bash
# Write the input to a temp file, then upload it to the S3 object
# that the pipeline's s3 resource watches.
echo "$1" > /tmp/input.txt
aws s3 cp /tmp/input.txt s3://my-bucket/my-concourse-resource-file.txt
and then
./send_input.sh "this is my input"
then the pipeline picks it up and uses it in my workflow.
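For completeness, the pipeline side can look roughly like this (bucket, file name, and credential vars are placeholders; the s3 resource triggers the job on each new upload):

resources:
  - name: manual-input
    type: s3
    source:
      bucket: my-bucket
      versioned_file: my-concourse-resource-file.txt   # requires S3 versioning on the bucket
      access_key_id: ((aws_access_key))
      secret_access_key: ((aws_secret_key))

jobs:
  - name: use-input
    plan:
      - get: manual-input
        trigger: true        # each upload produces a new version and triggers the job
      - task: show-input
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: {repository: alpine}
          inputs:
            - name: manual-input
          run:
            path: cat
            args: [manual-input/my-concourse-resource-file.txt]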
I use the video transcoding tools made by Don Melton over on GitHub to compress self-filmed videos. Now I would like to automate this task with a PowerShell script that loops over the contents of a folder, passing them as input arguments to the tool, and puts the output into a separate folder. My problem is that the tool provides no option to set an output location; it always places the output files in the directory from which it is called. So when I cd into an "output" directory "next to" the one containing my input files, I can then call
other-transcode ../input/file.mp4
and the output file of the same name as the input file will be placed in the output directory.
Now that I want to use the command in a script, how do I tell PowerShell to run it as if it had been typed manually into a shell whose current directory was the output directory?
For context, this is my end goal, but I think it is easier to split the complicated question into multiple ones.
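For illustration, a minimal sketch of that loop, assuming Push-Location/Pop-Location is an acceptable way to change the effective working directory (the folder names are placeholders):

# other-transcode writes into the current directory, so temporarily
# make .\output the current directory while looping over the inputs.
Push-Location -Path .\output
try {
    Get-ChildItem -Path ..\input -Filter *.mp4 | ForEach-Object {
        other-transcode $_.FullName
    }
}
finally {
    Pop-Location   # restore the previous working directory
}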
I have a Yocto recipe file.
However, I want to set the value of a variable in it by exporting a shell variable.
For example, I modified oe-init-build-env (which calls 'svn_util') and added:
export REPO_BRANCH_ROOT=${REPO_BRANCH_ROOT}
The REPO_BRANCH_ROOT variable is set by the 'svn_util' utility, which looks at my current branch.
Now, in my recepie.bb file:
SRC_URI = "\
    svn://${REPO_ROOT_NO_URI}/${REPO_BRANCH_ROOT}/sample;module=mymodule;protocol=http;rev=HEAD \
"
However, do_fetch fails as follows:
Fetcher failure for URL: 'svn://${REPO_ROOT_NO_URI}/${REPO_BRANCH_ROOT}/sample;module=mymodule;protocol=http;rev=HEAD'. Unable to fetch URL from any source.
How do I make the .bb file aware of my current branch and repository URI? I do not want to hard-code it in the .bb file or in local.conf, because the .bb file, when checked in to a different branch, should still work correctly across all branches.
Or, to rephrase the question: how can a shell-exported variable be accessed in the recipe file?
Got my answer from another post:
https://www.yoctoproject.org/docs/3.1/bitbake-user-manual/bitbake-user-manual.html#var-bb-BB_ENV_EXTRAWHITE
In this case I have to add

export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE REPO_SVNREV REPO_ROOT_NO_URI REPO_BRANCH_ROOT"

in oe-init-build-env. BB_ENV_EXTRAWHITE tells BitBake which extra environment variables it may read into its datastore; without whitelisting them, BitBake scrubs the environment and ${REPO_BRANCH_ROOT} stays unset inside the recipe.
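To verify that the variable actually reaches BitBake, you can dump the recipe's expanded environment (standard bitbake usage; substitute your own recipe name):

# Print the expanded datastore for the recipe and look for the variable.
bitbake -e your-recipe | grep REPO_BRANCH_ROOT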
For a number of reasons, it would be really useful if I could create a file from a Jenkins pipeline and put it in my workspace. If I could do this, I could avoid pulling in some repositories that I currently pull in for just one or two files, keep those files in a maintainable place, and also create temporary PowerShell scripts, working around a limitation of the solution described in https://stackoverflow.com/a/42576572
This might be possible through a Pipeline utility, although https://jenkins.io/doc/pipeline/steps/pipeline-utility-steps/ doesn't list any such utility; or it might be possible using a batch script, as long as that can be passed in as a string.
You can do something like this:

node('') {
    stage('test') {
        // Create a file in the workspace from a batch step.
        bat 'echo something > file.txt'
        // Read it back into a Groovy variable.
        String out = readFile('file.txt').trim()
        print out // prints the variable, Groovy style
        // (If the file contained Groovy, the built-in load step could evaluate it.)
        withEnv(["OUT=${out}"]) {
            bat 'type file.txt & echo %OUT%' // batch steps see OUT as an environment variable
        }
    }
}
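Note that the Pipeline also ships a built-in writeFile step (with a matching readFile), which creates the file without shelling out at all:

node {
    stage('test') {
        // Create the file directly from the pipeline, no bat/sh needed.
        writeFile file: 'file.txt', text: 'something'
        echo readFile('file.txt').trim()
    }
}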
I would like to capture the output of some variables to be used elsewhere in the job, using the Jenkins PowerShell plugin.
Is this possible?
My goal is to build the latest tag somehow, and the PowerShell script was meant to achieve that. Outputting to a text file would not help, and environment variables can't be used because the process is seemingly forked, unfortunately.
Besides EnvInject, another common approach for sharing data between build steps is to store results in files located in the job workspace.
The idea is to skip environment variables altogether and just write/read files.
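For example, a minimal sketch of that idea (the file name is a placeholder, and the git invocation assumes the "latest tag" goal from the question):

# Step 1 (PowerShell): capture the value and persist it in the workspace.
$latestTag = git describe --tags --abbrev=0
Set-Content -Path "$env:WORKSPACE\latest_tag.txt" -Value $latestTag

# Step 2 (a later build step): read it back.
$latestTag = (Get-Content "$env:WORKSPACE\latest_tag.txt").Trim()
Write-Output "Building tag $latestTag"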
It seems that the only solution is to combine this with the EnvInject plugin. You can create a text file with key-value pairs from PowerShell, then export them into the build using the EnvInject plugin.
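EnvInject consumes plain key=value properties files, so the PowerShell side can be as small as this (file name is a placeholder):

# Write a properties file for the "Inject environment variables" build step.
"LATEST_TAG=$(git describe --tags --abbrev=0)" |
    Out-File -FilePath "$env:WORKSPACE\build.props" -Encoding ASCII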
You should make the workspace persistent for this job; then you can save the data you need to a file. Other jobs can then access this persistent workspace, or use it as their own, as long as they are on the same node.
Another option would be to use Jenkins' built-in artifact retention: at the end of the job's configure page there is an option to retain files matching a pattern (e.g. *.xml or last_build_number). These are then given a specific address that can be used by other jobs regardless of which node they are on; the address can point at the master or the node, IIRC.
For the simple case of wanting to read a single object from PowerShell, you can convert it to a JSON string in PowerShell and then convert it back in Groovy. Here's an example:
// Serialize the PowerShell objects to JSON on stdout...
def pathsJSON = powershell(returnStdout: true,
        script: 'ConvertTo-Json @((Get-ChildItem -Path *.txt) | Select-Object -Property Name)')
def paths = []
// ...then parse it back in Groovy (readJSON comes from the Pipeline Utility Steps plugin).
if (pathsJSON?.trim()) {
    paths = readJSON text: pathsJSON // @(...) above forces a JSON array even for a single file
}