Jenkinsfile - how to access other GitHub files?

I'm performing an API call in my Jenkinsfile that requires specifying a path to file 'A'. Assuming file A is located in the same repo, I am not sure how to refer to it when the Jenkinsfile runs.
I feel like this has been done before, but I can't find any resource on it. Any help is appreciated.

You don't say whether you are using a scripted or declarative Jenkinsfile, as the details differ, but the principle is the same either way. Basically, to do anything with a file you will need to be within a node block - essentially the controller opens a session on one of the agents and performs the actions there. You need to check out your repo on that node.
The scripted Jenkinsfile would look something like this (assuming you are not bothered about which node you are running on):
node("") {
checkout scm // "scm" equates to the configuration that the job was run with
// the whole repo will be now available
}
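A declarative Jenkinsfile works on the same principle. Here is a minimal sketch (the stage name and the path to file A are placeholders; in a Multibranch or "Pipeline script from SCM" job the initial checkout of the repo usually happens automatically before the stages run):
pipeline {
    agent any
    stages {
        stage('Read file A') {
            steps {
                checkout scm                          // make the repo available in the workspace
                script {
                    // path is relative to the repo root; adjust to wherever file A lives
                    def fileA = readFile 'path/to/fileA'
                    echo "file A is ${fileA.length()} characters long"
                }
            }
        }
    }
}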

Related

Where is a file created via Terraform code stored in Terraform Cloud?

I've been using Terraform for some time, but I'm new to Terraform Cloud. I have a piece of code that, if I run it locally, creates a .tf file under a folder that I specify, but if I run it with the Terraform CLI on Terraform Cloud this won't happen. I'll show it to you so it will be clearer for everyone.
resource "genesyscloud_tf_export" "export" {
directory = "../Folder/"
resource_types = []
include_state_file = false
export_as_hcl = true
log_permission_errors = true
}
So basically, when I launch this code with terraform apply locally, it creates a .tf file with everything I need. Where? It goes up one folder and stores the file under the folder "Folder".
But when I execute the same code on Terraform Cloud, obviously this doesn't happen. Does any of you have a workaround for this kind of issue? How can I manage to store this file, for example, in a GitHub repo when executing GitHub Actions? Thanks in advance.
The Terraform Cloud remote execution environment has an ephemeral filesystem that is discarded after a run is complete. Any files you instruct Terraform to create there during the run will therefore be lost after the run is complete.
If you want to make use of this information after the run is complete then you will need to arrange to either store it somewhere else (using additional resources that will write the data to somewhere like Amazon S3) or export the relevant information as root module output values so you can access it via Terraform Cloud's API or UI.
I'm not familiar with genesyscloud_tf_export, but from its documentation it sounds like it will create either one or two files in the given directory:
genesyscloud.tf or genesyscloud.tf.json, depending on whether you set export_as_hcl. (You did, so I assume it'll generate genesyscloud.tf.)
terraform.tfstate, if you set include_state_file. (You didn't, so I assume that file isn't important in your case.)
Based on that, I think you could use the hashicorp/local provider's local_file data source to read the generated file into memory once the MyPureCloud/genesyscloud provider has created it, like this:
resource "genesyscloud_tf_export" "export" {
directory = "../Folder"
resource_types = []
include_state_file = false
export_as_hcl = true
log_permission_errors = true
}
data "local_file" "export_config" {
filename = "${genesyscloud_tf_export.export.directory}/genesyscloud.tf"
}
You can then refer to data.local_file.export_config.content to obtain the content of the file elsewhere in your module and declare that it should be written into some other location that will persist after your run is complete.
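For example, a minimal sketch (not tested with this provider): the output value is the simplest way to surface the content in Terraform Cloud, while the aws_s3_object resource and bucket name below are hypothetical and assume you also configure the hashicorp/aws provider:
# Expose the generated configuration via Terraform Cloud's API/UI
output "export_config" {
  value = data.local_file.export_config.content
}

# ...or persist it somewhere that outlives the run, e.g. an S3 bucket
resource "aws_s3_object" "export_config" {
  bucket  = "my-export-bucket"                       # hypothetical bucket name
  key     = "genesyscloud/genesyscloud.tf"
  content = data.local_file.export_config.content
}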
This genesyscloud_tf_export resource type seems unusual in that it modifies data on local disk and so its result presumably can't survive from one run to the next in Terraform Cloud. There might therefore be some problems on the next run if Terraform thinks that genesyscloud_tf_export.export.directory still exists but the files on disk don't, but hopefully the developers of this provider have accounted for that somehow in the provider logic.

How to make Snakemake recognize Globus remote files using Globus CLI?

I am working in a high performance computing grid environment, where large-scale data transfers are done via Globus. I would like to use Snakemake to pull data from a Globus path, process the data, and then push the processed data to a different Globus path. Globus has a command-line interface.
Pulling the data is no problem, for I'd just create a rule that would run globus transfer to create the requisite local file. But for pushing the data back to Globus, I think I'll need a rule that can "see" that the file is missing at the remote location, and then work backwards to determine what needs to happen to create the file.
I could create local "proxy" files that represent the remote files. For example, I could make a rule for creating 'processed_data_1234.tar.gz' output files in a directory. These files would just be created using touch (thus empty), and the same rule would run globus transfer to push the files remotely. But then there's the overhead of making sure that the proxy files don't get out of sync with the real Globus-hosted files.
Is there a more elegant way to do this, akin to the Remote File capability? Is it difficult to add Globus CLI support to Snakemake? Thanks in advance for any advice!
Would it help to create a utility function that would generate a list of all desired files and compare it against the list of files available on Globus? Something like this (pseudocode; the endpoint ID and path in the globus ls call are placeholders):
import subprocess

def return_needed_files(wildcards):  # Snakemake passes wildcards to input functions (unused here)
    list_needed_files = []  # either hard-coded or generated with some logic
    # files already present at the remote location, listed via the Globus CLI
    list_available = subprocess.run(
        ["globus", "ls", "ENDPOINT_ID:/path"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return [i for i in list_needed_files if i not in list_available]

# include all the needed files in the all rule
rule all:
    input: return_needed_files
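For the push direction described in the question, a proxy-file rule could pair with this. A minimal sketch, assuming hypothetical endpoint IDs and paths:
rule push_processed:
    input: "local/processed_data_{id}.tar.gz"
    # empty local proxy file that stands in for the remote copy
    output: touch("pushed/processed_data_{id}.tar.gz")
    # globus transfer submits an asynchronous task; add a `globus task wait` step if the
    # rule should only finish once the transfer has actually completed
    shell:
        "globus transfer LOCAL_ENDPOINT_ID:{input} REMOTE_ENDPOINT_ID:/path/processed_data_{wildcards.id}.tar.gz"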

Is there a way in Terraform Enterprise to read the payload from VCS?

I have configured a webhook between GitHub and Terraform Enterprise correctly, so each time I push a commit, the Terraform module gets executed. What I want to achieve is to use part of the branch name where the push was made and pass it as a variable to the Terraform module.
I have read that the value of a variable can be HCL code, but I am unable to find the correct object to access the payload (or at least the branch name), so at this moment I think it is not possible to get that value directly from the workspace configuration.
If you get a workaround for this, it may also work for me.
At this point the only idea I have is to call the Terraform webhook using an API call.
Thanks in advance
OK, after several rounds of trial and error I found out that it is not possible to get that information in the Terraform module if you are using the VCS mode. So, in order to be able to get the branch, I have these options:
Use several workspaces
You can configure a workspace for each branch, so you can create a variable and select that branch in each workspace. The problem is that you will be repeating yourself with this option.
Use the Terraform CLI and a GitHub Action
I used this fine tutorial from HashiCorp for creating a GitHub Action that uses Terraform Cloud. It gets 99% of the job done. For passing a variable you must be aware that there are two methods, using a file or using an environment variable (check that information on the HashiCorp site here). So using:
terraform apply -var="branch=value"
won't work. In my case I used the tfvars approach, so in my GitHub Action I put this snippet:
- name: Setup Terraform variables
  id: vars
  run: |-
    cat > terraform.auto.tfvars <<EOF
    branch = "${GITHUB_REF#refs/*/}"
    EOF
I defined a variable within Terraform called branch, and I was able to get and work with this value.
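The declaration on the Terraform side is just a plain variable named branch, something like this minimal sketch:
variable "branch" {
  type        = string
  description = "Branch name passed in from the GitHub Action via terraform.auto.tfvars"
}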

How to pass all global credentials to Jenkins pipeline

This is my first question posted on Stack Overflow, so in case I did something incorrectly please let me know.
Description
I am currently working on translating freestyle projects to declarative pipelines in Jenkins (Jenkinsfiles kept in a Git repo). The original freestyle job triggered a PowerShell script which needed access to global name/password pairs defined in the Mask Passwords plugin section in Configure System. The solution to this problem was an additional tick in the project itself (unfortunately I am not allowed to add screenshots to posts yet, hence the editor uploaded the screenshot to imgur and pasted a link - please see Screenshot 1):
Screenshot 1
Therefore I started looking for a possible implementation of such a solution in the Jenkinsfile, however without luck.
My problem
When the script is triggered from the pipeline, it errors out stating that it cannot find the relevant passwords (PowerShell refers to those credentials as environment variables). This works fine when run from the freestyle project.
I reckon this is caused by the pipeline not being able to reach the previously mentioned credentials.
What I tried
Wrapping the step in the block below:
wrap([$class: 'MaskPasswordsBuildWrapper']) {
    bat(...)  // batch step launching the PowerShell script
}
Then wrapping the above block, containing the relevant step, into:
script {
    wrap(...)
}
But neither of them worked.
I have taken a look at other plugins like the Credentials Binding Plugin or the Credentials Plugin, but those only allow binding/passing one credential per step, and I need to pass all the credentials specified in Jenkins (I am open to moving the saved credentials to any other location within Jenkins).
I have looked at adding an environment variable:
credentials('Credentials-ID')
But the problem is the same as with the plugins mentioned above.
By any chance, has anyone come across a similar situation and knows what can be done in order to allow the pipeline to access all the credentials specified in Jenkins, instead of binding/passing them one at a time?
All tips are very welcome!
You can do this, and the environment variable will then be available throughout your job. You could define multiple environment variables this way too.
pipeline {
    agent any
    environment {
        // Use credentials() to hide the environment variable's value in the output
        MY_PERSONAL_TOKEN = credentials('Credentials-ID')
    }
    stages {
        stage('Test Stage') {
            steps {
                script {
                    // do what you need to
                }
            }
        }
    }
}
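With that in place, the credential is exposed in the process environment of every step in the job, so a PowerShell script launched from a step can read it directly. A minimal sketch with a hypothetical script path:
steps {
    // the script can read the credential as $env:MY_PERSONAL_TOKEN; for username/password
    // credentials Jenkins also sets MY_PERSONAL_TOKEN_USR and MY_PERSONAL_TOKEN_PSW
    powershell '.\\scripts\\run-deploy.ps1'
}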

GitHub Actions: Are there security concerns using an external action in a workflow job?

I have a workflow that FTPs files by using an external action from someuser:
- name: ftp deploy
  uses: someuser/ftp-action@master
  with:
    config: ${{ secrets.FTP_CONFIG }}
Is this a security concern? For example, could someuser change ftp-action@master to access my secrets.FTP_CONFIG? Should I copy/paste their action into my workflow instead?
If you use ftp-action@master, then every time your workflow runs it will fetch the master branch of the action and build it. So yes, I believe it would be possible for the owner to change the code to capture secrets and send them to an external server under their control.
What you can do to avoid this is use a specific version of the action and review its code. You can use a commit hash to refer to the exact version you want, such as ftp-action@efa82c9e876708f2fedf821563680e2058330de3. You could use a tag if it has release tags, e.g. ftp-action@v1.0.0, although this is maybe not as secure because tags can be changed.
Alternatively, and probably most secure, is to fork the action repository and reference your own copy of it: my-fork/ftp-action@master.
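In workflow terms, pinning to a reviewed commit just means replacing the branch reference with the hash (reusing the example hash quoted above):
- name: ftp deploy
  uses: someuser/ftp-action@efa82c9e876708f2fedf821563680e2058330de3  # pinned to a reviewed commit
  with:
    config: ${{ secrets.FTP_CONFIG }}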
The GitHub help page does mention:
Anyone with write access to a repository can read and use secrets.
If someuser does not have write access to your repository, there should be no security issue.
As noted in the other answer, you should also pin the exact commit of the action you are using, in order to make sure it does not change its behavior without your knowledge.