How to pass Dependabot OPTIONS properties to dependabot-script in Azure DevOps Pipeline - azure-devops

After following guides like this one I am able to successfully run dependabot against my Azure DevOps repo and it auto-creates PRs. The issue is that the customizations I need to make, such as ignoring specific packages (which the dependabot documentation says is possible here), are not working.
Not sure if it is the way I am composing the options object or something else, but no values seem to be honored.
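For reference, the ignore rule I am trying to reproduce is the one described in the dependabot.yml documentation, roughly like this (I realize dependabot-script's OPTIONS JSON may expect a different shape, which may be part of my problem):

version: 2
updates:
  - package-ecosystem: "nuget"
    directory: "/MyApp/"
    ignore:
      - dependency-name: "NLog*"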
Here is what my Azure DevOps Pipeline looks like:
trigger:
- main
jobs:
- job: dependabot
  displayName: Dependabot Execution
  pool:
    vmImage: 'ubuntu-latest'
  variables:
  - name: DIRECTORY_PATH
    value: /MyApp/
  - name: PACKAGE_MANAGER
    value: nuget
  - name: PROJECT_PATH
    value: someDomain/someProject/_git/my-app
  - name: OPTIONS
    value: |
      {"ignore":[{"dependency-name":"NLog*"}]}
    # {"ignore_conditions":[{"dependency-name":"NLog*"}]} # also tried and did not work
  steps:
  - script: git clone https://github.com/dependabot/dependabot-script.git
    displayName: Clone Dependabot config repo
  - script: |
      cd dependabot-script
      docker build -t "dependabot/dependabot-script" -f Dockerfile .
    displayName: Build Dependabot Image
  - script: |
      docker run --rm -e AZURE_ACCESS_TOKEN='$(PAT)' \
        -e GITHUB_ACCESS_TOKEN='$(GHPAT)' \
        -e PACKAGE_MANAGER='$(PACKAGE_MANAGER)' \
        -e PROJECT_PATH='$(PROJECT_PATH)' \
        -e DIRECTORY_PATH='$(DIRECTORY_PATH)' \
        -e OPTIONS='$(OPTIONS)' \
        dependabot/dependabot-script
    displayName: Run Dependabot
And here is the output when the pipeline runs:
Running with options: {:ignore=>[{:"dependency-name"=>"NLog*"}]}
Fetching nuget dependency files for someDomain/someProject/_git/my-app
Parsing dependencies information
- Updating NLog (from 5.1.0)… submitted
- Updating System.Data.SqlClient (from 4.8.4)… submitted
Done
Finishing: Run Dependabot
As you can see, 2 PRs are created, which is great, except the NLog one should have been ignored/skipped. I have also tried other options, such as a commit-message prefix, and those were not honored either.
Any help is appreciated!

Related

Checkov scan particular folder or PR custom branch files

Trying to run Checkov (for IaC validation) via Azure DevOps YAML pipelines, for ARM template files stored in Azure DevOps version control. The code below:
trigger: none
pool:
  vmImage: ubuntu-latest
stages:
  - stage: 'runCheckov'
    displayName: 'Checkov - Scan ARM files'
    jobs:
      - job: 'RunCheckov'
        displayName: 'Checkov solution'
        steps:
          - bash: |
              docker pull bridgecrew/checkov
            workingDirectory: $(System.DefaultWorkingDirectory)
            displayName: 'Pull bridgecrew/checkov image'
          - bash: |
              docker run \
                --volume $(pwd):/scripts bridgecrew/checkov \
                --directory /scripts \
                --output junitxml \
                --soft-fail > $(pwd)/CheckovReport.xml
            workingDirectory: $(System.DefaultWorkingDirectory)
            displayName: 'Run checkov'
          - task: PublishTestResults@2
            inputs:
              testRunTitle: 'Checkov run results'
              failTaskOnFailedTests: false
              testResultsFormat: 'JUnit'
              testResultsFiles: 'CheckovReport.xml'
              searchFolder: '$(System.DefaultWorkingDirectory)'
              mergeTestResults: false
              publishRunAttachments: true
            displayName: 'Publish Test results'
The problem: how do I change the path/folder of ARM templates to scan? Right now it scans all ARM templates found under my whole repo1, regardless of what --directory value I set.
Also, how can I scan only the files committed to a custom branch during a PR review, so the PR triggers the build but the build scans only the files in that branch? I know how to set up the build trigger via the DevOps repository settings, but again, how do I make sure the pipeline scans only the files from the particular PR commit, not the whole repo1 (and master branch)?
I recommend you use the Docker image bridgecrew/checkov to set up a container job to run the Checkov scan. The container job will run all the tasks of the job inside a Docker container started from this image.
In the container job, you can check out the source repository into the container, then use a script task (such as a Bash task) to run the relevant Checkov CLI command to scan the files. On the script task, you can use the workingDirectory option to specify the path/folder the command runs in. Normally, the command will only act on files that are in the specified directory and its subdirectories.
If you want to scan only the files from a specific branch in the job, you can clone/check out that branch into the working directory of the job in the container and then, as above, use the Checkov CLI to scan files under the specified directory.
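A minimal sketch of such a container job, assuming the ARM templates live in a folder named arm-templates (the folder name and the report path are assumptions, not part of your pipeline):

jobs:
  - job: RunCheckovContainer
    displayName: 'Checkov container job'
    pool:
      vmImage: ubuntu-latest
    container: bridgecrew/checkov:latest
    steps:
      - checkout: self
      - bash: |
          # Only files under the working directory (and its subdirectories) are scanned.
          checkov --directory . --output junitxml --soft-fail > $(Build.SourcesDirectory)/CheckovReport.xml
        workingDirectory: '$(Build.SourcesDirectory)/arm-templates'
        displayName: 'Scan only the arm-templates folder'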
[UPDATE]
In the pipeline job, you can try to call the Azure DevOps REST API "Commits - Get Changes" to get all the changed files and folders for the particular commit.
Then use the Checkov CLI with the parameter --directory (-d) or --file (-f) to scan the specified file or folder.
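A hedged sketch of how those two pieces could fit together in a Bash step; the {organization}/{project}/{repositoryId} URL segments are placeholders, and the jq filter assumes the documented response shape with a changes[].item.path field:

- bash: |
    # Ask Azure DevOps which files changed in the commit that triggered this run.
    CHANGES_URL="https://dev.azure.com/{organization}/{project}/_apis/git/repositories/{repositoryId}/commits/$(Build.SourceVersion)/changes?api-version=6.0"
    curl -s -u ":${SYSTEM_ACCESSTOKEN}" "$CHANGES_URL" \
      | jq -r '.changes[].item.path' \
      | grep -E '\.json$' \
      | while read -r changed_file; do
          # Scan each changed file individually with --file instead of --directory.
          docker run --volume $(pwd):/scripts bridgecrew/checkov \
            --file "/scripts${changed_file}" \
            --soft-fail
        done
  displayName: 'Scan only the files changed in this commit'
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)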

How to execute a remote script in a reusable GitHub workflow

I have this workflow in a repo called terraform-do-database and I'm trying to use a reusable workflow coming from the public repo foo/git-workflows/.github/workflows/tag_validation.yaml@master
name: Tag Validation
on:
  pull_request:
    branches: [master]
  push:
    branches:
      - '*'        # matches every branch that doesn't contain a '/'
      - '*/*'      # matches every branch containing a single '/'
      - '**'       # matches every branch
      - '!master'  # excludes master
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
jobs:
  tag_check:
    uses: foo/git-workflows/.github/workflows/tag_validation.yaml@master
And this is the reusable workflow file from the public git-workflows repo that contains the script that should run. What is happening is that the workflow tries to use a script inside the repo terraform-do-database instead:
name: Tag Validation
on:
  pull_request:
    branches: [master]
  workflow_call:
jobs:
  tag_check:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v3
      # Runs a single command using the runners shell
      - name: Verify the tag value
        run: ./scripts/tag_verify.sh
So the question: how can I make the workflow use the script stored in the git-workflows repo instead of terraform-do-database?
I want to have a single repo where I can keep the workflow and the scripts; I don't want to have everything duplicated inside all my repos.
I have found that if I wrap the script into a composite action, I can use the GitHub context github.action_path to locate the scripts.
Example:
run: ${{ github.action_path }}/scripts/foo.sh
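For example, a composite action living in the git-workflows repo could look roughly like this (the tag-verify folder name is an assumption; the only requirement is that action.yml and the script ship together):

# git-workflows repo: tag-verify/action.yml
name: 'Verify tag'
description: 'Runs tag_verify.sh from the folder this action lives in'
runs:
  using: 'composite'
  steps:
    - name: Verify the tag value
      run: ${{ github.action_path }}/scripts/tag_verify.sh
      shell: bash

A caller would then reference the action directly, e.g. uses: foo/git-workflows/tag-verify@master, so the script travels with the action instead of being looked up in the calling repo.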
One way to go about this is to perform a checkout inside your reusable workflow that clones the content of the repo where your scripts are; only then can you access them. It's not the cleanest solution, but it works.
Perform a second checkout to clone the repo that has the reusable workflow into a directory reusable-workflow-repo:
- name: Checkout reusable workflow dir
  uses: actions/checkout@v3
  with:
    repository: <your-org>/terraform-do-database
    token: ${{ secrets.GIT_ACCESS_TOKEN }}
    path: reusable-workflow-repo
Now you have all the code you need inside reusable-workflow-repo. Use ${GITHUB_WORKSPACE} to find the current path and simply append the path to the script.
- name: Verify the tag value
  run: ${GITHUB_WORKSPACE}/reusable-workflow-repo/scripts/tag_verify.sh
I was able to solve it by adding a few more commands to manually download the script and execute it.
steps:
  # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
  - uses: actions/checkout@v3
  # Runs a single command using the runners shell
  - name: Check current directory
    run: pwd
  - name: Download the script
    run: curl -o $PWD/tag_verify.sh https://raw.githubusercontent.com/foo/git-workflows/master/scripts/tag_verify.sh
  - name: Give script permissions
    run: chmod +x $PWD/tag_verify.sh
  - name: Execute script
    run: $PWD/tag_verify.sh
Following Kaleby Cadorin's example, but for the case where the script is in a private repository:
- name: Download & run script
  run: |
    curl --header "Authorization: token ${{ secrets.MY_PAT }}" \
      --header 'Accept: application/vnd.github.raw' \
      --remote-name \
      --location https://raw.githubusercontent.com/COMPANY/REPO/BRANCH/PATH/script.sh
    chmod +x script.sh
    ./script.sh
Note: GITHUB_TOKEN doesn't seem to work here; a PAT is required.
According to this thread on github-community, the script needs to be downloaded/checked out separately.
The "reusable" workflow you posted is not reusable in this sense: because it does not download the script, the workflow can only run within its own repository (or a repository that already has the script).

azure devops release stage bash script

I defined the following stage in my azure devops release:
steps:
  - bash: |
      # Write your commands here
      echo 'Hello world'
      curl -X POST -H "Authorization: Bearer dapiXXXXXXXX" -d @conf/dbfs_api.json https://adb-YYYYYYYY.X.azuredatabricks.net/api/2.0/jobs/create > file.json
    displayName: 'Bash Script'
My repo has a folder called conf with the file dbfs_api.json inside of it. Unfortunately this file is not found during the deployment of this stage and I get the following error:
Couldn't read data from file "D:\a\r1\a/conf/dbfs_api.json", this makes an empty POST.
The release stages of an Azure Pipelines workflow don't perform a checkout by default. You can manually add a checkout task to the release stage to perform a checkout:
A deployment job doesn't automatically clone the source repo. You can checkout the source repo within your job with checkout: self. Deployment jobs only support one checkout step.
jobs:
  - deployment:
    environment: 'prod'
    strategy:
      runOnce:
        deploy:
          steps:
            - checkout: self
Or you can create a pipeline artifact in the build stage and consume the artifact in a later stage.
stages:
  - stage: build
    jobs:
      - job:
        steps:
          - publish: '$(Build.ArtifactStagingDirectory)/scripts'
            artifact: drop
  - stage: test
    dependsOn: build
    jobs:
      - job:
        steps:
          - download: current
            artifact: drop
See:
Use Artifacts across stages
Deployment jobs

using `git commit --no-verify` for pre-commit azure pipeline

I see that I can use pre-commit with pipelines. Is there a way to set up the YAML file for the Azure pipeline to use git commit --no-verify when it fails for specific cases? Or is there a way to troubleshoot the pipeline when the issue occurs?
This is what I have for the YAML file:
pool:
  vmImage: ubuntu-18.04
variables:
  PRE_COMMIT_HOME: $(Pipeline.Workspace)/pre-commit-cache
steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: ${{ parameters.python }}
  - script: |
      echo "##vso[task.setvariable variable=PY]$(python -VV)"
    displayName: set version variables
  - task: CacheBeta@0
    inputs:
      key: pre-commit | .pre-commit-config.yaml | "$(PY)"
      path: $(PRE_COMMIT_HOME)
  - script: python -m pip install --upgrade pre-commit
    displayName: install pre-commit
  - script: pre-commit run --all-files --show-diff-on-failure
    displayName: run pre-commit
Check the documentation here:
Not all hooks are perfect so sometimes you may need to skip execution
of one or more hooks. pre-commit solves this by querying a SKIP
environment variable. The SKIP environment variable is a comma
separated list of hook ids. This allows you to skip a single hook
instead of --no-verifying the entire commit.
$ SKIP=flake8 git commit -m "foo"
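In the Azure Pipelines YAML above, the same idea could look like the step below; flake8 is just the hook id from the docs example, so substitute the ids of the hooks you want to skip:

- script: pre-commit run --all-files --show-diff-on-failure
  displayName: run pre-commit (skipping selected hooks)
  env:
    SKIP: flake8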

How to keep secure files after a job finishes in Azure Devops Pipeline?

Currently I'm working on a pipeline script for Azure Devops. I want to provide a maven settings file as a secure file for the pipeline. The problem is that when I define a job only for providing the file, the file isn't there anymore when the next job starts.
I tried to define a job with a DownloadSecureFile task and a copy command to get the settings file. But when the next job starts the file isn't there anymore and therefore can't be used.
I already checked that by using pwd and ls in the pipeline.
This is part of my current YAML file (that actually works):
some variables
...
trigger:
  branches:
    include:
      - stable
      - master
jobs:
  - job: Latest_Release
    condition: eq(variables['Build.SourceBranchName'], 'master')
    steps:
      - task: DownloadSecureFile@1
        name: settingsxml
        displayName: Download maven settings xml
        inputs:
          secureFile: settings.xml
      - script: |
          cp $(settingsxml.secureFilePath) ./settings.xml
          docker login -u $(AzureRegistryUser) -p $(AzureRegistryPassword) $(AzureRegistryUrl)
          docker build -t $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest) .
          docker push $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest)
....
other jobs
I wanted to put the DownloadSecureFile task and "cp $(settingsxml.secureFilePath) ./settings.xml" into a job of their own, because there are more jobs that need this file for other branches/releases and I don't want to copy the exact same code into all jobs.
This is the YAML file as I wanted it:
some variables
...
trigger:
  branches:
    include:
      - stable
      - master
jobs:
  - job: provide_maven_settings
    # no condition because all branches need the file
    steps:
      - task: DownloadSecureFile@1
        name: settingsxml
        displayName: Download maven settings xml
        inputs:
          secureFile: settings.xml
      - script: |
          cp $(settingsxml.secureFilePath) ./settings.xml
  - job: Latest_Release
    condition: eq(variables['Build.SourceBranchName'], 'master')
    steps:
      - script: |
          docker login -u $(AzureRegistryUser) -p $(AzureRegistryPassword) $(AzureRegistryUrl)
          docker build -t $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest) .
          docker push $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest)
....
other jobs
In my dockerfile the settings file is used like this:
FROM maven:3.6.1-jdk-8-alpine AS MAVEN_TOOL_CHAIN
COPY pom.xml /tmp/
COPY src /tmp/src/
COPY settings.xml /root/.m2/ # can't find file when executing this
WORKDIR /tmp/
RUN mvn install
...
The error happens when docker build is started, because it can't find the settings file. It does find the file, though, when I use my first YAML example. I have a feeling it has something to do with each job having its own "Checkout" phase, but I'm not sure about that.
Each job in Azure DevOps runs on a different agent, so when you use Microsoft-hosted agents and split the pipeline into several jobs, a file you copy in one job will not exist in the next job, which runs on a fresh agent.
You can solve your issue by using a self-hosted agent (then you copy the file to your machine and the second job runs on the same machine).
Or you can upload the file somewhere else (secured) and download it in the second job (but then why not just download the secure file there from the start...).
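If the duplication is the main concern, one sketch of "doing it from the start" in every job without copy-pasting is a step template that each job includes; the file name maven-settings-steps.yml is my own placeholder:

# maven-settings-steps.yml (placeholder name), in the same repo as the pipeline
steps:
  - task: DownloadSecureFile@1
    name: settingsxml
    displayName: Download maven settings xml
    inputs:
      secureFile: settings.xml
  - script: cp $(settingsxml.secureFilePath) ./settings.xml
    displayName: Copy settings.xml next to the Dockerfile

Each job then pulls the steps in before its own work:

jobs:
  - job: Latest_Release
    condition: eq(variables['Build.SourceBranchName'], 'master')
    steps:
      - template: maven-settings-steps.yml
      # ... docker login / build / push steps as before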