Multiple (GitHub) PR checks from a single Azure Pipelines YAML

Maybe I'm missing something obvious, but...
Can I have several (GitHub) PR checks from a single Azure Pipelines YAML?
For example, in this screenshot I have CI connected to Azure Pipelines, where the build and the test run happen within the same check:
Can I somehow separate them so that I have two checks, Build and Running tests, and see them pass/fail separately?

if it's possible to have N checks in a single YAML and have their statuses posted separately
For this question, the answer is yes: you can achieve it with a script-based approach.
There is an existing issue about multiple GitHub checks in which someone had the same problem as you and got a solution; the exact config is given there.
Since the build environment is a shell, you could, for example, wrap your lint commands in a shell script that captures the exit code and posts the status to GitHub:
#!/bin/sh
npm run lint
EXIT_CODE=$?

# Map the lint result to a GitHub commit status state
# (POSIX [ ] instead of [[ ]], since the shebang is /bin/sh)
if [ "$EXIT_CODE" -eq 0 ]; then
  STATUS="success"
else
  STATUS="failure"
fi

GITHUB_TOKEN=<your api token>

# Post the status. The "context" field names the check, so statuses posted
# with different contexts appear as separate checks on the PR. Token auth
# via the access_token query parameter is deprecated; send the token in the
# Authorization header instead.
curl "https://api.github.com/repos/$CI_REPO/statuses/$CI_COMMIT" \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Content-Type: application/json" \
  -X POST \
  -d "{\"state\": \"$STATUS\", \"context\": \"eslint\", \"description\": \"eslint\", \"target_url\": \"$CI_BUILD_URL\"}"
exit $EXIT_CODE
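To get the two separate checks from the question (Build and Running tests), you could post one status per step, each with its own context. A minimal sketch under the same assumptions as above ($CI_REPO, $CI_COMMIT, $CI_BUILD_URL, and $GITHUB_TOKEN set); report_status is a hypothetical helper, not part of any Azure Pipelines API:

#!/bin/sh
# report_status <context> <state>: posts one named check to GitHub
report_status() {
  curl "https://api.github.com/repos/$CI_REPO/statuses/$CI_COMMIT" \
    -H "Authorization: token $GITHUB_TOKEN" \
    -H "Content-Type: application/json" \
    -X POST \
    -d "{\"state\": \"$2\", \"context\": \"$1\", \"target_url\": \"$CI_BUILD_URL\"}"
}

if npm run build; then report_status "Build" "success"; else report_status "Build" "failure"; fi
if npm test; then report_status "Running tests" "success"; else report_status "Running tests" "failure"; fi

Each context then shows up as its own pass/fail entry in the PR's checks list.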

Related

Ansible Tower: Writing job stdout to files

Is there any way in Ansible Tower to create log files for each job's stdout? I am aware that the API below will show me the output.
https://<tower_ip>/api/v2/jobs/<job id>/stdout/
But we also want the output in a separate file for each job. Could you please help me with this?
According to the Tower API Reference Guide (Jobs), it is possible to get formatted, downloadable plain text via a REST API call, for example
curl --silent -u "${TOWER_USER}:${TOWER_PASSWORD}" --location "https://${TOWER_URL}/api/v2/jobs/${JobID}/stdout?format=txt_download" --output "job_${JobID}.log"
which writes the output into a file like job_${JobID}.log.
Thanks to man curl.
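If you want a log file for every job rather than a single job ID, a minimal sketch could loop over the IDs returned by the jobs endpoint. This assumes the same ${TOWER_USER}, ${TOWER_PASSWORD}, and ${TOWER_URL} variables, that jq is installed, and it ignores the API's pagination for brevity:

#!/bin/bash
# Download each job's stdout into its own job_<id>.log file
for JobID in $(curl --silent -u "${TOWER_USER}:${TOWER_PASSWORD}" \
    "https://${TOWER_URL}/api/v2/jobs/" | jq -r '.results[].id'); do
  curl --silent -u "${TOWER_USER}:${TOWER_PASSWORD}" --location \
    "https://${TOWER_URL}/api/v2/jobs/${JobID}/stdout?format=txt_download" \
    --output "job_${JobID}.log"
done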

How to identify that infra already matches the Terraform config

I am creating a pipeline for executing Terraform scripts in Azure DevOps. Instead of running the predefined Terraform tasks (which haven't been adopted in our organization yet), I am planning to run the scripts through the Azure CLI. My question is: is there a way to tell from the terraform plan output that "No changes. Your infrastructure matches the configuration.", so that I don't have to run terraform apply?
I know that terraform apply won't do any harm if the configuration matches, but I am planning to skip that command. Is there a way to check the plan output and make the decision from it through the Azure CLI?
Yes, you can. You should use -detailed-exitcode. Here is an example in bash:
terraform init
terraform plan -out=plan.out -detailed-exitcode
status=$?
echo "The terraform plan exit status: ${status}"

# Run apply only when changes are detected (exit code 2)
if [ $status -eq 2 ]; then
  echo 'Applying terraform changes'
  terraform apply -auto-approve plan.out
elif [ $status -eq 1 ]; then
  exit 1
fi
Here is a link to the documentation:
Returns a detailed exit code when the command exits. When provided, this argument changes the exit codes and their meanings to provide more granular information about what the resulting plan contains:
0 = Succeeded with empty diff (no changes)
1 = Error
2 = Succeeded with non-empty diff (changes present)
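In an Azure DevOps pipeline you can also surface this decision to later steps instead of branching inside one script. A minimal sketch, assuming the code runs in a Bash task and using a hypothetical variable name runApply; the ##vso logging command sets a pipeline variable that subsequent steps in the same job can read:

terraform plan -out=plan.out -detailed-exitcode
status=$?

# Expose the decision as a pipeline variable; a later apply step can then
# be gated with condition: eq(variables['runApply'], 'true')
if [ $status -eq 2 ]; then
  echo "##vso[task.setvariable variable=runApply]true"
elif [ $status -eq 1 ]; then
  exit 1
fi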

Azure DevOps Extension - PublishHTMLReport in Azure Pipelines not producing HTML tab in Azure DevOps

I am having an issue with this extension (Azure DevOps Extension - PublishHTMLReport): somehow the HTML tab is not appearing in Azure DevOps.
Below is the config of this plugin:
htmltype: Jmeter
JmeterReportsPath: D:\a\r1\a\HTMLReports
Prior to the above task, I am using a CMD task as below:
echo 'JMeter'
jmeter -n -t _JmeterTest\JmeterWebApp.jmx -l _JmeterTest\Summary.jtl -e -o HTMLReports
In the log, the HTML content is generated, but the report is not published by the Publish HTML Report extension.
Check if the path of the report is consistent.
In this command line,
jmeter -n -t _JmeterTest\JmeterWebApp.jmx -l _JmeterTest\Summary.jtl -e -o HTMLReports
The output folder should be generated in the directory '$(Build.SourcesDirectory)', which in your case is "D:\a\r1\s\".
However, in the publishhtmlreport task, I noticed that the path you set is "D:\a\r1\a\HTMLReports", which equals "$(Build.ArtifactStagingDirectory)\HTMLReports".
So, please try changing JmeterReportsPath to "$(Build.SourcesDirectory)\HTMLReports" in the publishhtmlreport task to see if it works.
You can also refer to the sample here.
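Alternatively, you could make the two sides consistent the other way around, by pointing JMeter's output at the absolute path the task already reads. A minimal sketch of the CMD task, assuming the pipeline variable is available to cmd as %BUILD_ARTIFACTSTAGINGDIRECTORY% (the environment-variable form of $(Build.ArtifactStagingDirectory)):

rem Write the JMeter dashboard into the artifact staging directory
jmeter -n -t _JmeterTest\JmeterWebApp.jmx -l _JmeterTest\Summary.jtl -e -o %BUILD_ARTIFACTSTAGINGDIRECTORY%\HTMLReports

With that, the existing JmeterReportsPath of D:\a\r1\a\HTMLReports would already match. Either way, the folder JMeter writes to and the folder the task reads from must be the same.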
[UPDATE]
The extension PublishHTMLReports has some issues that prevent the HTML report from being published to Azure Pipelines.
Switching to the Html Viewer extension solves the problem; it works as expected.

How can I make my curl command work in gitlab-ci?

I have a curl command that I use in a gitlab-ci job to upload an artifact to Nexus.
The command is as follows (defined in .gitlab-ci.yml under the script section):
cmd /c curl -v -u $env:USERREG:$env:PASSREG --upload-file $env:BINFILE $env:NEXUS_REGISTRY/$env:REPONAME$env:BINFILE
Of course, all the variables are declared in the .gitlab-ci.yml file, except for USERREG and PASSREG, which I declared using the GitLab GUI.
Notice that I am using:
- a GitLab Runner with the docker-windows executor
- a Windows Docker container to execute the above command
PROBLEM: the job is stuck prompting for the password (PASSREG) of the user (defined by USERREG) until the job is terminated due to a timeout.
How can I fix this problem? Thank you.
I am not sure whether this is your problem, but please check the GitLab variables you set (USERREG and PASSREG). If they are protected variables, they are only available on protected branches, and if you are pushing from a non-protected branch, that could bring you to your current state, because the variables above are not available.
If this is the case, just make them not protected (but masked) and you should be fine.
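As a side note (an assumption about the failure mode, not something confirmed above): curl prompts interactively whenever -u receives a user name without a password, which is exactly what happens if PASSREG expands to nothing (e.g. a protected variable on a non-protected branch, as described above). Quoting and braces also make the user:password boundary unambiguous to PowerShell:

cmd /c curl -v -u "${env:USERREG}:${env:PASSREG}" --upload-file $env:BINFILE "$env:NEXUS_REGISTRY/$env:REPONAME$env:BINFILE"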

Post-commit hook failed (exit code 3) with output

I'm trying to trigger a Jenkins job remotely from a post-commit script. I'm currently committing code through Eclipse Kepler / Subversive / SVNKit Connector.
post-commit script:
if svnlook dirs-changed -r "$REV" "$REPOS" | grep -qEe '^trunk/'; then
  wget --post-data="job=APS-RemoteServerAction&token=SECRET&ACTION=deploy&ASSET_NAME=POST-COMMIT-TEST&DEPLOY_ENV=DEV&REVISION=$REV" "http://my.domain.com:8080/buildByToken/buildWithParameters"
fi
Screenshot of the error in Eclipse (image not included):
Important notes:
The code does get committed properly; the repository browser shows a new revision.
The job runs on Jenkins; the history shows that.
Every time I commit, I get this error message.
I tried adding the --quiet flag, but I got the same exit code.
I'm thinking it's due to wget and posting the values?
Edit #1
I would like to point out that I'm using the Jenkins Build Authorization Token Root Plugin. I switched to a POST instead of a GET (which works) because I will eventually move to HTTPS and want to keep the token out of the URL.
I interpret the error message to mean that wget cannot write a file with the name buildWithParameters in its current directory. Use wget -O - to write the output to stdout.
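Applied to the hook above, that would look like the following sketch; -O - just redirects the response body to stdout, so wget never tries to create a file in the hook's working directory:

wget -O - --post-data="job=APS-RemoteServerAction&token=SECRET&ACTION=deploy&ASSET_NAME=POST-COMMIT-TEST&DEPLOY_ENV=DEV&REVISION=$REV" "http://my.domain.com:8080/buildByToken/buildWithParameters"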
The error is (I think) because wget is trying to download the page to a local directory. You just need to hit the endpoint to make Jenkins build, so I used --spider (doesn't download), --no-proxy (I was sometimes getting cached responses), and -q (no output, because svn will report it):
wget --post-data="job=APS-RemoteServerAction&token=SECRET&ACTION=deploy&ASSET_NAME=POST-COMMIT-TEST&DEPLOY_ENV=DEV&REVISION=$REV" "http://my.domain.com:8080/buildByToken/buildWithParameters" --spider --no-proxy -q