Duplicate builds leading to wrong commit status in GitHub

This issue is also described in https://issues.jenkins.io/browse/JENKINS-70459
When using Jenkins, we noticed that the wrong pipeline status is often reported in GitHub PRs.
Further investigation revealed very odd behavior; we have not yet found the cause of this problem (random?).
The 'Details' link leads to the build, which is successful.
Now comes the odd part: the Jenkins log shows that the same build ID was built twice!
First, it runs successfully (trigger: PR update). Here is an excerpt from the log:
{
   build_number: 2
   build_url: job/(...)/PR-2906/2/
   event_tag: job_event
   job_duration: 1108.635
   job_name: (...)/PR-2906
   job_result: SUCCESS
   job_started_at: 2023-01-19T14:41:14Z
   job_type: Pipeline
   label: master
   metadata: { ... }
   node: (built-in)
   queue_id: 1781283
   queue_time: 5.063
   scm: git
   test_summary: { ... }
   trigger_by: Pull request #2906 updated
   type: completed
   upstream:
   user: anonymous
}
Then, another run under the exact same build ID/URL appears in the log:
{
   build_number: 2
   build_url: job/(...)/PR-2906/2/
   event_tag: job_event
   job_duration: 1.959
   job_name: (...)/PR-2906
   job_result: FAILURE
   job_started_at: 2023-01-20T07:14:50Z
   job_type: Pipeline
   label: master
   node: (built-in)
   queue_id: 2261495
   queue_time: 7.613
   test_summary: { ... }
   trigger_by: Branch indexing
   type: completed
   upstream:
   user: anonymous
}
Notice that the trigger is now "Branch indexing". We do not know why this build happens, but it is likely the root cause of this issue.
The failed build is not displayed in the Jenkins UI, and the script console also returns #2 as the last successful build. We assume that this "corrupt" build is the one reported to GitHub. Does anyone have any idea how this may happen? Any ideas are very welcome!
We checked our logs and tried to reproduce this behaviour, without success so far.

Are you using the Multibranch Pipeline plugin?
By default, Jenkins will not automatically re-index the repository for branch additions or deletions (unless using an Organization Folder), so it is often useful to configure a Multibranch Pipeline to periodically re-index in the configuration
Source: https://www.jenkins.io/doc/book/pipeline/multibranch/
Maybe this can also help: What are "Branch indexing" activities in Jenkins BlueOcean
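If branch indexing turns out to be the unwanted trigger, one possible mitigation is to stop indexing scans from scheduling builds. A minimal sketch, assuming a Declarative Jenkinsfile in a Multibranch Pipeline (verify it fits your setup before relying on it):

pipeline {
    agent any
    options {
        // Do not schedule this job when a branch-indexing scan runs;
        // builds triggered by PR webhook events are unaffected.
        overrideIndexTriggers(false)
    }
    stages {
        stage('Build') {
            steps {
                echo 'build steps here'
            }
        }
    }
}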

Related

Can APIM policy fragments be imported/exported?

I've read the documentation, and while the policy fragment idea seems good for code reuse, the system doesn't seem to provide a way to deploy fragments in an automated way.
I've even exported the entire configuration of the APIM to Git and could not find my policy fragment.
It seems to be a very recent feature. We had the same problem, and as a first approach we decided to use Terraform to deploy policy fragments from the dev environment to the staging and production environments.
https://learn.microsoft.com/es-mx/azure/templates/microsoft.apimanagement/2021-12-01-preview/service/policyfragments?pivots=deployment-language-terraform
$computer> cat main.tf
terraform {
  required_providers {
    azapi = {
      source = "azure/azapi"
    }
  }
}
provider "azapi" {
}
resource "azapi_resource" "symbolicname" {
  type = "Microsoft.ApiManagement/service/policyFragments#2021-12-01-preview"
  name = “fragmentpolicyname”
  parent_id = "/subscriptions/[subscriptionid]/resourceGroups/[resourcegroupname]/providers/Microsoft.ApiManagement/service/[apimanagementservicename]”
  body = jsonencode({
    properties = {
      description = “fragment policy description”
      format = "xml" # it could also be rawxml
      value = <<EOF
<!--
    IMPORTANT:
    - Policy fragment are included as-is whenever they are referenced.
    - If using variables. Ensure they are setup before use.
    - Copy and paste your code here or simply start coding
 -->
 <fragment>
        //some magical code here that you will use in a lot of policies
 </fragment>
EOF
    }
  })
}
terraform init
terraform plan
terraform apply
You can integrate this step into your Azure DevOps pipeline.
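As an illustration, a minimal sketch of such a pipeline step (the working directory and credential wiring are assumptions; adapt them to your repo):

# Azure DevOps step that applies the policy fragment via Terraform.
# Assumes main.tf lives in terraform/ and that credentials for the azapi
# provider are supplied to the job (e.g. as environment variables).
steps:
- script: |
    terraform init
    terraform apply -auto-approve
  workingDirectory: terraform/
  displayName: 'Deploy APIM policy fragment'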

Azure DevOps REST API - Run pipeline with variables

I have a pipeline on Azure DevOps that I'm trying to run programmatically/headless using the REST API: https://learn.microsoft.com/en-us/rest/api/azure/devops/pipelines/runs/run%20pipeline?view=azure-devops-rest-6.0
So far so good; I can auth and start a run. I would like to pass data to this pipeline, which the docs suggest is possible using variables in the request body. My request body:
{
    "variables": {
        "HELLO_WORLD": {
            "isSecret": false,
            "value": "HelloWorldValue"
        }
    }
}
My pipeline YAML looks like this:
trigger: none
pr: none

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      KEY=$(HELLO_WORLD)
      echo "Hello world key: " $KEY
This, however, gives me the error "HELLO_WORLD: command not found".
I have tried adding a "HELLO_WORLD" variable to the pipeline and enabling the "Let users override this value when running this pipeline" setting. This results in the HELLO_WORLD variable no longer being unknown, but instead it is stuck on its initial value and is not set when I trigger a run with the REST API.
How do you pass variables to a pipeline using the REST API? It is important that the variable value is set only for a specific run/build.
I found another API to run a build, but it seems you cannot use Personal Access Token auth with it like you can with the Pipelines API - only OAuth2: https://learn.microsoft.com/en-us/rest/api/azure/devops/build/builds/queue?view=azure-devops-rest-6.0
You can do it with both the Runs API and the Build Queue API; both work with Personal Access Tokens. For which one is better/preferred, see this question: Difference between Azure Devops Builds - Queue vs run pipeline REST APIs. In short, the Runs API is the more future-proof option.
Option 1: Runs API
POST https://dev.azure.com/{{organization}}/{{project}}/_apis/pipelines/{{PipelineId}}/runs?api-version=6.0-preview.1
Your body will be of type application/json (the HTTP header Content-Type is set to application/json) and similar to the below; just replace resources.repositories.self.refName with the appropriate value:
{
    "resources": {
        "repositories": {
            "self": {
                "refName": "refs/heads/main"
            }
        }
    },
    "variables": {
        "HELLO_WORLD": {
            "isSecret": false,
            "value": "HelloWorldValue"
        }
    }
}
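For illustration, the same request as a curl call (a sketch: the organization, project, pipeline id 42, and the PAT environment variable are placeholder values):

# Queue a run with a run-scoped variable; the PAT is sent via basic auth
# (the username part can be any string).
curl -u "user:$AZURE_DEVOPS_PAT" \
  -H "Content-Type: application/json" \
  -d '{"variables": {"HELLO_WORLD": {"isSecret": false, "value": "HelloWorldValue"}}}' \
  "https://dev.azure.com/myorg/myproject/_apis/pipelines/42/runs?api-version=6.0-preview.1"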
Option 2: Build API
POST https://dev.azure.com/{{organization}}/{{project}}/_apis/build/builds?api-version=6.0
Your body will be of type application/json (the HTTP header Content-Type is set to application/json), something similar to the below; just replace definition.id and sourceBranch with appropriate values. Please also note the "stringified" content of the parameters section (it should be a string representation of a JSON map):
{
    "parameters": "{\"HELLO_WORLD\":\"HelloWorldValue\"}",
    "definition": {
        "id": 1
    },
    "sourceBranch": "refs/heads/main"
}
Here's the way I solved it...
The REST call:
POST https://dev.azure.com/<myOrg>/<myProject>/_apis/pipelines/17/runs?api-version=6.0-preview.1
 
The body of the request:
{
    "resources": {
        "repositories": {
            "self": {
                "refName": "refs/heads/main"
            }
        }
    },
    "templateParameters": {
        "A_Parameter": "And now for something completely different."
    }
}
Note: I added an authorization header with basic auth containing a username (any name will do) and a password (your PAT token value). I also added a Content-Type: application/json header.
 
Here's the entire YAML pipeline I used:
 
parameters:
- name: A_Parameter
  displayName: A parameter
  default: noValue
  type: string
 
trigger:
- none
 
pool:
  vmImage: ubuntu-latest
 
steps:
 
- script: |
    echo '1 - using dollar sign parens, p dot A_Parameter is now: ' $(parameters.A_Parameter)
    echo '2 - using dollar sign double curly braces, p dot A_Parameter is now::' ${{ parameters.A_Parameter }} '::'
    echo '3 - using dollar sign and only the var name: ' $(A_Parameter)
  displayName: 'Run a multi-line script'
 
 
And here's the output from the pipeline log. Note that only the second way properly displayed the value: ${{ parameters.A_Parameter }} is a template expression expanded at compile time, whereas $(...) macro syntax only resolves pipeline variables at runtime, and template parameters are not variables.
 
1 - using dollar sign parens, p dot A_Parameter is now: 
2 - using dollar sign double curly braces, p dot A_Parameter is now:: And now for something completely different. :: 
3 - using dollar sign and only the var name:
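If you also need the value through macro syntax (for example in task inputs that only expand variables), one common pattern is to map the parameter into a variable first; a minimal sketch:

# Bridge the compile-time parameter into a runtime variable so that
# $(A_Parameter_Var) macro syntax resolves too.
variables:
  A_Parameter_Var: ${{ parameters.A_Parameter }}

steps:
- script: echo "via variable: $(A_Parameter_Var)"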

Publish Nunit Test Results in Post Always Section

I'm trying to run a pipeline that does some Pester testing and publishes the NUnit results.
New tests were introduced, and for whatever reason, Jenkins no longer publishes the test results and errors out immediately after the PowerShell script. Hence, it doesn't get to the NUnit publish step. I receive this:
ERROR: script returned exit code 128
Finished: FAILURE
I've been trying to include the publish step in the always block of the post section of the Jenkinsfile; however, I'm running into problems with how to make that NUnit test file available.
I've tried establishing an agent and unstashing the file (even though it probably won't be stashed if the PowerShell script cancels the whole pipeline). When I use agent I get the following exception:
java.lang.NoSuchMethodError: No such DSL method 'agent' found among steps
Here is the Jenkinsfile:
pipeline {
    agent none
    environment {
        svcpath = 'D:\\svc\\'
        unitTestFile = 'UnitTests.xml'
    }
    stages {
        stage ('Checkout and Stash') {
            agent { label 'Agent1' }
            steps {
                stash name: 'Modules', includes: 'Modules/*/**'
                stash name: 'Tests', includes: 'Tests/*/**'
            }
        }
        stage ('Unit Tests') {
            agent { label 'Agent1' }
            steps {
                dir(svcpath + 'Modules\\') { deleteDir() }
                dir(svcpath + 'Tests\\') { deleteDir() }
                dir(svcpath) {
                    unstash name: 'Modules'
                    unstash name: 'Tests'
                }
                dir(svcpath + 'Tests\\') {
                    powershell """
                        \$requiredCoverageThreshold = 0.90
                        \$modules = Get-ChildItem ../Modules/ -File -Recurse -Include *.psm1
                        \$result = Invoke-Pester -CodeCoverage \$modules -PassThru -OutputFile ${unitTestFile} -OutputFormat NUnitXml
                        \$codeCoverage = \$result.CodeCoverage.NumberOfCommandsExecuted / \$result.CodeCoverage.NumberOfCommandsAnalyzed
                        Write-Output \$codeCoverage
                        if (\$codeCoverage -lt \$requiredCoverageThreshold) {
                            Write-Output "Build failed: required code coverage threshold of \$(\$requiredCoverageThreshold * 100)% not met. Current coverage: \$(\$codeCoverage * 100)%."
                            exit 1
                        } else {
                            Write-Output "Required code coverage threshold of \$(\$requiredCoverageThreshold * 100)% met. Current coverage: \$(\$codeCoverage * 100)%."
                        }
                    """
                    stash name: 'TestResults', includes: unitTestFile
                    nunit testResultsPattern: unitTestFile
                }
            }
        }
    }
    post {
        always {
            echo 'This will always run'
            agent { label 'Agent1' }
            unstash name: 'TestResults'
            nunit testResultsPattern: unitTestFile
        }
        success {
            echo 'This will run only if successful'
        }
        failure {
            echo 'This will run only if failed'
        }
        unstable {
            echo 'This will run only if the run was marked as unstable'
        }
        changed {
            echo 'This will run only if the state of the Pipeline has changed'
            echo 'For example, if the Pipeline was previously failing but is now successful'
        }
    }
}
Any and all input is welcome! Thanks!
The exception you are getting is due to Jenkins' strict pipeline DSL. Documentation of the allowable uses of agent is here.
Currently, agent {...} is not allowed in the post section; maybe this will change in the future. If you require the whole job to run on the node that serves label 'Agent1', the only way to do that currently is to:
1. Put agent {label 'Agent1'} immediately under pipeline { to make it global.
2. Remove all instances of agent {label 'Agent1'} from each stage.
3. Remove the agent {label 'Agent1'} from the post section.
The post section acts more like traditional scripted DSL than the declarative pipeline DSL, so you have to use node() instead of agent.
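A minimal sketch of that workaround, reusing the stash name and label from the question:

post {
    always {
        script {
            // node() is a scripted-pipeline step, so it works here where the
            // declarative agent {} directive is not allowed.
            node('Agent1') {
                unstash name: 'TestResults'
                nunit testResultsPattern: unitTestFile
            }
        }
    }
}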
I believe I've had this same question myself, and this SO post has the answer and some good context.
This Jenkins issue isn't exactly the same thing but shows the node syntax in the post stage.

Having trouble getting usable results from Watson's Document Conversion service

When I try to convert this document
https://public.dhe.ibm.com/common/ssi/ecm/po/en/poq12347usen/POQ12347USEN.PDF
with Watson's Document Conversion service, all I get is four answer units, one for each level-4 heading. What I really need is 47 answer units, one for each FAQ question. How can I achieve this?
Often a custom configuration can be used to produce more usable results in the case of a document such as this one.
The custom configuration can be passed to Document Conversion in a config form part on the request.
Please refer to the documentation (https://www.ibm.com/watson/developercloud/doc/document-conversion/customizing.shtml) for more details on the options available. In this particular case, the following seems to give improved results:
{
  "conversion_target": "ANSWER_UNITS",
  "pdf": {
    "heading": {
      "fonts": [
        {"level": 1, "min_size": 14, "max_size": 80},
        {"level": 2, "min_size": 11, "max_size": 12, "bold": true},
        {"level": 3, "min_size": 9, "max_size": 11, "bold": true}
      ]
    }
  }
}
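For reference, a custom configuration like the above is passed on the conversion request roughly as follows (a sketch: credentials and the version date are placeholders, and the endpoint may differ for your service instance):

# Convert the PDF using the custom configuration stored in config.json.
curl -X POST -u "username:password" \
  -F "config=@config.json;type=application/json" \
  -F "file=@POQ12347USEN.PDF;type=application/pdf" \
  "https://gateway.watsonplatform.net/document-conversion/api/v1/convert_document?version=2015-12-15"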

How to run continuous integration in parallel across multiple Pull Requests?

I am testing use of Jenkins with the GitHub pull request builder plugin. I have successfully set up a toy project on GitHub and a dev installation of Jenkins so that raising a PR, or pushing changes to a PR branch, triggers a build. Mostly this works as required; a few things don't match our preferred workflow, but the freedom from having to write and maintain our own plugin is a big deal.
I have one potential showstopper. The plugin queues up all pushes in all PRs it sees, and only ever seems to run a single job at a time, even with spare executors available. In the real world project, we may have 10 active PRs, each may get a few pushed updates in a day in response to QC comments, and the full CI run takes > 30 mins. However, we do have enough build executors provisioned to run multiple jobs at the same time.
I cannot see any way to configure the PR request builder to process multiple jobs at once on the same trigger, but I may be missing something basic elsewhere in Jenkins. Is there a way to do this, without needing to customise the plugin?
I have installed Jenkins ver. 1.649 on a new Ubuntu 14.04 server (on a VirtualBox guest) and followed the README in the ghprb plugin (currently version 1.30.5), including setting up a jenkins "bot" account on Github as a collaborator to run all the integration API calls to Github.
I was wondering what the behaviour would be if I cloned the job (create new item and "Copy existing item") and may try that next, but I expect that will result in the same job being run multiple times for no benefit, as opposed to interacting smartly with other jobs polling the same pool of PRs.
I have found the config setting whilst exploring more for the question.
It is really easy when you know which config item it is, but Jenkins has a lot of configuration to work through, especially when you are exploring the plugins.
The key thing is that the option to serve queued jobs in parallel (available executors allowing) is core Jenkins config, and not part of the GitHub PR builder.
So, just check the option Execute concurrent builds if necessary. This option is found at the bottom of the first, untitled section of config; it is a really basic Jenkins option that a newbie like me missed due to the mountain of other options.
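If you configure jobs through the Job DSL plugin rather than the UI, the same checkbox corresponds, as far as I know, to:

// Job DSL equivalent of ticking "Execute concurrent builds if necessary".
job('my-pr-builder') {
    concurrentBuild(true)
    // ...rest of the job configuration...
}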
Maybe it is too late to answer this question, but after a few days of research I figured out a way to create multiple jobs, one per PR, in GitHub.
The code I am showing here applies to GitHub Enterprise, but it works well enough for regular GitHub (or Bitbucket) as well, with a few tweaks to the URL and git commands.
The mainline repository against which the PRs are created needs to have a file, which I call PRJob.groovy, containing:
import groovy.json.JsonSlurper

gitUrl = GIT_URL
repoRestUrl = "${GITHUB_WEB_URL}/repos/${project}/${repo}"

def getJSON(url) {
    def conn = (HttpURLConnection) new URL(url).openConnection()
    conn.setRequestProperty("Authorization", "token ${OAUTH_TOKEN}")
    return new JsonSlurper().parse(new InputStreamReader(conn.getInputStream()))
}

def createPipeline(name, description, branch, prId) {
    return pipelineJob(name) {
        delegate.description description
        if (ENABLE_TRIGGERS == 'true') {
            triggers {
                cron 'H H/8 * * *'
                scm 'H/5 * * * *'
            }
        }
        quietPeriod(60)
        environmentVariables {
            env 'BRANCH_NAME', branch
            env 'PULL_REQUEST', prId
            env 'GITHUB_WEB_URL', GITHUB_WEB_URL
            env 'OAUTH_TOKEN', OAUTH_TOKEN
            env 'PROJECT', project
            env 'REPO', repo
        }
        definition {
            cpsScm {
                scriptPath "Jenkinsfile"
                scm {
                    git {
                        remote {
                            credentials "jenkins-ssh-key"
                            delegate.url gitUrl
                            if (prId != "") {
                                refspec "+refs/pull/${prId}/*:refs/remotes/origin/pr/${prId}/*"
                            }
                        }
                        delegate.branch branch
                    }
                }
            }
        }
    }
}

def createPRJobs() {
    def prs = getJSON("${repoRestUrl}/pulls?state=open")
    if (prs.size() == 0) {
        def mergedPrs = getJSON("${repoRestUrl}/pulls?state=closed")
        if (mergedPrs.size() == 0) {
            throw new RuntimeException("No pull-requests found; auth token has likely expired")
        }
    }
    prs.each { pr ->
        def id = pr.get("number")
        def title = pr.get("title")
        def fromRef = pr.get("head")
        def fromBranchName = fromRef.get("ref")
        def prRepo = fromRef.get("repo")
        def repoName = prRepo.get("name")
        def prHref = pr.get("url")
        createPipeline("${repo}-PR-${id}-${fromBranchName}",
                "${prHref} Pull Request ${id}: ${title}", "origin/pr/${id}/head", id)
    }
}

createPRJobs()
This creates one Jenkins job per PR. (The script presumably runs from a Job DSL seed job, which supplies GIT_URL, GITHUB_WEB_URL, OAUTH_TOKEN, ENABLE_TRIGGERS, project, and repo as parameters.)
It relies on the project having a Jenkinsfile that can be picked up to run a pipeline job. A sample Jenkinsfile looks like this:
// Jenkinsfile for building and creating jobs
commitId = null
repoRestUrl = "${GITHUB_WEB_URL}/repos/${PROJECT}/${REPO}"

try {
    stage('Install and Tests') {
        runTest("Hello")
    }
    notify_github 'success'
} catch (Exception e) {
    notify_github 'failure'
    print e
    throw e
}

def runTest(String someDummyVariable) {
    node {
        checkout scm
        sh 'git clean -qdf'
        if (env.PULL_REQUEST == "") {
            sh 'git rev-parse --verify HEAD > commit.txt'
        } else {
            // We check out the PR after it is merged with master, but we need to
            // report the result against the commit before the merge
            sh "git rev-parse refs/remotes/origin/pr/${env.PULL_REQUEST}/head^{commit} > commit.txt"
        }
        commitId = readFile 'commit.txt'
        echo commitId
        sh 'rm -f commit.txt'
        // Here goes your code for doing anything
        sh 'echo "Hello World!!!!!"'
    }
}

def http_post(url, rawJson) {
    def conn = (HttpURLConnection) new URL(url).openConnection()
    conn.setRequestProperty("Authorization", "token ${OAUTH_TOKEN}")
    conn.doOutput = true
    conn.requestMethod = "POST"
    conn.setRequestProperty("Content-Type", "application/json")
    def wr = new OutputStreamWriter(conn.getOutputStream())
    wr.write(rawJson)
    wr.close()
    def code = conn.getResponseCode()
    if (code < 200 || code >= 300) {
        println 'Failed to post to ' + url
        def es = conn.getErrorStream()
        if (es != null) {
            println es.getText()
        }
    }
}

def notify_github(state) {
    http_post(
        "${repoRestUrl}/statuses/${commitId}",
        """
        { "state": "${state}",
          "target_url": "${env.BUILD_URL}",
          "description": "Build Pipeline",
          "context": "Build Pipeline"
        }
        """
    )
}
Hope this helps someone.