My query is for the example given here: https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#concurrency
Example: Using a fallback value
I understand how concurrency works, but this example of using a fallback value is not clear to me.
If you build the group name with a property that is only defined for specific events, you can use a fallback value. For example, github.head_ref is only defined on pull_request events. If your workflow responds to other events in addition to pull_request events, you will need to provide a fallback to avoid a syntax error. The following concurrency group cancels in-progress jobs or runs on pull_request events only; if github.head_ref is undefined, the concurrency group will fallback to the run ID, which is guaranteed to be both unique and defined for the run.
concurrency:
  group: ${{ github.head_ref || github.run_id }}
  cancel-in-progress: true
Can someone please explain this part "if github.head_ref is undefined"?
In this expression:
github.head_ref || github.run_id
github.head_ref may not be available.
According to the github context documentation:
github.head_ref
"The head_ref or source branch of the pull request in a workflow run. This property is only available when the event that triggers a workflow run is either pull_request or pull_request_target."
So, github.head_ref is only defined for pull_request or pull_request_target events; for any other event it is undefined, and the expression falls back to github.run_id.
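To make this concrete, here is a minimal workflow sketch (the trigger combination and the job are illustrative, not from the docs example). On a pull_request run, github.head_ref is the PR's source branch, so all runs for the same PR share one group and the older in-progress run gets cancelled. On a push run, github.head_ref is empty, so the expression falls back to github.run_id and every run gets its own unique group, meaning nothing is cancelled:

name: ci
on: [push, pull_request]

concurrency:
  # pull_request: group = source branch, so a newer run for the same PR
  # cancels the older in-progress one.
  # push: head_ref is empty, so the unique run ID is used and each run
  # lands in its own group.
  group: ${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: echo "building"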
I have a pipeline in Azure DevOps which requires the parameter libName. The pipeline starts like this:
name: ${{ parameters.libName }}
trigger: none
pr: none
When I start the pipeline manually, a dialog box appears where I can provide the libName. So far so good.
However, I want to trigger this pipeline when I create a Pull Request. In that case I receive an error
A value for the 'libName' parameter must be provided.
Is there a way to provide this parameter when I create the PR?
It is currently not supported to pass a parameter along with a Pull Request.
If you want to define a specific value for the parameter, you must trigger the pipeline manually and enter the value.
A PR trigger cannot set the parameter value for you, so try adding a default value for it.
To fix this error:
A value for the 'libName' parameter must be provided.
You could provide a default value for your parameter. For example:
parameters:
  - name: libName
    type: string
    default: 'some library'
Refer to this official doc for details: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/runtime-parameters?view=azure-devops&tabs=script
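Putting the suggestion together, a minimal sketch of the pipeline header (the default value is illustrative, and runtime parameters usually declare a type):

parameters:
  - name: libName
    type: string
    default: 'someDefaultLib'   # illustrative; used when no value is supplied

name: ${{ parameters.libName }}
trigger: none
pr: none

With a default in place, PR-triggered runs no longer fail with the missing-parameter error, while manually queued runs can still override the value in the dialog.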
I am developing a GitHub App using Node.js and the Probot framework. I can see that the Application class (https://probot.github.io/api/latest/classes/application.html) of the Probot framework contains events like:
> event: "pull_request" | "pull_request.assigned" |
> "pull_request.closed" | "pull_request.edited" | "pull_request.labeled"
> | "pull_request.opened" | "pull_request.reopened" |
> "pull_request.review_request_removed" |
> "pull_request.review_requested" | "pull_request.unassigned" |
> "pull_request.unlabeled" | "pull_request.synchronize
I have noticed that when the "Create pull request" button is clicked, both the pull_request and the pull_request.opened events are fired.
To understand this behavior of firing multiple seemingly similar events from a single click, I reopened a closed pull request and printed the Context object for both the pull_request event and the pull_request.reopened event.
A diff comparison of the two contexts shows that they are identical, except that the context of the pull_request event contains the following additional properties:
merged: false,
mergeable: null,
rebaseable: null,
mergeable_state: 'unknown',
merged_by: null,
comments: 6,
review_comments: 0,
maintainer_can_modify: false,
commits: 1,
additions: 1,
deletions: 0,
changed_files: 1 },
repository:
{ id: 123456789,
node_id: '',
name: '',
full_name: '',
private: true,
owner: [Object],
html_url: 'some-url-here'
.
.
///////////////////--------many more urls-------//////////////////////
created_at: '2020-04-0',
updated_at: '2020-04-0',
We know that the general format of the returned context object is as follows:
Context {
name: 'pull_request',
id: '187128937812-8219-89891892133-16752-1234576765545',
payload:
{ action: 'reopened',
number: 1,
pull_request:
{ url:
.
.
.and so on.......
The above information is present in both contexts. It also tells us which specific action was performed, via context.payload.action. So, if someone needs to get hold of pull_request.opened, they could do it by using just the pull_request event, as follows:
app.on('pull_request', async context => {
  // context.payload.action identifies the specific action, e.g. 'opened'
  if (context.payload.action === 'opened') {
    console.log('---------------------- on pull_request event, action "opened"')
    console.log('Context returned :----', context)
  }
})
and wouldn't need to care about the other, more specific events (here pull_request.opened); that is, beyond what the above code achieves, the code below provides no real additional help:
app.on('pull_request.opened', async context => {
  console.log('---------------------- on pull_request.opened')
  console.log('Context returned :----', context)
})
So here is the question that's troubling me:
What is the purpose of the pull_request event, if its more specific forms (like pull_request.reopened) carry no different information (more precisely, if their contexts contain no different information)?
I am quite sure that there does lie some wisdom behind it. I'm not able to find anything on the internet, nothing in the docs that could explain this.
Please help me understand the hidden wisdom.
EDIT 1:
Forgot to mention one observation: reopening the pull request also triggers the issue_comment.created event. So three events are triggered by a single click.
EDIT 2:
> What is the purpose of the pull_request event, if its more specific forms (like pull_request.reopened) carry no different information (more precisely, if their contexts contain no different information)?
This is just a feature of Probot to simplify processing webhook events from GitHub. I'll try and explain why it's helpful.
If you were to consume webhook events without Probot, you would have to parse every pull_request event, check the action field, and decide whether to handle it.
There are several events that have a top-level action field in the payload, including:
check_run
issue
project
pull_request
and many more in the docs...
Rather than make application developers perform this parsing and inspection of the JSON themselves, Probot simplifies the callbacks so you can subscribe to webhooks using the specific [event].[action] pattern, and the framework takes care of invoking your callback when a matching event and action are received.
So you have two options for handling pull_request events:
if you don't know which actions you need, or need to process events dynamically, subscribing to pull_request is how you receive all pull request events
if you know which actions you should handle, and can ignore the rest, subscribing to the explicit pull_request.[action] form should simplify your application code
You could also subscribe to *, which represents all events the probot app receives, rather than explicitly listing all supported events in your app.
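As a sketch (the log text is illustrative), the wildcard subscription looks like this, with context.name and context.payload.action telling you what actually arrived:

app.on('*', async context => {
  // context.name is the event, e.g. 'pull_request';
  // context.payload.action is the specific action, e.g. 'opened' (when present)
  const action = context.payload.action ? `.${context.payload.action}` : ''
  console.log(`received ${context.name}${action}`)
})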
We have a lot of triggered Tasks that run on the same pipelines, but with different parameters.
My question: is there a way, like a function or an expression, to capture the name of the triggered task so that we can use that information in the reports and e-mails that state which task started the failing pipeline? I can't find anything even close to it.
Thank you in advance.
This answer addresses the requirement of uniquely identifying the invoker task in the invoked pipeline.
For triggered tasks, there isn't anything provided out of the box by SnapLogic. However, in the case of Ultra tasks, you can get $['task_name'] from the input to the pipeline.
Out of the box, SnapLogic provides the following headers that can be captured and used in the pipeline being initiated by the triggered task. These are as follows.
PATH_INFO - The path elements after the Task part of the URL.
REMOTE_USER - The name of the user that invoked this request.
REMOTE_ADDR - The IP address of the host that invoked this request.
REQUEST_METHOD - The method used to invoke this request.
None of these contains the task-name.
In your case, as a workaround, to uniquely identify the invoker task in the invoked pipeline you could do one of the following three things.
Pass the task-name as a parameter
Add the task-name in the URL like https://elastic.snaplogic.com/.../task-name
Add a custom header from the REST call
All three of the above methods can help you capture the task-name in the invoked pipeline.
In your case, I would suggest you go for a custom header because the parameters you pass in the pipeline could be task-specific and it is redundant to add the task-name again in the URL.
Following is how you can add a custom header in your triggered task.
From SnapLogic Docs -
Custom Headers
To pass a custom HTTP header, specify a header and its value through the parameter fields in Pipeline Settings. The request matches any header with Pipeline arguments and passes those to the Task, while the Authorization header is never passed to the Pipeline.
Guidelines
The header must be capitalized in its entirety. Headers are case-sensitive.
Hyphens must be changed to underscores.
The HTTP custom headers override both the Task and Pipeline parameters, but the query string parameter has the highest precedence.
For example, if you want to pass a tenant ID (X-TENANT-ID) in a header, add the parameter X_TENANT_ID and provide a default or leave it blank. When you configure the expression, refer to the Pipeline argument following standard convention: _X_TENANT_ID. In the HTTP request, you add the header X-TENANT-ID: 123abc, which results in the value 123abc being substituted for the Pipeline argument X_TENANT_ID.
(Screenshots in the original post show: creating a task-name parameter in the pipeline settings, using the task-name parameter in the pipeline, and calling the triggered task.)
Note: Hyphens must be changed to underscores.
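For illustration, here is a sketch of such a call in Python (the task URL, token, and the X-TASK-NAME header are hypothetical; per the guidelines above, the matching Pipeline parameter would be X_TASK_NAME, referenced in the pipeline as _X_TASK_NAME):

import requests

# Hypothetical triggered-task URL; anything after the task name follows PATH_INFO rules.
TASK_URL = "https://elastic.snaplogic.com/api/1/rest/slsched/feed/MyOrg/MyProject/my-task"

resp = requests.get(
    TASK_URL,
    headers={
        "Authorization": "Bearer <task-token>",  # placeholder token
        "X-TASK-NAME": "my-task",  # surfaces as Pipeline argument X_TASK_NAME
    },
)
print(resp.status_code)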
References:
SnapLogic Docs - Triggered Tasks
I'm adding this as a separate answer because it addresses the specific requirement of logging an executed triggered task separately from the pipeline. This solution has to be a separate process (or pipeline), instead of being part of the triggered pipeline itself.
The Pipeline Monitoring API doesn't have any explicit log entry for the task name of a triggered task; invoker is what you have to use.
However, the main API used by SnapLogic to populate the Dashboard is more verbose. You can see what its response looks like by inspecting the Dashboard's network calls in Google Chrome Developer Tools.
You can use the invoker_name and pipe_invoker fields for identifying a triggered task.
Following is the API that is being used.
POST https://elastic.snaplogic.com/api/2/<org snode id>/rest/pm/runtime
Body:
{
    "state": "Completed,Stopped,Failed,Queued,Started,Prepared,Stopping,Failing,Suspending,Suspended,Resuming",
    "offset": 0,
    "limit": 25,
    "include_subpipelines": false,
    "sort": {
        "create_time": -1
    },
    "start_ts": null,
    "end_ts": null,
    "last_hours": 1
}
You can have a pipeline that periodically fires this API, then parses the response and populates a log table (or creates a log file).
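As a sketch, such a polling job's logic could look like this in Python (the URL and credentials are placeholders; the response_map/entries shape is an assumption based on what the Dashboard request shows in Developer Tools):

import requests

RUNTIME_URL = "https://elastic.snaplogic.com/api/2/<org snode id>/rest/pm/runtime"
body = {
    "state": "Completed,Stopped,Failed",
    "offset": 0,
    "limit": 25,
    "include_subpipelines": False,
    "sort": {"create_time": -1},
    "last_hours": 1,
}

resp = requests.post(RUNTIME_URL, json=body, auth=("<user>", "<password>"))
resp.raise_for_status()

for entry in resp.json().get("response_map", {}).get("entries", []):
    # invoker_name / pipe_invoker identify runs started by a triggered task
    print(entry.get("pipe_name"), entry.get("invoker_name"), entry.get("pipe_invoker"))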
So, as part of some tests to automatically accept/merge successful pipelines in our Git repository, I was trying to set the "merge when pipeline succeeds" flag while the pipeline is still running.
The button for this is available while the pipeline is still running, and it converts to a green 'Accept merge' button when the pipeline succeeds (the screenshot was taken afterwards, so as not to confuse the use case).
Additionally, I have set the relevant general settings for the project.
When checking the GitLab API documentation, it says I should use the following endpoint:
PUT /projects/:id/merge_requests/:merge_request_iid/merge
When called with the parameter ?merge_when_pipeline_succeeds=true, it should set the flag.
However, when I call this endpoint while the pipeline is still running (I built in a 10-minute wait while testing this), I get a 405 Method Not Allowed response. My assumption is that the endpoint I am using is correct, because otherwise I would have gotten a Bad Request / Not Found return code.
When checking the merge request in GitLab, I can see that the flag is indeed not set to true.
However, when I manually click the blue button, the merge request does show the flag as set.
Also, if I let the pipeline finish and then call the merge API (with or without the merge-when-pipeline-succeeds flag), it accepts the merge. It just does not work while the pipeline is running (which is odd, because the button itself only shows while the pipeline is running).
So I am wondering what I am doing wrong here.
I am using a PowerShell module to call the GitLab API. The accept part of the module is not official; I wrote it myself because I found this feature missing.
I am using the same credentials for the API with a personal access token. Other functionality of the API works with this token, such as creating merge requests, retrieving the status of a current MR, and accepting an MR when the pipeline is finished.
I have tried the following variants:
Use the v3 API with merge_when_build_succeeds=true --> nets the same result
Uncheck the "Only allow merge requests to be merged if the pipeline succeeds" setting --> nets the same result
Use the ID of the merge request instead of the IID
Use /merge_when_pipeline_succeeds instead of ?merge_when_pipeline_succeeds=true
Use True instead of true --> nets the same result
I get a similar issue with the python-gitlab library on v4. It works sometimes when I use:
mr.merge(merge_when_pipeline_succeeds=True)
where mr is a ProjectMergeRequest object. However, if the MR has a merge conflict, I get that 405 Method Not Allowed error back.
My best guess is to see if I can apply logic before calling mr.merge() to check for problems. Will update this if that works.
UPDATE: Looks like there is no feature to check for conflicts as of today. https://gitlab.com/gitlab-org/gitlab-ce/issues/41762
UPDATE 2: You can check merge_status when looking at the MR information, so either that attribute or catching the exception when mr.merge() fails would let you identify when it won't work.
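Putting both updates together, a sketch of that pre-check with python-gitlab (server, project, and MR number are placeholders; as far as I can tell, the 405 from the merge endpoint surfaces as GitlabMRClosedError):

import gitlab

gl = gitlab.Gitlab("https://gitlab.example.com", private_token="<token>")
mr = gl.projects.get("mygroup/myproject").mergerequests.get(42)

if mr.merge_status == "cannot_be_merged":
    # known conflict: calling merge() would just return 405
    print("MR has conflicts, skipping merge call")
else:
    try:
        mr.merge(merge_when_pipeline_succeeds=True)
    except gitlab.exceptions.GitlabMRClosedError as err:
        # raised on the 405 Method Not Allowed from the merge endpoint
        print(f"merge rejected: {err}")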
I need to check via GitHub API if a pull request passed all required status checks. I use GitHub Enterprise 2.8 at this moment.
I know that I can get all status checks for last commit (following statuses_url in pull request). However, I don't know which status checks are set up to be required in a given repository. This is my main problem.
I also need to aggregate these status checks, group them by context, and take the latest in each context. That's doable, but it seems to be a reimplementation of the logic GitHub performs internally when it decides whether a pull request can be merged.
For my case, it would be ideal to have something like can_be_merged among the pull request fields, whose meaning would be mergeable && all required status checks passed && approved, but as far as I know there is no such field.
Finally solved this! You actually need to get the information off the protected branch, not off the check itself. Here are some API details: https://developer.github.com/v3/repos/branches/#list-required-status-checks-contexts-of-protected-branch.
So the flow to solve this is:
Check if the base branch for the PR is protected; if so,
Use the above endpoint to determine which checks are required;
Compare the checks on the latest PR commit to the required checks determined in step 2.
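Here is a minimal sketch of those three steps against the REST API (owner, repo, and PR number are placeholders; on GitHub Enterprise the base URL would be https://<host>/api/v3). Note that the branch-protection endpoint returns 404 for an unprotected branch, which takes care of step 1:

import requests

API = "https://api.github.com"  # GHE: https://<host>/api/v3
HEADERS = {"Authorization": "token <personal-access-token>"}

pr = requests.get(f"{API}/repos/octo/demo/pulls/17", headers=HEADERS).json()
base, sha = pr["base"]["ref"], pr["head"]["sha"]

# Steps 1 + 2: required status check contexts of the (protected) base branch;
# a 404 here means the branch is not protected, i.e. no required checks.
required = requests.get(
    f"{API}/repos/octo/demo/branches/{base}/protection/required_status_checks/contexts",
    headers=HEADERS,
)
contexts = required.json() if required.ok else []

# Step 3: the combined status already keeps only the latest status per context
combined = requests.get(f"{API}/repos/octo/demo/commits/{sha}/status", headers=HEADERS).json()
passed = {s["context"] for s in combined["statuses"] if s["state"] == "success"}

print(all(c in passed for c in contexts))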
Based on #moustachio's answer, if you are using the v4 GraphQL API, you may use
pullRequest(...) {
  baseRef {
    refUpdateRule {
      requiredStatusCheckContexts
    }
  }
}
to get the same info without an additional API request.
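For completeness, a hypothetical full form of that query (owner, name, and number are placeholders):

query {
  repository(owner: "octo", name: "demo") {
    pullRequest(number: 17) {
      baseRef {
        refUpdateRule {
          requiredStatusCheckContexts
        }
      }
    }
  }
}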