JFrog Pipelines - Kill multiple runs of a pipeline branch via REST

I want to kill hundreds of pipeline runs of a specific pipeline and a specific branch (without deleting any of them). Any idea how I can do this?

This can be done via a script that first lists all the runs of the pipeline with
GET /runs?pipelineIds=&statusCodes=
and then cancels them one by one using:
POST /runs/:runId/cancel
The status codes of incomplete runs are 4000, 4001, 4005, 4015, 4016, and 4022.
Refer to the documentation for more details.
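For example, a rough Python sketch of that loop (the base URL, auth header, pipeline id, and response shape below are assumptions; adjust them to your instance and check the REST API reference):

import requests

# Assumed values - replace with your own environment details.
BASE_URL = "https://mycompany.jfrog.io/pipelines/api/v1"   # hypothetical base URL
TOKEN = "<access-token>"
PIPELINE_ID = 123                                          # hypothetical pipeline id
INCOMPLETE = "4000,4001,4005,4015,4016,4022"               # incomplete-run status codes

headers = {"Authorization": f"Bearer {TOKEN}"}

# List all incomplete runs of the pipeline.
resp = requests.get(
    f"{BASE_URL}/runs",
    params={"pipelineIds": PIPELINE_ID, "statusCodes": INCOMPLETE},
    headers=headers,
)
resp.raise_for_status()

# Cancel them one by one (assumes the response is a JSON array of run objects with an "id" field).
for run in resp.json():
    run_id = run["id"]
    cancel = requests.post(f"{BASE_URL}/runs/{run_id}/cancel", headers=headers)
    print(run_id, cancel.status_code)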

Related

Making a determination whether DB/Server is down before I kick off a pipeline

I want to check whether the database/server is online before I kick off a pipeline. If the database is down, I want to cancel the pipeline processing. I also would like to log the results in a table.
Format (columns): DBName Status Date
If the DB/server is down, then I want to send an email to the concerned team with a formatted table showing which DB/servers are down.
Approach:
Run a query on each of the servers. If there is a result, then format the output as shown above. I am using an ADF pipeline to achieve this. My issue is how to combine the various outputs from the different servers.
For example:
Server1:
DBName: A Status: ONLINE runDate:xx/xx/xxxx
Server2:
DBName: B Status: ONLINE runDate:xx/xx/xxxx
I would like to combine them as follows:
Server DBName Status runDate
1 A ONLINE xx/xx/xxxx
2 B ONLINE xx/xx/xxxx
I would use this to update the logging table as well as in the email if I were to send one out.
Is this possible using pipeline activities, or do I have to use mapping data flows?
I did similar work a few weeks ago. We made an API where we put all the server-related settings or URL endpoints which we need to ping.
You don't need to store the username/password (of SQL Server) at all. When you ping the SQL Server, it will time out if it isn't online. If it's online, it will give you a password-related error. This way you can easily figure out whether it's up and running.
AFAIK, if you are using Azure DevOps, you can use your service account to log into the SQL Server. If you have set up AD to log into DevOps, this can be done in the build script.
Either way, you will be able to tell whether SQL Server is up and running or not.
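As a rough illustration of that check, here is a small Python sketch assuming pyodbc and throwaway credentials (the server names, driver string, and error text are placeholders and may differ in your environment):

import pyodbc

def server_is_online(server, timeout=5):
    """Return True if the SQL Server instance responds at all.

    A connection attempt with dummy credentials either times out
    (server down) or fails quickly with a login error (server up).
    """
    conn_str = (
        "DRIVER={ODBC Driver 17 for SQL Server};"   # assumed driver name
        f"SERVER={server};UID=dummy;PWD=dummy"
    )
    try:
        pyodbc.connect(conn_str, timeout=timeout)
        return True                      # unlikely with dummy credentials, but the server answered
    except pyodbc.Error as exc:
        message = str(exc).lower()
        # A "login failed" error means the server is reachable; a timeout means it is not.
        return "login failed" in message

# Hypothetical server names for illustration.
for server in ["server1.example.com", "server2.example.com"]:
    print(server, "ONLINE" if server_is_online(server) else "OFFLINE")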
You can have all the actions as tasks in a YAML pipeline.
You need something like this:
steps:
  - task: Check database status
    register: result
  - task: Add results to a file
    shell: "echo text >> filename"
  - task: send e-mail
    when: some condition is met
There are several modules to achieve what you need; you just need to find the right ones. You can control the flow of tasks by registering results and using the when clause.

Trigger Prow job reading parameters from the comment with some parameters

GitHub link to the issue that I have raised:
https://github.com/kubernetes/test-infra/issues/25654
We have API tests that are triggered once a user comments /test smoke on a PR, but we want to make this job parameterized, which will help us run these tests with some parameters.
E.g. /test smoke skip
Here we want to utilise the skip keyword in our job and take action accordingly.
This would enable jobs to run on some user-based run-time condition, helping us create fewer jobs.
As of now, I don't see any way to pass a parameter with a PR comment which can be utilised.
Any workaround to execute the job with parameters would be helpful.

Azure DevOps Deploy Release step on Failure

Our structure for a release in Azure DevOps is to:
1) Deploy our app to our DEV environment.
2) Kick off my Selenium (Visual Studio) tests against that environment.
3) If the tests pass, move to our TEST environment.
4) If they fail, hard stop.
We want to add a new piece of functionality that starts the same as above, except instead of a hard stop: 5) if the default step fails, continue to the next step; 6) new detailed testing starts (turns on a screen recorder).
The new detailed step has 'Agent Job' settings/parameters; I have the section "Run this job" set to "Only when a previous job has failed".
My results have been that if the previous/default/basic testing passes, the detailed step is skipped, as expected.
But if the previous step fails, the following new detailed step does not kick off.
Is it possibly because the step is set up so that if it fails it hard-stops and does not even evaluate the next step?
Or is it because the previous step says 'partially succeeded'? Is this basically not seen as a failure?
Yes, this is correct, because failed is equivalent to the eq(variables['Agent.JobStatus'], 'Failed') status, but partially succeeded is eq(variables['Agent.JobStatus'], 'SucceededWithIssues').
Please check here.
You may try a custom condition like:
in(variables['Agent.JobStatus'], 'Failed', 'SucceededWithIssues')
As an addition to the solution: a piece I missed was that on the 'detailed' job, the 'Trigger even when the selected stages partially succeed' option also needed to be checked, as well as applying the solution above for the same step.

Bitbucket Pipeline schedule trigger

I can't see anyone talking about what I'm looking to do. I'm currently running a pipeline on a branch merge within Bitbucket.
branches:
  staging:
    - step:
        name: Clone
        script:
          - echo "Clone all the things!"
What I want to do is, when a branch gets merged into master, trigger an event that will enable the schedule to run the next day.
If there are no changes, I don't want anything to run; however, if there are, I want the schedule to kick in and do its work.
I've read through the Pipeline triggers:
https://support.atlassian.com/bitbucket-cloud/docs/pipeline-triggers/
But I can't see anything there that would allow me to do it. Has anyone done this sort of thing? Is it possible, or am I limited by Bitbucket itself?
I've never done this, but there's an API for creating schedules. I think you would need to determine the date and specify a single cron task, e.g. March 30, 2022 at midnight:
0 0 30 3 * 2022
However, the year is an extension, not a standard cron field; "at" is an alternative that may be available (but is also not standard). It all depends on what Bitbucket allows for the cron schedule, so I think this is not a conclusive answer (it still needs info on how to set up the schedule).
Here are the docs:
https://developer.atlassian.com/bitbucket/api/2/reference/resource/repositories/%7Bworkspace%7D/%7Brepo_slug%7D/pipelines_config/schedules/
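If the API route works for you, a rough Python sketch of creating such a schedule might look like the following (the request body fields are an assumption based on the schedules resource; check the reference above for the exact schema and what cron patterns are accepted):

import requests

# Assumed values - replace with your own.
WORKSPACE = "my-workspace"          # hypothetical workspace
REPO_SLUG = "my-repo"               # hypothetical repository slug
AUTH = ("my-user", "app-password")  # Bitbucket username + app password

url = (
    "https://api.bitbucket.org/2.0/repositories/"
    f"{WORKSPACE}/{REPO_SLUG}/pipelines_config/schedules/"
)

# Body fields are an assumption; consult the schedules resource docs for the exact schema.
payload = {
    "type": "pipeline_schedule",
    "enabled": True,
    "cron_pattern": "0 0 30 3 *",   # midnight on 30 March (standard cron has no year field)
    "target": {
        "type": "pipeline_ref_target",
        "ref_type": "branch",
        "ref_name": "master",
        "selector": {"type": "branches", "pattern": "master"},
    },
}

resp = requests.post(url, json=payload, auth=AUTH)
print(resp.status_code, resp.json())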

Notification when an artifact can't be downloaded

We have a scheduled release in Octopus that redeploys the last known good Prod release back to Prod.
However, this has started failing because the artifact has fallen out of our retention policy - we can fix this by altering the retention policy.
The real issue is that when it failed, no notifications were sent to the team, because artifact collection happens before even the first step.
I have tested this with a dummy release that just has a single basic step and then a Slack notification step for when it fails. However, we never get to the first step - let alone the Slack step.
How can I hook into this failure so that we know about these issues in the future?
You have to follow the steps below to achieve this:
Step 1) Add an Email Template step first, to inform that the build has been triggered.
There is a setting in it called Start Trigger; set it to 'Run in parallel with the previous step' so the email will be triggered while your artifacts are downloading.
Step 2) Add an Email Template step last, to inform that the build failed.
Just change the Run Condition setting; set it to 'Failure: only run when a previous step failed'.
So when your deployment fails, it will send the notification. You can also add the cause of failure to the email body using built-in variables.