Bitbucket Pipeline schedule trigger - deployment

I can't find anyone talking about what I'm looking to do. I'm currently running a pipeline on a branch merge within Bitbucket:
branches:
  staging:
    - step:
        name: Clone
        script:
          - echo "Clone all the things!"
What I want to do is, when a branch gets merged into master, trigger an event that enables the schedule to run the next day.
If there are no changes I don't want anything to run; however, if there are, I want the schedule to kick in and do the work.
I've read through the Pipeline triggers documentation:
https://support.atlassian.com/bitbucket-cloud/docs/pipeline-triggers/
But I can't see anything there that would allow me to do it. Has anyone done this sort of thing? Is it possible, or am I limited by Bitbucket itself?

Never done this, but there's an API for creating schedules. I think you would need to determine the date and specify a single cron task, e.g. March 30, 2022 at midnight:
0 0 30 3 * 2022
However, the year is an extension, not a standard cron field; "at" is an alternative that may be available (but is also not standard). It all depends on what Bitbucket allows for its cron schedules, so I think this is not a conclusive answer (it still needs info on how to set up the schedule).
Here are the docs:
https://developer.atlassian.com/bitbucket/api/2/reference/resource/repositories/%7Bworkspace%7D/%7Brepo_slug%7D/pipelines_config/schedules/
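As a rough illustration, a step in the master pipeline could call that endpoint to create a schedule for the following day. The sketch below is a minimal, untested Python example; the payload field names (cron_pattern, target, selector) are my reading of the schedules reference above and should be verified against it, and the workspace, repo slug and credentials are placeholders.

# Sketch: create a Bitbucket Pipelines schedule for tomorrow via the REST API.
# Field names are assumptions based on the schedules reference linked above.
import datetime
import requests

WORKSPACE = "my-workspace"              # placeholder
REPO_SLUG = "my-repo"                   # placeholder
AUTH = ("bb-user", "app-password")      # placeholder app password credentials

tomorrow = datetime.date.today() + datetime.timedelta(days=1)
# A cron expression for midnight on that specific day of the month;
# check which cron format (and how many fields) Bitbucket actually accepts.
cron = f"0 0 {tomorrow.day} {tomorrow.month} *"

payload = {
    "type": "pipeline_schedule",
    "enabled": True,
    "cron_pattern": cron,
    "target": {
        "type": "pipeline_ref_target",
        "ref_type": "branch",
        "ref_name": "master",
        "selector": {"type": "branches", "pattern": "master"},
    },
}

url = (f"https://api.bitbucket.org/2.0/repositories/"
       f"{WORKSPACE}/{REPO_SLUG}/pipelines_config/schedules")
resp = requests.post(url, json=payload, auth=AUTH)
resp.raise_for_status()
print(resp.json())

A matching DELETE call on the created schedule's UUID could then remove it once it has fired, so it only ever runs once.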

Related

Jfrog-Pipelines - Kill multiple runs of a pipeline branch via REST

I want to kill hundreds of pipeline runs of a specific pipeline and a specific branch (without deleting either of them). Any idea how I can do it?
This can be done via a script that first fetches all the runs of the pipeline with
GET /runs?pipelineIds=&statusCodes=
and then cancels them one by one using:
POST /runs/:runId/cancel
The status codes of incomplete runs are 4000, 4001, 4005, 4015, 4016, 4022.
Refer to the documentation for more details.
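A rough sketch of such a script in Python, assuming a JFrog Pipelines REST base URL and an access token (both placeholders), using the endpoints and status codes quoted above; the field names used for the branch filter and the run id are assumptions and should be checked against the actual runs response:

# Sketch: cancel all incomplete runs of one pipeline on one branch.
# BASE_URL, TOKEN, PIPELINE_ID and BRANCH are placeholders; the "branchName"
# and "id" fields of a run are assumptions to verify against the API response.
import requests

BASE_URL = "https://my-instance.example/pipelines/api/v1"  # placeholder
TOKEN = "ACCESS_TOKEN"                                      # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

PIPELINE_ID = 42          # placeholder
BRANCH = "main"           # placeholder
INCOMPLETE = "4000,4001,4005,4015,4016,4022"  # status codes of incomplete runs

# 1. Fetch all incomplete runs of the pipeline.
runs = requests.get(
    f"{BASE_URL}/runs",
    params={"pipelineIds": PIPELINE_ID, "statusCodes": INCOMPLETE},
    headers=HEADERS,
).json()

# 2. Cancel them one by one, keeping only the requested branch.
for run in runs:
    if run.get("branchName") == BRANCH:
        requests.post(f"{BASE_URL}/runs/{run['id']}/cancel", headers=HEADERS)
        print(f"cancelled run {run['id']}")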

Jenkinsfile - how to access other github files?

I'm performing an API call in my Jenkinsfile that requires specifying a path to file 'A'. Assuming file A is located in the same repo, I am not sure how to refer to file A when running the Jenkinsfile.
I feel like this has been done before, but I can't find any resource. Any help is appreciated.
You don't say whether you are using a scripted or declarative Jenkinsfile, as the details differ; however, the principle is the same in either case. Basically, to do anything with a file you will need to be within a node clause: essentially the controller opens a session on one of the agents and performs actions there. You need to check out your repo on that node.
A scripted Jenkinsfile would look something like this (assuming you are not bothered about which node you are running on):
node("") {
checkout scm // "scm" equates to the configuration that the job was run with
// the whole repo will be now available
}

How do I know the total time I used to run workflows in GitHub Actions?

Each time I create a PR or make commits, I have some workflows running.
But since I have a private repo and I only get 2000 min/month for running workflows on GitHub Actions, I wanted to track the time used. How do I know how much total time I have used out of the 2000 free minutes that GitHub provides?
Is there a place in the GitHub UI where you can see the total time used / total time remaining?
Once you are logged in to GitHub, you can view the GitHub Actions minutes usage for your account at https://github.com/settings/billing, under GitHub Actions.
This is documented in the GitHub help too.
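If you want the same numbers programmatically, the REST billing endpoint for a user account can be queried. A minimal sketch in Python; the username and token are placeholders, and the token needs permission to read the account's plan/billing information:

# Sketch: read GitHub Actions minutes usage for a user account via the REST API.
# USERNAME and TOKEN are placeholders.
import requests

USERNAME = "octocat"   # placeholder
TOKEN = "ghp_xxx"      # placeholder personal access token

resp = requests.get(
    f"https://api.github.com/users/{USERNAME}/settings/billing/actions",
    headers={
        "Authorization": f"token {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
)
resp.raise_for_status()
usage = resp.json()
print("minutes used:", usage["total_minutes_used"])
print("included minutes:", usage["included_minutes"])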
The best you can get is the view in the main Actions tab.
Sadly, no simple per-month sum or anything like that has been added as of yet.
The next best thing you could try is to whip up a script that collects these values from the page's DOM for you.
You could use the GitHub CLI to do this:
createdAt=$(gh -R ${GITHUB_REPOSITORY} run list \
--json databaseId,createdAt --jq ".[]|select(.databaseId==${{ github.run_id }})|.createdAt")
usedSec=$(( `date +%s` - `date -d "$createdAt" +%s` ))
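If you also want the billable time (rather than wall-clock time) of a run, the workflow run timing endpoint reports billable milliseconds per runner OS. A small Python sketch; OWNER, REPO, RUN_ID and TOKEN are placeholders:

# Sketch: billable time of a single workflow run via the REST API.
# OWNER, REPO, RUN_ID and TOKEN are placeholders.
import requests

OWNER, REPO, RUN_ID = "octocat", "hello-world", 123456789
TOKEN = "ghp_xxx"  # placeholder personal access token

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/actions/runs/{RUN_ID}/timing",
    headers={"Authorization": f"token {TOKEN}"},
)
resp.raise_for_status()
timing = resp.json()
for runner_os, usage in timing.get("billable", {}).items():
    print(runner_os, usage["total_ms"] / 60000, "billable minutes")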

GitHub Actions: Are there security concerns using an external action in a workflow job?

I have a workflow that FTPs files by using an external action from someuser:
- name: ftp deploy
  uses: someuser/ftp-action@master
  with:
    config: ${{ secrets.FTP_CONFIG }}
Is this a security concern? For example, could someuser change ftp-action@master to access my secrets.FTP_CONFIG? Should I copy/paste their action into my workflow instead?
If you use ftp-action@master, then every time your workflow runs it will fetch the master branch of the action and build it. So yes, I believe it would be possible for the owner to change the code to capture secrets and send them to an external server under their control.
What you can do to avoid this is use a specific version of the action and review its code. You can use a commit hash to refer to the exact version you want, such as ftp-action@efa82c9e876708f2fedf821563680e2058330de3. You could use a tag if the action has release tags, e.g. ftp-action@v1.0.0.
Although this is maybe not as secure, because tags can be changed.
Alternatively, and probably most secure, is to fork the action repository and reference your own copy of it: my-fork/ftp-action@master.
The GitHub help page does mention:
Anyone with write access to a repository can read and use secrets.
If someuser does not have write access to the repository, there should be no security issue.
You should also specify the exact commit of the action you are using, in order to make sure it does not change its behavior without your knowledge.

Support for multiple repositories using Buildbot

Currently Buildbot does not support multiple repositories. If one desires to have this, then separate instances of Buildbot need to be run.
Still, I'm curious whether anyone has come up with a creative workaround to get this feature working anyway.
Update
This answer received a few downvotes recently; please note that it applies to the releases of buildbot that were published/used around the end of 2012 and beginning of 2013, and may not be applicable to later versions.
Original Answer
As @Macke said, buildbot (>= 0.8.x) supports multiple projects/repositories. This is done with configuration like the following:
# Imports (buildbot 0.8.x module paths)
from buildbot.changes.gitpoller import GitPoller
from buildbot.changes.filter import ChangeFilter
from buildbot.schedulers.basic import SingleBranchScheduler

# Set configuration to watch the Git repository for possible
# changes. When a change does occur the schedulers will be
# notified with the project data (TestProj).
c['change_source'] = []
c['change_source'].append(
    GitPoller(
        repourl='git://github.com/SO/my_test_project.git',
        project='TestProj',
        branch='master',
        workdir='/home/buildmaster/repos/TestProj'
    )
)

# Set the scheduler to run on each change, but only for the project
# specified above via the project information.
c['schedulers'] = []
c['schedulers'].append(
    SingleBranchScheduler(
        name="TestProj-master",
        builderNames=['TestProj-master-builder'],
        change_filter=ChangeFilter(
            project='TestProj',
            branch='master'
        )
    )
)
You can see that the project parameter in the change source is then used again in the scheduler's change_filter property to ensure that the scheduler only responds to that particular change source. This allows you to configure multiple change sources and multiple schedulers responding to explicitly chosen change sources.
Since the 0.8.7p1 release, buildbot supports multiple codebases.
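A rough sketch of what a multi-codebase configuration can look like; repository URLs, codebase names and builder names are placeholders, and the exact scheduler/source-step arguments should be checked against the multi-codebase docs:

# Sketch: one scheduler tracking two codebases (buildbot >= 0.8.7p1).
# Repository URLs, codebase names and builder names are placeholders.
from buildbot.process.factory import BuildFactory
from buildbot.schedulers.basic import SingleBranchScheduler
from buildbot.steps.source.git import Git

all_codebases = {
    'app': {'repository': 'git://example.com/app.git',
            'branch': 'master', 'revision': None},
    'lib': {'repository': 'git://example.com/lib.git',
            'branch': 'master', 'revision': None},
}

# Each source step names the codebase it checks out.
factory = BuildFactory()
factory.addStep(Git(repourl='git://example.com/app.git', codebase='app',
                    workdir='app', mode='incremental'))
factory.addStep(Git(repourl='git://example.com/lib.git', codebase='lib',
                    workdir='lib', mode='incremental'))
# (the factory is attached to 'app-and-lib-builder' with a BuilderConfig as usual)

c['schedulers'] = [
    SingleBranchScheduler(
        name="app-and-lib",
        codebases=all_codebases,   # one build gets a revision for each codebase
        builderNames=['app-and-lib-builder'],
    )
]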
Indeed, I don't get the reason why you say that it does not support multiple repositories: you can create a poller for each repository and multiple schedulers that listen to the different pollers and run the builds for many different repositories (either on the same machine where the master runs, or on a dedicated slave on a different box).
You want to avoid having multiple instances, but, for example, a master and a slave can coexist on the same machine, even if it is a pain to start and stop them in the right order; otherwise you get conflict errors :)
> Currently Buildbot does not support multiple repositories.
I don't really understand the question, sorry. Do you mean that you have to run multiple master servers? That is actually what the buildbot devs advise, but the opposite works for me: you can have, in the same master.cfg, multiple slaves (columns in the waterfall) and for each of them a BuildFactory with different first steps of the type Git(repourl=...) and/or Mercurial(repourl=...), etc.
Each will clone/pull from a different repository, and you can even add more checkouts that are needed in subsequent steps (using Maven or your SCM client directly). The only issue with having a single master.cfg file is that all builders will have only one method of getting notified of changes; we use, for example, PBChangeSource() (the master is notified by remote code and has nothing to do itself). If, for instance, you have an SCM with good PBChangeSource support (e.g. svn, hg, git) and another with bad support (e.g. MKS), then you would need two master instances to cope with that.
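For what it's worth, here is a minimal sketch of such a single master.cfg with two builders, each cloning a different repository; repository URLs, builder names and slave names are placeholders:

# Sketch: one master.cfg with two builders, each pulling a different repository.
# Repository URLs, builder names and slave names are placeholders.
from buildbot.config import BuilderConfig
from buildbot.process.factory import BuildFactory
from buildbot.steps.shell import ShellCommand
from buildbot.steps.source.git import Git

proj_a = BuildFactory()
proj_a.addStep(Git(repourl='git://example.com/proj-a.git', mode='incremental'))
proj_a.addStep(ShellCommand(command=['make', 'test']))

proj_b = BuildFactory()
proj_b.addStep(Git(repourl='git://example.com/proj-b.git', mode='incremental'))
proj_b.addStep(ShellCommand(command=['make', 'test']))

c['builders'] = [
    BuilderConfig(name='proj-a-builder', slavenames=['slave1'], factory=proj_a),
    BuilderConfig(name='proj-b-builder', slavenames=['slave1'], factory=proj_b),
]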
Hope it'll help.