I'm migrating a pipeline from CircleCI to GitHub Actions, and I find it a bit odd that jobs can only be defined where they run. In CircleCI you can create a job once, then call it from the workflow section as many times as you like, without duplicating the commands/scripts in that job.
My pipeline pushes code out to three environments, then runs a Lighthouse scan against each of them. In CircleCI I have one job to push the code to my envs and one job to run Lighthouse. From my workflow section, I just call each job three times, passing the env as a parameter. Am I missing something, or is there no way to do this in GitHub Actions? Do I just have to write out my commands three times in each job?
There are three main approaches to reusing code in GitHub Actions:
Reusing workflows
The obvious option is the "reusable workflows" feature, which allows you to extract some steps into a separate "reusable" workflow and call that workflow as a job from other workflows. A sketch follows the takeaways below.
Takeaways:
Reusable workflows can't call other reusable workflows. (No longer true since August 2022; see the correction below.)
The strategy property is not supported in any job that calls a reusable workflow. (Also lifted in August 2022: reusable workflows can now be called from a matrix.)
Env variables and secrets are not inherited (though secrets can now be forwarded with secrets: inherit).
It's not convenient if you need to extract and reuse several steps inside one job.
Since it runs as a separate job, you have to use build artifacts to share files between a reusable workflow and your main workflow.
You can call a reusable workflow synchronously or asynchronously (managed by ordering jobs with the needs key).
A reusable workflow can define outputs that extract outputs/outcomes from executed steps. They can be easily used to pass data to the "main" workflow.
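For the asker's scenario, a minimal sketch of a parameterized reusable workflow might look like this (file names, the deploy script, and the secret name are hypothetical):

```yaml
# .github/workflows/deploy.yml — the reusable workflow
name: Deploy
on:
  workflow_call:
    inputs:
      environment:
        required: true
        type: string
    secrets:
      deploy_token:
        required: true
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh "${{ inputs.environment }}"  # hypothetical script
        env:
          DEPLOY_TOKEN: ${{ secrets.deploy_token }}

# .github/workflows/main.yml — calls it once per environment
name: Main
on: push
jobs:
  deploy-staging:
    uses: ./.github/workflows/deploy.yml
    with:
      environment: staging
    secrets:
      deploy_token: ${{ secrets.DEPLOY_TOKEN }}
  deploy-production:
    needs: deploy-staging  # synchronous ordering via needs
    uses: ./.github/workflows/deploy.yml
    with:
      environment: production
    secrets:
      deploy_token: ${{ secrets.DEPLOY_TOKEN }}
```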
Dispatched workflows
Another possibility GitHub gives us is the workflow_dispatch event, which can trigger a workflow run. Simply put, you can trigger a workflow manually or through the GitHub API and provide its inputs.
There are actions available on the Marketplace which allow you to trigger a "dispatched" workflow as a step of "main" workflow.
Some of them also allow doing it synchronously (waiting until the dispatched workflow finishes). It's worth noting that this feature is implemented by polling the statuses of the repo's workflow runs, which is not very reliable, especially in a concurrent environment. It is also subject to GitHub API usage limits and therefore learns the status of the dispatched workflow with a delay. A sketch of the dispatch side follows the takeaways below.
Takeaways
You can have multiple nested calls, triggering a workflow from another triggered workflow. If done carelessly, this can lead to an infinite loop.
You need a special token with "workflows" permission; your usual secrets.GITHUB_TOKEN doesn't allow you to dispatch a workflow.
You can trigger multiple dispatched workflows inside one job.
There is no easy way to get some data back from dispatched workflows to the main one.
Works better in "fire and forget" scenarios; waiting for a dispatched workflow to finish has some limitations.
You can observe dispatched workflow runs and cancel them manually.
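To illustrate, a minimal sketch of a dispatchable workflow and two ways to trigger it (file name, input, and repo are hypothetical; triggering from inside another workflow requires a PAT with the workflow scope, as noted above):

```yaml
# .github/workflows/lighthouse.yml
name: Lighthouse
on:
  workflow_dispatch:
    inputs:
      environment:
        description: Target environment
        required: true
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Scanning ${{ github.event.inputs.environment }}"
```

```
# via the GitHub CLI
gh workflow run lighthouse.yml -f environment=staging

# via the REST API
curl -X POST \
  -H "Authorization: Bearer $TOKEN" \
  https://api.github.com/repos/OWNER/REPO/actions/workflows/lighthouse.yml/dispatches \
  -d '{"ref":"main","inputs":{"environment":"staging"}}'
```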
Composite Actions
In this approach we extract steps into a distinct composite action, which can live in the same or in a separate repository.
From your "main" workflow it looks like an ordinary action (a single step), but internally it consists of multiple steps, each of which can call its own actions. A sketch follows the takeaways below.
Takeaways:
Supports nesting: each step of a composite action can use another composite action.
Poor visualisation of internal step runs: in the "main" workflow it is displayed as a single step run. You can find details of the internal steps' execution in the raw logs, but it doesn't look very friendly.
Shares environment variables with its parent job, but doesn't share secrets; these must be passed explicitly via inputs.
Supports inputs and outputs. Outputs are prepared from outputs/outcomes of internal steps and can be easily used to pass data from composite action to the "main" workflow.
A composite action runs inside the job of the "main" workflow. Since they share a common file system, there is no need to use build artifacts to transfer files from the composite action to the "main" workflow.
You can't use the continue-on-error option inside a composite action.
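A minimal sketch, assuming a hypothetical local action checked into the same repository:

```yaml
# .github/actions/lighthouse/action.yml
name: Lighthouse scan
inputs:
  url:
    description: URL to scan
    required: true
runs:
  using: composite
  steps:
    # run steps in a composite action must declare an explicit shell
    - run: npx lighthouse "${{ inputs.url }}" --output html
      shell: bash

# usage in the "main" workflow
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4  # needed before referencing a local action
      - uses: ./.github/actions/lighthouse
        with:
          url: https://staging.example.com
```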
Source: my "DRY: reusing code in GitHub Actions" article
I'm currently in the exact same boat and just found an answer. You're looking for a Composite Action, as suggested in this answer.
Reusable workflows can't call other reusable workflows.
Actually, they can, since Aug. 2022:
GitHub Actions: Improvements to reusable workflows
Reusable workflows can now be called from a matrix and other reusable workflows.
You can now nest up to 4 levels of reusable workflows giving you greater flexibility and better code reuse.
Calling a reusable workflow from a matrix allows you to create richer parameterized builds and deployments.
Learn more about nesting reusable workflows.
Learn more about using reusable workflows with the matrix strategy.
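Combined with the matrix support, this answers the original question directly: define the deploy/scan steps once in a reusable workflow and fan it out over environments. A sketch (the workflow path and environment names are hypothetical):

```yaml
jobs:
  deploy:
    strategy:
      matrix:
        environment: [dev, staging, production]
    uses: ./.github/workflows/deploy.yml
    with:
      environment: ${{ matrix.environment }}
```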
Related
I'm evaluating temporal.io as a modern workflow-as-code alternative to BPMN-based solutions such as Camunda.
In my scenario the workflow orchestrates activity workers, which call external microservices for business transactions. Business transactions may encounter business exceptions or require human action for the flow to proceed, raising the required user tasks. The workflow should block at certain points until there are no blocking tasks for that specific activity.
Should the blocking-task logic reside inside the activities and services, keeping the workflow definition more abstract and deterministic? I presume an activity should simply throw a runtime exception when there is a blocking task; is that right? Then, how do I continue the workflow when the task is completed?
Or should I use workflow signals to mimic BPMN user tasks, and if so, how do I send a signal from an external service to a specific workflow instance?
Probably the easiest way is to have an activity that notifies the external system responsible for interacting with human actors, and then use signals to notify the workflow of the completion of the human decision.
With Temporal you can write workflow code that waits for multiple signals in case there are multiple actors/decisions involved.
Other options could be to store a list of tasks in an external system and notify your workflow from that system directly (again via signals), or to have a workflow per "approver" that holds its list of assigned tasks; you could query the state of those workflows, or have them send notifications when all tasks have been performed.
how do I send a signal from an external service to a specific workflow instance?
You would use the Temporal SDK client API to send a signal to a workflow execution that's uniquely identified by its namespace and workflow ID (plus, optionally, a run ID). Not sure which programming language you are using, but maybe this Go sample can help.
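As a minimal sketch in Go (the workflow ID, signal name, and payload are hypothetical; worker registration is omitted):

```go
package main

import (
	"context"
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/workflow"
)

// Inside the workflow: block until the human decision arrives as a signal.
func ApprovalWorkflow(ctx workflow.Context) error {
	// ...execute an activity here that creates the user task in the external system...
	var decision string
	workflow.GetSignalChannel(ctx, "human-decision").Receive(ctx, &decision)
	workflow.GetLogger(ctx).Info("received decision", "decision", decision)
	return nil
}

// From the external service: address the workflow execution by its ID.
func main() {
	c, err := client.Dial(client.Options{}) // connects to localhost:7233 by default
	if err != nil {
		log.Fatalln(err)
	}
	defer c.Close()
	// An empty run ID targets the current run of this workflow ID.
	err = c.SignalWorkflow(context.Background(), "order-123", "", "human-decision", "approved")
	if err != nil {
		log.Fatalln(err)
	}
}
```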
As the title says: both ways seem like they should work, but a child workflow seems easier.
It's strongly recommended to always use an activity to start the new workflow, and never use a child workflow, until the reset feature works with child workflows: https://github.com/uber/cadence/issues/3914
https://github.com/temporalio/temporal/issues/3141
To get a result back to the parent from the started workflow, use a signal. To link the two workflows, use search attributes when starting new workflows.
As Quanzheng said, if you need to use Reset, then Child Workflows are not currently an option.
Apart from that issue, the semantics of Child Workflows are quite different from starting a new workflow via an Activity.
The primary differences are that:
By default, Terminations and Cancellations are propagated to Child Workflows, although this can be overridden at Child Workflow creation time (see the sketch after this list). This behavior is possible to implement with co-equal Workflows, but it requires a careful Workflow implementation that never terminates without terminating its children.
Waiting for a Child Workflow to complete is directly supported in the Temporal API, whereas waiting for an arbitrary Workflow is not. See this issue.
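A sketch of both points in Go (the child workflow and its ID are hypothetical):

```go
package main

import (
	enumspb "go.temporal.io/api/enums/v1"
	"go.temporal.io/sdk/workflow"
)

// Hypothetical child standing in for the real work.
func ChildWorkflow(ctx workflow.Context) (string, error) {
	return "done", nil
}

func ParentWorkflow(ctx workflow.Context) error {
	ctx = workflow.WithChildOptions(ctx, workflow.ChildWorkflowOptions{
		WorkflowID: "child-1",
		// Override the default propagation: the child survives the parent's close.
		ParentClosePolicy: enumspb.PARENT_CLOSE_POLICY_ABANDON,
	})
	var result string
	// Waiting for the child is directly supported via the returned future.
	return workflow.ExecuteChildWorkflow(ctx, ChildWorkflow).Get(ctx, &result)
}
```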
Whether you need either of those capabilities, and whether you use Reset, should tell you whether Child Workflows are appropriate for your use case.
I am looking into Dagger, the CI/CD toolkit.
I understand that in a Dagger pipeline, when multiple actions are executed in succession and depend on one another, the ordering is expressed by wiring the dependent actions' inputs to other actions' outputs.
This much is clear from the samples and explanations on the official website.
https://docs.dagger.io/1221/action/#composite-actions
So if I want to execute actions one after another when they don't depend on each other, how do I set that up?
Thanks.
Is it possible to instruct BuildBot to execute build steps in parallel?
I've been looking through documentation and it only seems to be possible by actually generating multiple builds / build factories.
I'm not entirely sure about builders and workers: I've seen that adding workers lets me run multiple build requests simultaneously (multiple programmers submitting PRs), but multiple builders don't seem to be intended for anything like this.
So, is it possible?
You can have multiple builders executing simultaneously, for example if they listen for incoming commits on the same repository; a single commit will start all listening builders. In this scenario you can control parallelism using BuilderConfig's canStartBuild argument. And beware: make sure the builders work on separate resources!
Alternatively, if you trigger multiple builders from a single builder and specify waitForFinish=False, the triggered builders will run simultaneously (see the sketch below).
I believe you cannot execute build steps in parallel within a single builder. Regarding the workers, I can't tell you.
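A sketch of that trigger approach in master.cfg (scheduler and builder names are hypothetical; c is the standard BuildmasterConfig dict):

```python
from buildbot.plugins import schedulers, steps, util

# Triggerable schedulers, one per builder to fan out to.
c['schedulers'] = [
    schedulers.Triggerable(name="build-a", builderNames=["builder-a"]),
    schedulers.Triggerable(name="build-b", builderNames=["builder-b"]),
]

# The "parent" builder's factory: fire both and continue without blocking.
parent_factory = util.BuildFactory()
parent_factory.addStep(steps.Trigger(
    schedulerNames=["build-a", "build-b"],
    waitForFinish=False,
))
```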
I'm looking into using Quartz Scheduler, and I was wondering whether it's possible to schedule jobs not by time, but by the completion of another job. So when Job A is done, it starts Job B; when that's done, it starts Job C, and so on.
Job A -> Job B -> Job C -> Job A... continuously.
Is this the right tool for the job? Or should I be looking into something else?
Check out JobChainingJobListener, built into Quartz:
Keeps a collection of mappings of which Job to trigger after the completion of a given job. If this listener is notified of a job completing that has a mapping, then it will then attempt to trigger the follow-up job. This achieves "job chaining", or a "poor man's workflow".
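A sketch of the circular chain from the question, using that listener (the job classes and keys are hypothetical):

```java
import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;
import org.quartz.impl.matchers.EverythingMatcher;
import org.quartz.listeners.JobChainingJobListener;

public class ChainExample {
    // Minimal no-op jobs standing in for the real work.
    public static class JobA implements Job {
        public void execute(JobExecutionContext ctx) { System.out.println("A"); }
    }
    public static class JobB implements Job {
        public void execute(JobExecutionContext ctx) { System.out.println("B"); }
    }
    public static class JobC implements Job {
        public void execute(JobExecutionContext ctx) { System.out.println("C"); }
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        JobKey a = new JobKey("jobA", "chain");
        JobKey b = new JobKey("jobB", "chain");
        JobKey c = new JobKey("jobC", "chain");

        // A -> B -> C -> A, as in the question.
        JobChainingJobListener chain = new JobChainingJobListener("chain");
        chain.addJobChainLink(a, b);
        chain.addJobChainLink(b, c);
        chain.addJobChainLink(c, a);
        scheduler.getListenerManager().addJobListener(chain, EverythingMatcher.allJobs());

        // Chained jobs have no triggers of their own, so store them durably.
        scheduler.addJob(JobBuilder.newJob(JobA.class).withIdentity(a).storeDurably().build(), true);
        scheduler.addJob(JobBuilder.newJob(JobB.class).withIdentity(b).storeDurably().build(), true);
        scheduler.addJob(JobBuilder.newJob(JobC.class).withIdentity(c).storeDurably().build(), true);

        scheduler.start();
        scheduler.triggerJob(a); // kick off the loop
    }
}
```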
That's right, you are looking for a process or workflow engine. Have a look at Activiti or jBPM.
You may want to check out the QuartzDesk project, which I have been involved in. QuartzDesk is a management and monitoring platform for Quartz-based apps, and in version 2.0 we added a new job-chaining engine to the platform.
The engine allows you to orchestrate the execution of your jobs without modifying your application code in any way. Job chains can be dynamically updated through the QuartzDesk GUI without any disruption to your application.