How to run actions without dependencies in sequence with Dagger

I am looking into Dagger, the CI/CD kit.
I understand that in a Dagger pipeline, when multiple actions depend on each other, they are executed in sequence: the dependency is expressed by wiring one action's outputs into another action's inputs, and that wiring determines the order.
This can be understood from the samples and explanations on the official website.
https://docs.dagger.io/1221/action/#composite-actions
So if I want to run actions that do not depend on each other one after another, how can I set that up?
Thanks.

Related

Should I use child workflow or use activity to start new workflow

Like the title. Seems like both ways should work but child workflow seems easier.
It's strongly recommended to always use an activity to start a new workflow, and never to use ChildWorkflow until the reset feature works with Child Workflows: https://github.com/uber/cadence/issues/3914
https://github.com/temporalio/temporal/issues/3141
To get a result back to the parent from the child workflow, use a signal. To link the two workflows, use search attributes when starting the new workflow.
As Quanzheng said, if you need to use Reset, then Child Workflows are not currently an option.
Apart from that issue, the semantics of Child Workflows are quite different from starting a new workflow via an Activity.
The primary differences are that:
By default, Terminations and Cancellations are propagated to Child Workflows, although this can be overridden at Child Workflow creation time. This behavior is possible to implement with co-equal Workflows, but requires a careful Workflow implementation which never terminates without terminating its children.
Waiting for a Child Workflow to complete is directly supported in the Temporal API, whereas waiting for an arbitrary Workflow is not. See this issue.
Whether you need either of those capabilities, and whether you use Reset, should tell you if Child Workflows are appropriate to your use-case.
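For illustration, a hedged Go SDK sketch of the Child Workflow side of that trade-off (workflow names and inputs are placeholders): the parent overrides the default propagation with a ParentClosePolicy and waits for the child's result, which is what the API supports directly.

```go
package sample

import (
	enumspb "go.temporal.io/api/enums/v1"
	"go.temporal.io/sdk/workflow"
)

// ChildWorkflow is a stand-in child workflow definition.
func ChildWorkflow(ctx workflow.Context, input string) (string, error) {
	return "done: " + input, nil
}

func ParentWorkflow(ctx workflow.Context, input string) (string, error) {
	// By default, Termination/Cancellation of the parent propagates to the child;
	// PARENT_CLOSE_POLICY_ABANDON overrides that at Child Workflow creation time.
	ctx = workflow.WithChildOptions(ctx, workflow.ChildWorkflowOptions{
		ParentClosePolicy: enumspb.PARENT_CLOSE_POLICY_ABANDON,
	})

	// Waiting for a Child Workflow to complete is directly supported.
	var result string
	if err := workflow.ExecuteChildWorkflow(ctx, ChildWorkflow, input).Get(ctx, &result); err != nil {
		return "", err
	}
	return result, nil
}
```

Starting an independent workflow from an Activity, by contrast, is done with the ordinary client (e.g. client.ExecuteWorkflow) inside the activity implementation, where none of the propagation or waiting semantics above apply.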

Reuse Jobs in GitHub Actions Workflow

I'm migrating a pipeline from CircleCI to GitHub Actions and find it a bit weird that I can only run jobs once. In CircleCI I can create a job and then call it from the workflow section, which makes it possible to call a job multiple times without duplicating the commands/scripts in that job.
My pipeline pushes out code to three environments, then runs a Lighthouse scan for each of them. In CircleCI I have one job to push the code to my envs and one job to run Lighthouse, and from my workflow section I just call the jobs three times, passing the env as a parameter. Am I missing something, or is there no way to do this in GitHub Actions? Do I just have to write out my commands three times in each job?
There are 3 main approaches to code reuse in GitHub Actions:
Reusing workflows
The obvious option is using the "Reusable workflows" feature that allows you to extract some steps into a separate "reusable" workflow and call this workflow as a job in other workflows.
Takeaways:
Reusable workflows can't call other reusable workflows.
The strategy property is not supported in any job that calls a reusable workflow.
Env variables and secrets are not inherited.
It's not convenient if you need to extract and reuse several steps inside one job.
Since it runs as a separate job, you have to use build artifacts to share files between a reusable workflow and your main workflow.
You can call a reusable workflow in a synchronous or asynchronous manner (managing this via job ordering with the needs key).
A reusable workflow can define outputs that extract outputs/outcomes from executed steps. They can be easily used to pass data to the "main" workflow.
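For illustration, a minimal sketch of this approach for the pipeline described in the question (file names, job names, the environment input and the deploy script are made up):

```yaml
# .github/workflows/deploy.yml -- the reusable ("callee") workflow
on:
  workflow_call:
    inputs:
      environment:
        description: "Target environment"
        required: true
        type: string
    secrets:
      deploy_token:
        required: true

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh "${{ inputs.environment }}"
        env:
          TOKEN: ${{ secrets.deploy_token }}
```

```yaml
# .github/workflows/main.yml -- the calling workflow
on: push

jobs:
  deploy-staging:
    uses: ./.github/workflows/deploy.yml
    with:
      environment: staging
    secrets:
      deploy_token: ${{ secrets.DEPLOY_TOKEN }}

  deploy-prod:
    needs: deploy-staging   # "needs" controls ordering between the calls
    uses: ./.github/workflows/deploy.yml
    with:
      environment: prod
    secrets:
      deploy_token: ${{ secrets.DEPLOY_TOKEN }}
```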
Dispatched workflows
Another possibility that GitHub gives us is workflow_dispatch event that can trigger a workflow run. Simply put, you can trigger a workflow manually or through GitHub API and provide its inputs.
There are actions available on the Marketplace which allow you to trigger a "dispatched" workflow as a step of the "main" workflow.
Some of them also allow doing it in a synchronous manner (waiting until the dispatched workflow is finished). It is worth saying that this feature is implemented by polling the statuses of the repo's workflows, which is not very reliable, especially in a concurrent environment. It is also bounded by GitHub API usage limits and therefore finds out the status of the dispatched workflow with a delay.
Takeaways:
You can have multiple nested calls, triggering a workflow from another triggered workflow. If done carelessly, this can lead to an infinite loop.
You need a special token with "workflows" permission; your usual secrets.GITHUB_TOKEN doesn't allow you to dispatch a workflow.
You can trigger multiple dispatched workflows inside one job.
There is no easy way to get some data back from dispatched workflows to the main one.
Works better in a "fire and forget" scenario. Waiting for the dispatched workflow to finish has some limitations.
You can observe dispatched workflow runs and cancel them manually.
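As a hedged sketch of this approach (the workflow file, input name, scan script and the WORKFLOWS_PAT secret name are made up), the dispatched workflow declares a workflow_dispatch trigger with inputs, and the "main" workflow fires it from a step, for example via the GitHub CLI that is preinstalled on hosted runners:

```yaml
# .github/workflows/lighthouse.yml -- the dispatched workflow
on:
  workflow_dispatch:
    inputs:
      environment:
        description: "Environment to scan"
        required: true
        type: string

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./lighthouse.sh "${{ inputs.environment }}"
```

```yaml
# a "fire and forget" step inside the main workflow
- run: gh workflow run lighthouse.yml --repo "${{ github.repository }}" --field environment=staging
  env:
    GH_TOKEN: ${{ secrets.WORKFLOWS_PAT }}   # a token with workflow permissions, not the default GITHUB_TOKEN
```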
Composite Actions
In this approach we extract steps to a distinct composite action, that can be located in the same or separate repository.
From your "main" workflow it looks like a usual action (a single step), but internally it consists of multiple steps each of which can call own actions.
Takeaways:
Supports nesting: each step of a composite action can use another composite action.
Poor visualisation of the internal steps' runs: in the "main" workflow it's displayed as a usual step run. In the raw logs you can find details of the internal steps' execution, but it doesn't look very friendly.
Shares environment variables with a parent job, but doesn't share secrets, which should be passed explicitly via inputs.
Supports inputs and outputs. Outputs are prepared from outputs/outcomes of internal steps and can be easily used to pass data from composite action to the "main" workflow.
A composite action runs inside the job of the "main" workflow. Since they share a common file system, there is no need to use build artifacts to transfer files from the composite action to the "main" workflow.
You can't use continue-on-error option inside a composite action.
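A minimal sketch of a composite action (paths, input/output names and the script are illustrative): the action lives in its own action.yml and is then called like any other action from the "main" job.

```yaml
# .github/actions/lighthouse/action.yml
name: "Lighthouse scan"
description: "Runs a Lighthouse scan against a URL"
inputs:
  url:
    description: "URL to scan"
    required: true
outputs:
  score:
    description: "Lighthouse score"
    value: ${{ steps.scan.outputs.score }}
runs:
  using: "composite"
  steps:
    - id: scan
      shell: bash        # every "run" step in a composite action must declare a shell
      run: |
        ./lighthouse.sh "${{ inputs.url }}"
        echo "score=95" >> "$GITHUB_OUTPUT"   # placeholder output value
```

```yaml
# steps inside a job of the "main" workflow
- uses: actions/checkout@v4            # needed so the local action is on the runner's file system
- id: lh
  uses: ./.github/actions/lighthouse
  with:
    url: https://staging.example.com
- run: echo "Score was ${{ steps.lh.outputs.score }}"
```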
Source: my "DRY: reusing code in GitHub Actions" article
I'm currently in the exact same boat and just found an answer. You're looking for a Composite Action, as suggested in this answer.
Reusable workflows can't call other reusable workflows.
Actually, they can, since Aug. 2022:
GitHub Actions: Improvements to reusable workflows
Reusable workflows can now be called from a matrix and other reusable workflows.
You can now nest up to 4 levels of reusable workflows giving you greater flexibility and better code reuse.
Calling a reusable workflow from a matrix allows you to create richer parameterized builds and deployments.
Learn more about nesting reusable workflows.
Learn more about using reusable workflows with the matrix strategy.

Alfresco, recognize when a workflow is started

I use Alfresco Community 5.2 and my need is to perform some work when one of Alfresco's default workflows is started.
I could override all the workflow definitions, but I wonder if there is a better and quicker way to do that. Ideal would be a behavior that triggers when a workflow is started.
Is there something like that?
Any other approach is accepted. Thanks.
There isn't anything similar to a behavior for workflows that I know of, although if your workflows will always have documents attached you could consider binding a behavior to the workflow package type (I don't recall off-hand what that type is--it might just be cm:folder which wouldn't be that useful).
This is kind of a hack suggestion, but you could implement a quartz job that would run every 30 seconds or every minute or so that would use the workflow service to check to see if any new workflows have started since the last check. If so, your code could be notified and passed the workflow ID, process ID, etc.
The straightforward solution is as you suggested in your original post--just modify the out-of-the-box processes with a task listener that fires when the workflow starts.
Following Jeff's suggestion, and this tutorial, I managed to implement a task creation/completion listener and do my logic inside those blocks, resolving the problem.
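For reference, a rough Java sketch of that listener route (the class name and wiring are hypothetical; Alfresco 5.2 embeds Activiti, and the listener would be referenced from the overridden BPMN process definition, e.g. <activiti:taskListener event="create" class="com.example.WorkflowStartTaskListener"/> on the start task):

```java
import org.activiti.engine.delegate.DelegateTask;
import org.activiti.engine.delegate.TaskListener;

// Hypothetical listener: fires when the task it is bound to is created,
// i.e. right after the workflow has started.
public class WorkflowStartTaskListener implements TaskListener {

    @Override
    public void notify(DelegateTask task) {
        String processInstanceId = task.getProcessInstanceId();
        // ... do the custom work here (call a service, write an audit entry, etc.) ...
        System.out.println("Workflow started, process instance: " + processInstanceId);
    }
}
```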

BuildBot: execute build steps in parallel

Is it possible to instruct BuildBot to execute build steps in parallel?
I've been looking through documentation and it only seems to be possible by actually generating multiple builds / build factories.
I'm not entirely sure about Builders and Workers: I have seen that adding workers will allow me to run multiple build requests simultaneously (multiple programmers submitting PRs), but using multiple builders doesn't seem to be intended for anything like this.
So, is it possible?
You can have multiple builders executing simultaneously, for example if they listen to incoming commits on the same repository; a single commit will start all listening builders. In this scenario you can control parallelism using BuilderConfig's canStartBuild argument. And take care that the builders work on separate resources!
Alternatively, if you trigger multiple builders from a single builder and specify waitForFinish=False, the triggered builders will run simultaneously.
I believe you cannot execute build steps in parallel within a single builder. Regarding the workers, I can't tell you.
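For completeness, a minimal master.cfg sketch of the trigger approach mentioned above (builder, scheduler and worker names are made up): the parent builder fires a Triggerable scheduler for the sub-builders with waitForFinish=False, so they run as separate, parallel builds.

```python
# Fragment of a Buildbot master.cfg
from buildbot.plugins import schedulers, steps, util

c = BuildmasterConfig = {}  # ... the rest of the usual configuration is omitted ...

# A Triggerable scheduler that starts the two sub-builders.
c['schedulers'] = [
    schedulers.Triggerable(name="parallel-parts",
                           builderNames=["part-a", "part-b"]),
]

# The parent builder fires the scheduler and, with waitForFinish=False,
# carries on immediately, so "part-a" and "part-b" run in parallel.
parent_factory = util.BuildFactory()
parent_factory.addStep(steps.Trigger(schedulerNames=["parallel-parts"],
                                     waitForFinish=False))

c['builders'] = [
    util.BuilderConfig(name="parent", workernames=["worker1"],
                       factory=parent_factory),
    # ... BuilderConfig entries for "part-a" and "part-b" go here ...
]
```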

Perl Workflow manual action termination

Since I'm not able to solve the problem via validators (see Perl Workflow module with validator), I'm thinking of creating methods in the action classes and calling them when I need them.
With this approach I will need a way of terminating the Action execution by returning "false" to the Workflow.
This way the workflow knows that it should not progress to the next state.
How can I manually terminate Action execution in Perl Workflow?
Thank you
This sounds like a hack since Workflow is a state machine. You could look into Workflow::State for actions with several resulting states, as mentioned in the documentation for Workflow::Action execute.
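If the goal is simply to abort from inside execute() so the workflow stays in its current state, one option with the CPAN Workflow distribution is to throw a workflow exception instead of returning false (a hedged sketch; the package name and the failing logic are made up, and my understanding is that an exception from execute() prevents the transition to the resulting state):

```perl
package MyApp::Action::DoWork;

use strict;
use warnings;
use parent 'Workflow::Action';
use Workflow::Exception qw( workflow_error );

sub execute {
    my ( $self, $wf ) = @_;

    # ... hypothetical real work, e.g. based on $wf->context->param(...) ...
    my $ok = 0;

    # Throwing a workflow exception aborts the action, so the workflow
    # does not advance to the action's resulting state.
    workflow_error "Work failed, staying in state " . $wf->state
        unless $ok;

    return;
}

1;
```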