Jenkins Workflow: How to get or set step id

We would like to reference a step inside a parallel task of a Jenkins workflow, but it seems that step IDs are created non-deterministically for steps in multiple parallel tasks. For an input step it is possible to manually specify the step ID. Is it possible to either specify a step ID for a shell step or query the step ID?
The purpose is that we would like to create a link to a shell script inside a parallel task.

Indeed, step IDs are in general not predictable in advance. Do you want to link to its log, like this? I am not entirely clear about the use case.
The closest existing RFE would be JENKINS-28119.

Perhaps you want to use the input ID instead of step ID:
node {
    input id: 'input1', message: 'Continue?'
}
Then the input URL would be exposed as:
http://[jenkins_base_URL]/job/[job_name]/[build_id]/input/Input1/proceedEmpty
(Note the capitalization of the input ID.)

Related

How to provide dynamic values for approvals and checks in yaml pipelines?

I'm working on an integration between Azure Pipelines and ServiceNow's change management module. To achieve that, the ServiceNow Change Management extension has been installed and configured according to this documentation page. In Azure DevOps we are using multi-stage YAML pipelines, which should create standard pre-approved changes in ServiceNow.
The connection itself between the two applications works fine, I managed to put together a pipeline that creates change requests, waits until their status changes and then closes them. However, I'd like to pass some values set in the pipeline runs to the created change requests and I couldn't find a way to do it.
First I added a service connection to our Azure DevOps project and created the ServiceNow check for it. I experimented a little with adding different expressions to it, like setting the short description to ${{ parameters.shortDescription }}, or defining a variable in the pipeline as ShortDescription: ${{ parameters.shortDescription }} and using that variable in the check as $(ShortDescription) or $[ variables.ShortDescription ]. Unfortunately, none of these expressions got resolved. I also realized it is possible to use the predefined variables, but the values I'd like to set cannot be expressed with predefined variables. For example, selecting an assignment group would be pretty straightforward from a parameter defined as a list, but impossible to select from predefined variables.
So as a next idea, I tried to link a variable group to the check and update the variables through logging commands. Even though the variables from the group got resolved, they only showed the values I had set through the UI as static defaults; the dynamic values set via the logging commands were not visible. I played around for some time and verified that I can update the definition of the variable group through the Azure CLI or the REST API, so I can add new variables or update existing ones. I therefore tried to add a new variable to the linked group during the pipeline run, named ShortDescription_$(Build.BuildId). Even though it got added properly, I could not use it within the check, because that would require double variable resolution, like $(ShortDescription_$(Build.BuildId)), and this expression was not resolved, not even partially. It remained $(ShortDescription_$(Build.BuildId)).
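The variable-group update I verified was roughly along these lines (a minimal Python sketch; the organization, project, group id, PAT, api-version and exact payload shape are placeholders and may differ depending on the Azure DevOps REST API version):

import requests

ORG = '<organization>'
PROJECT = '<project>'
GROUP_ID = 42                      # id of the linked variable group (placeholder)
PAT = '<personal-access-token>'

url = (f'https://dev.azure.com/{ORG}/{PROJECT}/_apis/distributedtask/'
       f'variablegroups/{GROUP_ID}?api-version=5.1-preview.1')

# The update replaces the group definition, so the name, type and variables
# have to be restated in the payload.
payload = {
    'name': 'servicenow-check-values',
    'type': 'Vsts',
    'variables': {
        'ShortDescription': {'value': '<short description for this run>'},
    },
}

response = requests.put(url, json=payload, auth=('', PAT))
response.raise_for_status()
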
Then I started thinking about using only one variable from the group with a static name (e.g. ShortDescription) for all pipeline runs. However, I feel it would create a race condition and could cause some inconsistencies.
So as a last resort, I tried to put together an extension with an Agent task and a ServerGate task, which are capable of storing the values I want to pass to the change request and reading the stored values in an agentless environment. The problem here is that the second task is not visible as a check for service connections. It is there as a release pipeline gate and looks good there, but I can't utilize it that way. Based on a question I found, this does not seem to be a problem with my task specifically. To verify it, I copied the content of the same ServiceNow check I used before and added it to my extension as a contribution with a different task id, and it did not show up, just as the question stated.
Which means now I can either
create a change request through my custom server task (as the ServerGate task can be used properly in yaml if it is changed to a Server task), but that way I can't wait for the state change of the ServiceNow ticket, or
create the change request in a separate stage where I want to use it, update it first in the same stage where I created it via the first-party check and wait for the state change in the stage where I would normally create it.
The second can work, but it has its own problems, like having misleading values stored in the change request for the stage id field, or not having multiple change requests created for multiple run attempts of the deployment stage. Also, I feel like it's not how the extension's task and check should be used.
Unfortunately, I'm out of ideas on how this dynamic value passing can be achieved, if it's possible at all. Could you please help me by sharing ideas or pointing out errors in my attempts?

Azure Data Factory - Can I set a variable from within a Copy Data task or by using the output?

I have a simple pipeline that has a Copy activity to populate a table. That task is based on a query and will only ever return one row.
The problem I am having is that I want to reuse the value from one of the columns (batch number) to set a variable, so that at the end of the pipeline I can use a Stored Procedure to log that the batch was processed. I would rather avoid running the query a second time in a Lookup task, so can I make use of the data already being returned?
I have tried duplicating the column in the Copy activity and then mapping it to something like #BatchNo, but that fails. I have also tried adding a Set Variable task, but I can't figure out how to take a single column from the output: #{activity('Populate Aleprstw').output} does not error, but I'm not sure what it will actually do in this case.
Thanks, and sorry if it's a silly question.
Cheers
Mark
I always do it like this:
Generate a batch number (usually with a proc)
Use a lookup to grab it into a variable
Use the batch number in all activities (might be multiple copies, procs, etc.)
Write the batch completion
From your description it seems you have the batch number embedded in the data copy from the start, which is not typical.
If you must do it this way, is there really an issue with running a lookup again?
Copy activity doesn't return data like that, so you won't be able to capture the results that way. With this design, running the query again in a Lookup is the best option.
Is the query in the Source running on the same Server as the Sink? If so, you could collapse the entire operation into a Stored Procedure that returns the data point you are trying to capture.

Automation for ADF V2 Pipeline

I need help with implementing the requirement below:
There is one ADF pipeline that runs every two hours (with a tumbling window trigger). Now I need to create one more pipeline that will be used for performing a maintenance job. This pipeline is scheduled to run once a month (with a schedule trigger). Here is the requirement that I'm trying to implement:
Before running the second pipeline, I need to make sure the first pipeline is not running (basically, get its status and, if it's running, wait for its completion), and then disable the trigger associated with it.
Run the second pipeline and, after its completion, enable the trigger that is associated with the first pipeline.
Please let me know if this can be achieved within ADF, or whether some kind of custom scripting is needed to achieve the result.
First, your idea is achievable.
Second, there is no built-in feature in Azure Data Factory that can do this.
Basically, you need to use an Azure Function (a simple HTTP trigger with no input, so you can hit it and execute it directly) to achieve what ADF can't do on its own. From your description, the execution of these two pipelines is mutually exclusive, so in the Azure Function you can use the SDK to check the status of the other pipeline. If the other pipeline is running, wait a few seconds and then re-check its status. (In short, put the main logic and code in the Azure Function.)
Simple Azure Function:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook-trigger?tabs=csharp
Use the SDK to monitor:
https://learn.microsoft.com/en-us/azure/data-factory/monitor-programmatically#net
(The links above are for C#; you can choose another supported language.)
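For example, a minimal Python sketch of that Azure Function logic, using the azure-mgmt-datafactory SDK, might look like the following (all names are placeholders; on older track-1 versions of the SDK the trigger calls are triggers.stop/triggers.start instead of begin_stop/begin_start):

import time
from datetime import datetime, timedelta

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters, RunQueryFilter

# Placeholders - substitute your own values.
SUBSCRIPTION_ID = '<subscription-id>'
RESOURCE_GROUP = '<resource-group>'
FACTORY = '<data-factory-name>'
FIRST_PIPELINE = '<two-hourly-pipeline>'
FIRST_TRIGGER = '<tumbling-window-trigger>'
MAINTENANCE_PIPELINE = '<maintenance-pipeline>'

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

def first_pipeline_is_running():
    # Query recent runs of the first pipeline and report whether any are still active.
    params = RunFilterParameters(
        last_updated_after=datetime.utcnow() - timedelta(hours=3),
        last_updated_before=datetime.utcnow(),
        filters=[
            RunQueryFilter(operand='PipelineName', operator='Equals', values=[FIRST_PIPELINE]),
            RunQueryFilter(operand='Status', operator='In', values=['InProgress', 'Queued']),
        ])
    runs = client.pipeline_runs.query_by_factory(RESOURCE_GROUP, FACTORY, params)
    return bool(runs.value)

# Wait until the two-hourly pipeline is idle, then disable its trigger.
while first_pipeline_is_running():
    time.sleep(30)
client.triggers.begin_stop(RESOURCE_GROUP, FACTORY, FIRST_TRIGGER).result()

# Run the maintenance pipeline and wait for it to finish.
run = client.pipelines.create_run(RESOURCE_GROUP, FACTORY, MAINTENANCE_PIPELINE)
while client.pipeline_runs.get(RESOURCE_GROUP, FACTORY, run.run_id).status in ('InProgress', 'Queued'):
    time.sleep(30)

# Re-enable the first pipeline's trigger.
client.triggers.begin_start(RESOURCE_GROUP, FACTORY, FIRST_TRIGGER).result()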

How to make sequential HTTP GET calls from Locust

In a Locust load test environment, tasks are defined and called randomly.
But if I want a task to be performed just after a specific task, how do I do it?
For example: after every 'X' URL call, I want the 'Y' URL to be called based on the response of 'X'.
In my experience, I found that it's better to model Locust tasks as completely independent of each other, with each of them covering a user scenario or behavior (e.g. a customer logs in, searches for a book and adds it to the cart). This is mostly because that's a closer simulation of the user's behavior.
Have you tried just having the multiple requests on the same task, and just if / else based on your responses? This slide from Carl Byström's talk follows said approach.
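For example, a single task can issue the two calls in order and only hit Y when X's response warrants it. A minimal sketch using the same pre-1.0 Locust API seen elsewhere in this thread (the paths and the status-code check are placeholders):

from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    @task
    def x_then_y(self):
        # Call X first; only call Y if X's response warrants it.
        # The status-code check is a placeholder for whatever condition you need.
        response = self.client.get('/X')
        if response.status_code == 200:
            self.client.get('/Y')

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 1000
    max_wait = 5000
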
You just have to make sequential GETs or POSTs. When you define your task, do something like this:
@task(10)
def my_task(l):
    l.client.get('/X')
    l.client.get('/Y')
There's an option to create a custom task set that inherits from the TaskSequence class.
Then you add seq_task decorators to all of the task set's methods to run its tasks sequentially.
https://docs.locust.io/en/latest/writing-a-locustfile.html#tasksequence-class
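A minimal sketch of that approach, using the pre-1.0 API from the linked docs (newer Locust versions replaced TaskSequence/seq_task with SequentialTaskSet; the paths and the response check are placeholders):

from locust import HttpLocust, TaskSequence, seq_task

class SequentialBehavior(TaskSequence):
    # seq_task(n) runs the methods in the given order instead of picking randomly.
    @seq_task(1)
    def call_x(self):
        # Keep X's response around so the next task can decide what to do.
        self.x_response = self.client.get('/X')

    @seq_task(2)
    def call_y(self):
        if self.x_response.status_code == 200:
            self.client.get('/Y')

class WebsiteUser(HttpLocust):
    task_set = SequentialBehavior
    min_wait = 1000
    max_wait = 5000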

Spring Batch: add job parameters in a step

I have a job with two steps. The first step creates a file in a folder with the following structure:
src/<timestamp>/file.zip
The next step needs to retrieve this file and process it.
I want to add the timestamp to the job parameters. Each job instance is differentiated by the timestamp, but I won't know the timestamp before the first step completes. If I add a timestamp to the job parameters at the beginning of the job, then a new job instance will be started every time, and any incomplete job will be ignored.
I think you can make use of the JobExecutionContext instead.
Step 1 gets the current timestamp, uses it to generate the file, and puts it into the JobExecutionContext. Step 2 reads the timestamp from the JobExecutionContext and uses it to construct the input path for its processing.
Just to add something on top of your approach of splitting the steps like this: you have to think twice about whether it is really what you want. If Step 1 finished and Step 2 failed, then when the job instance is re-run it will start from Step 2, which means the file is not going to be regenerated in Step 1 (because it is already complete). If that is what you are looking for, that's fine. If not, you may want to consider putting Step 1 and Step 2 in one step instead.