Adding configuration to Test plan in VSTS - azure-devops

I am facing an issue with preserving the outcome of my last test execution.
e.g.
I have a Test Plan 'Release2.0' and the assigned configuration is 'Win 10'.
Now I run this test plan and can see how many tests have passed, failed, been blocked, or not run.
Next, I create a new configuration and assign this test plan to the newly added configuration.
Now I switch back to my previous configuration from step 1.
When I check the outcomes from the step 2 execution, I see that they are lost and everything shows as Not Run.
Question: How do I preserve the execution results from step 2?


How to allow a Rundeck job to pick up the latest node inventory list?

There are two Rundeck jobs in the infrastructure:
Job 01: (For local execution)
Allows the user to upload a file containing the server list (Nodes: Execute Locally on Rundeck)
The new node list is written to the inventory file (resources.xml), which Rundeck uses from then on
Triggers an API call to run Job 02 (see the sketch below)
Job 02: (For remote execution)
Runs the job against the updated inventory list.
Result: The runs were successful. The new nodes are reflected in the latest inventory.
Problem: The issue is that after 5 such executions, Rundeck uses the cached inventory names. For example, for job execution #5, it uses execution #4's inventory list. Is there any way this can be avoided? This could turn out to be a bigger issue when deployed at scale.
You can decrease the model source cache delay time: go to Project Settings > Edit Nodes > Configuration tab, and set the Cache Delay value in seconds (default value: 30 seconds).
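For context, the API call that Job 01 uses to trigger Job 02 would look roughly like the sketch below; the base URL, API version, token variable, and job UUID are placeholders, not values from the question:

```sh
# Hypothetical trigger from Job 01: run Job 02 via the Rundeck API.
# Adjust the API version to match your Rundeck release.
curl -X POST \
  -H "X-Rundeck-Auth-Token: ${RUNDECK_API_TOKEN}" \
  -H "Accept: application/json" \
  "https://rundeck.example.com/api/41/job/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee/run"
```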

Argo workflow: execute a step when stopped forcefully

I have a 5-step Argo workflow:
step1: create a VM on the cloud
step2: do some work
step3: do some more work
step4: do some further work
step5: delete the VM
All of the above steps are time-consuming, and for whatever reason a running workflow might be stopped or terminated by issuing the stop/terminate command.
What I want is this: if the stop/terminate command is issued at any stage before step4 has started, I want to jump directly to step4 so that I can clean up the VM created at step1.
Is there any way to achieve this?
I was imagining it can happen this way:
Suppose I am at step2 when the stop/terminate signal is issued.
The pods running at step2 get a signal that the workflow is going to be stopped.
The pods stop doing their current work and output a special string telling the next steps to skip their work.
So step3 sees the output from step2, skips its work, and passes it on to step4, and so on.
step5 runs irrespective of the input and deletes the VM.
Please let me know if something like this is achievable.
It sounds like step 5 needs to run regardless, which is what an exit handler is for. Here is an example. The exit handler will be executed when you 'stop' the workflow at any step, but will be skipped if you terminate the entire workflow.
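A minimal sketch of such a workflow with an exit handler, using simple echo containers in place of the real create/work/delete steps (names and images below are illustrative, not from the original answer):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: vm-lifecycle-
spec:
  entrypoint: main
  onExit: delete-vm              # exit handler: runs after the main flow finishes, fails, or is stopped
  templates:
  - name: main
    steps:
    - - name: create-vm
        template: create-vm
    - - name: do-work
        template: do-work
  - name: create-vm
    container:
      image: alpine:3.18
      command: [sh, -c, "echo 'create a VM on the cloud'"]
  - name: do-work
    container:
      image: alpine:3.18
      command: [sh, -c, "echo 'do some work'"]
  - name: delete-vm              # referenced by onExit, so VM cleanup still gets a chance to run
    container:
      image: alpine:3.18
      command: [sh, -c, "echo 'delete the VM'"]
```

With this layout, 'argo stop' lets the delete-vm handler run and clean up, while 'argo terminate' kills the workflow immediately and skips it, as noted above.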

Azure Pipelines: Parallel tasks inside a single job in a pipeline

Using Azure DevOps, I am creating a release pipeline in which I would like to have one stage containing one job with 5 steps. The first three steps are the same task type but with different variables, and I would like to run them in parallel, so the flow would go like this:
Job
parallel: step1, step2, step3
then: step4 (after all 3 parallel steps succeeded/failed)
then: step5 (after step4 is done)
This is the current job setup
I am not sure how to set up the Control Options - 'Run this task' setting for all of these steps. I would somehow need to set the first three to run immediately (maybe the custom condition "always()") and steps 4 and 5 to run sequentially after the previous steps are done.
Step 5 can be set to 'Even if a previous task has failed, unless the deployment was canceled', but I am not sure whether, if I use the same setting for step 4, it will consider only step 3 as the previous task or all three previous tasks (step1 - step3).
Also, for the Execution plan's Parallelism setting, I guess it is OK to set multi-agent to three, since I would have at most 3 steps executing in parallel overall.
parallel: step1, step2, step3
If you have 5 tasks in one agent job and just want to run the first three tasks in parallel, I'm afraid this is not supported in Azure DevOps.
When you put several tasks in one agent job, the tasks will, and must, run in order. Also, as you mentioned in the second picture, the Multi-agent setting is used to run the whole agent job in parallel on multiple agents, rather than to run individual tasks in parallel.
A suggestion for this has already been raised by another user in the official Developer Community, and many users have the same need as you. You can vote and comment there. The Product Group team will take your suggestion into account; if it gets enough votes, it becomes high priority and the team will consider it seriously.
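For what it's worth, if a YAML pipeline is an option instead of the classic release editor, the desired flow can be modeled at the job level: jobs without dependencies run in parallel (given enough parallel agents), and dependsOn plus conditions control the rest. A rough sketch with placeholder script steps:

```yaml
jobs:
- job: Step1
  steps:
  - script: echo "step 1"
- job: Step2
  steps:
  - script: echo "step 2"
- job: Step3
  steps:
  - script: echo "step 3"

- job: Step4
  dependsOn: [Step1, Step2, Step3]
  condition: succeededOrFailed()    # run after all three, whether they succeeded or failed
  steps:
  - script: echo "step 4"

- job: Step5
  dependsOn: Step4
  condition: succeededOrFailed()    # run even if Step4 failed, but not if the run was canceled
  steps:
  - script: echo "step 5"
```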

How to run only a specific task within a job in Rundeck?

I have a job in Rundeck with many tasks in it, but when some task fails I have to duplicate the job, remove all the other tasks, save it, and then run this new reduced copy of my original job.
Is there a way to run only specific tasks without having to go through all this workaround?
Thanks in advance.
AFAIK there is no way to do that.
As a workaround, you can simply add options for every step in your Rundeck job. For instance, if you have 3 script steps in your job, you can add 3 options named skip_step_1, skip_step_2, and skip_step_3, then assign true to the ones that have finished successfully and false to the one that failed in the first execution. In every script step you can then add a condition that decides whether to run it or not, for example:
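A guard like the one below at the top of each script step can read the corresponding option (the option and step names are illustrative):

```sh
#!/bin/sh
# Rundeck exposes job options to script steps as RD_OPTION_<NAME> environment variables.
# Hypothetical guard for the first script step, driven by the skip_step_1 option.
if [ "${RD_OPTION_SKIP_STEP_1:-false}" = "true" ]; then
  echo "skip_step_1 is true - skipping step 1"
  exit 0
fi

echo "running step 1"
# ... the real step 1 logic goes here ...
```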
A similar feature request has already been proposed to the Rundeck team:
Optionally execute workflow step based on job options

Configuring multiple versions of a job in Spring Batch

Spring Batch seems to be lacking metadata for the job definition in the database.
To create a job instance in the database, the only thing it considers is the jobName and jobParameters: "JobInstance createJobInstance(String jobName, JobParameters jobParameters);"
But the object model of Job is rich enough to consider steps and listeners. So if I create a new version of an existing job by adding a few additional steps, Spring Batch does not distinguish it from the previous version. Hence, if I ran the previous version today and then run the updated version, Spring Batch does not run the updated version, as it considers the previous run successful. At present, it seems the version number of the job should be part of the name. Is this the correct understanding?
You are correct that the framework identifies each job instance by a unique combination of job name and (identifying) job parameters.
In general, if a job fails, you should be able to re-run with the same parameters to restart the failed instance. However, you cannot restart a completed instance. From the documentation:
JobInstance can be restarted multiple times in case of execution failure and its lifecycle ends with the first successful execution. Trying to execute an existing JobInstance that has already completed successfully will result in an error. An error will also be raised for an attempt to restart a failed JobInstance if the Job is not restartable.
So you're right that the same job name and identifying parameters cannot be run multiple times; the framework's design prevents this, regardless of what business steps the job performs. Again, ignoring what your job actually does, here's how it would work:
1) jobName=myJob, parm1=foo, parm2=bar -> runs and fails (assume some exception)
2) jobName=myJob, parm1=foo, parm2=bar -> restarts failed instance and completes
3) jobName=myJob, parm1=foo, parm2=bar -> fails on startup (as expected)
4) jobName=myJob, parm1=foobar, parm2=bar -> new params, runs and completes
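In code, the parameters from that scenario would be built roughly as in the sketch below (a hypothetical launcher method; bean wiring and the job definition itself are omitted):

```java
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;

public class LaunchExample {

    // jobLauncher and myJob would normally be injected Spring beans
    void launch(JobLauncher jobLauncher, Job myJob) throws Exception {
        JobParameters params = new JobParametersBuilder()
                .addString("parm1", "foo")
                .addString("parm2", "bar")
                .toJobParameters();

        jobLauncher.run(myJob, params); // run 1: suppose it fails
        jobLauncher.run(myJob, params); // run 2: restarts the failed instance
        // a third run after successful completion fails, since the instance is complete

        JobParameters newParams = new JobParametersBuilder()
                .addString("parm1", "foobar") // changed value -> brand-new JobInstance
                .addString("parm2", "bar")
                .toJobParameters();
        jobLauncher.run(myJob, newParams); // runs as a new instance
    }
}
```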
The "best practices" we use are the following:
Each job instance (usually defined by the run date or the filename we are processing) must define a unique set of parameters (otherwise it will fail per the framework design)
Jobs that run multiple times a day but just scan a work table or something similar use an incrementer to pass an integer parameter, which we increase by 1 upon each successful completion (see the sketch below)
Any failed job instances must be either restarted or abandoned before pushing code changes that affect how the job functions
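A minimal sketch of the incrementer approach mentioned above, assuming Java configuration (the step definition is omitted):

```java
import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.launch.support.RunIdIncrementer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableBatchProcessing
public class JobConfig {

    // RunIdIncrementer adds a "run.id" parameter and bumps it on each new launch,
    // so the same job can be started repeatedly without reusing identifying parameters.
    @Bean
    public Job myJob(JobBuilderFactory jobs, Step step1) {
        return jobs.get("myJob")
                .incrementer(new RunIdIncrementer())
                .start(step1)
                .build();
    }
}
```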