I want to see the test trend for, say, one month on a dashboard or similar. However, I limit my jobs to a maximum of 10 stored builds because they take a lot of space. Can I somehow store the passed/failed test data so the charts can still be displayed without keeping hundreds of builds?
We use the 'Plot plugin'.
It saves values in 'property' files which are kept in the workspace folder, so the data is not lost when the build folders are gone. Each build appends its results to the 'property' file.
You can configure a plot to include any number of builds.
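As a rough illustration, a build step along these lines could generate the per-build property values from a JUnit XML report; the file names, the report path, and the YVALUE key below are assumptions based on a typical Plot plugin 'properties' data series, so adjust them to your own configuration.

# write_plot_data.py (sketch): derive pass/fail counts from a JUnit XML report
# and write them into property files for the Plot plugin's 'properties' series.
import xml.etree.ElementTree as ET

# The report path is just an example; point it at your own test results.
root = ET.parse("target/surefire-reports/TEST-results.xml").getroot()
suites = [root] if root.tag == "testsuite" else root.findall("testsuite")

total = sum(int(s.get("tests", 0)) for s in suites)
failed = sum(int(s.get("failures", 0)) + int(s.get("errors", 0)) for s in suites)
passed = total - failed

# One value per file; the plot data series reads the YVALUE key from each.
with open("passed.properties", "w") as f:
    f.write("YVALUE=%d\n" % passed)
with open("failed.properties", "w") as f:
    f.write("YVALUE=%d\n" % failed)

The plot configuration in the job then points at these property files and charts the values across builds.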
Here is an example of one of the plots it has been making for us over the last year or so (plot image omitted).
I recommend separating your build job (the one that constructs libraries and executables and that likely has large artifacts) from your test job (the one that executes tests). This allows you to control aspects of the test job independently of the build job, including how long to keep build results.
Using the Copy Artifacts plugin you can make the build artifacts available to the test job.
I have a logic app, initially triggered on a recurrence, that runs an ADF pipeline which outputs a folder of files. Then I use a List Blobs action to pull one specific file from the newly made folder and place its path on a queue. Once a message is placed on that queue, it triggers the run of another ADF pipeline.
The issue is that I have not found a way to get the output of the first ADF pipeline onto the queue. I have tried to cheat in the List Blobs action that runs right after the first ADF pipeline by explicitly searching for the name of the output folder, since it will be the same every time.
However, even after the first ADF pipeline has run and produced the folder, the List Blobs action in that first run of the logic app can't find the folder and reports that the file path is not found.
Only after I run the logic app a second time is the folder finally found, which is not at all optimal. How can I fix this? I would prefer to keep everything in one logic app. Are there other Azure tools that could help in addition?
I don't have the details of the implementation, but I am wondering whether the message written after the first pipeline is only used as a signal for the second pipeline. If that's the case, why can't you call the second pipeline on completion of the first one? Or are these pipelines perhaps in different ADFs?
I also suggest you read up on Event triggers and see if you can use them.
We have a very large application with nearly 2K test cases for regression. Our process is multiple sprints of work toward a single release, so we use a dedicated regression test plan.
My question is how to manage regression runs. Right now, we clone the Master Regression suite or the prior regression suite, which lets us preserve the previous regression results. But this method creates new, unique test cases, which don't keep their associated bugs.
If we reset all the tests in the current suite, I know the previous runs can still be seen at the test case level. However, I can't figure out how to call up historical aggregate results for a previous run.
How should DevOps be used for managing repeat test runs?
To repeat tests, you can insert parameters into the test steps:
Create a parameter by typing a name preceded by "#" in the actions and expected results of your test steps.
You can check the document Repeat a test with different data for more details.
For the historical aggregate results, there is a user voice request about this on the Developer Community.
Hope this helps.
We have 100+ services/apps in a repository in Azure DevOps. We have defined a single CI/CD YAML multistage pipeline for each (build and deployment). This limits blast radius and allows for auditability of each release of each project. We rely on templates for all the real pipeline work, so this is easy to maintain; there is just a small root azure-pipelines.yml file for each project that includes the needed templates.
Now, we'd like to start using PR validation builds. And, as best as I can tell, we have two options:
Create a separate PR build for every project and use the UI/API for policies to create 100+ policies
Create a single PR build that has stages for all 100+ projects.
I'm not a fan of the 1st option, as we'd then have 200+ builds. The 2nd option is possible, but to avoid a 3-hour PR build, we'd need a way to run only the needed stages (aka project builds).
Is there a 3rd option I'm missing? If the 2nd option is our best bet, how do we turn off stages for projects not changed in that PR (i.e. what condition would we use)?
(FYI, our policy is to change only one project per PR, but there are, on occasion, exceptions to that.)
As a personal suggestion, I also recommend the second method. Though the build script would be very large in one configuration file, that is much better than having hundreds of build configuration files.
But the difficulty is that these 100+ apps are all in one repository. This means the usual methods will not suit you, including using the Build.Repository.Name value as the stage condition. Also, there is no predefined detail available that describes which source file paths were changed in the commit.
So I suggest that you and your team's developers include the project name in your commit messages. Then, in the build pipeline, you can use the variable Build.SourceVersionMessage to get the commit message. Since this environment variable only works at the step level (not at the stage or job level), you need to add a task as the first step and put the condition on it.
The logic is to add one step as the first step of every stage, used only for this conditional judgment. If Build.SourceVersionMessage does not match the project's prefix or key words, that stage's jobs exit early.
You can use a condition like this:
condition: startsWith(variables['Build.SourceVersionMessage'], '[maven-plugin]')
This requires your commit messages to follow a strict format, starting with the specified project name.
Another condition you can consider is:
condition: contains(variables['Build.SourceVersionMessage'], 'maven-plugin')
This does not require a strict message format, but it still needs the project name somewhere in the commit message so that the condition above can evaluate it.
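As a rough sketch of that first "conditional judgment" step, you could run a short script that reads the commit message and publishes a flag for the rest of the stage. The script name, the project key argument, and the runProject variable name below are all illustrative, and BUILD_SOURCEVERSIONMESSAGE is how Azure DevOps typically maps Build.SourceVersionMessage to an environment variable for script steps.

# check_commit_message.py (sketch): intended to run as the first step of a stage.
# Expects the project key (e.g. 'maven-plugin') as the first argument.
import os
import sys

project_key = sys.argv[1] if len(sys.argv) > 1 else ""
message = os.environ.get("BUILD_SOURCEVERSIONMESSAGE", "")

# Publish an output variable that later steps/jobs can test in their conditions.
run_project = bool(project_key) and project_key in message
print("##vso[task.setvariable variable=runProject;isOutput=true]%s" % str(run_project).lower())

if not run_project:
    print("Commit message does not mention '%s'; this project's jobs can be skipped." % project_key)

Subsequent jobs or steps would then put a condition on that runProject output variable (referenced through the step's name) instead of reading Build.SourceVersionMessage directly.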
Hope this gives you some help.
I had a quick question about the workflow plugin. I'm trying to see if the plugin will be able to satisfy my use case:
We have a Jenkins job that will build our app
We want to spin off a suite of test jobs that will perform various tests on the newly built app (unit, integration, etc.). These will need to run in parallel, and we want to run them on more than one Jenkins node for performance reasons
We'll take the aggregated output from all the test processes in step 2 and decide whether we should deploy (everything passed) or not
I was curious whether I'd be able to accomplish this with the plugin and, if so, whether you have any tips/pointers to get started.
Thanks!
You can certainly run nodes inside parallel branches. If one branch fails, the parallel step as a whole fails. If you want the build to succeed, but behave differently depending on test results, you can capture them directly as Groovy variables in various ways.
If you are using JUnitArchiver, it currently does not provide a simple means of exposing the test results directly to the Pipeline script (JENKINS-26276), though if you just want to tell whether there are some failures or none, you can inspect currentBuild.result.
If you have JUnit-format test results and wish to automatically split them amongst various nodes (especially helpful in case you have a large pool of machines and it would be unmaintainable to manually divide your tests), see this demo of the Parallel Test Executor plugin’s splitTests step.
I'd like to write a pre-commit hook that tells you whether you've improved or worsened some code metric of a project (e.g. average function length). The hook would have to know what the previous average function length was, and I don't know where to store that information. One option would be to store an additional .metrics file in the repo, but that sounds clunky. Another option would be to git stash, compute the metrics, git stash pop, compute the metrics again and print the delta. I'm inclined to go with the latter. Are there any other solutions?
Disclaimer: I am the author of the Metrix++ tool, which I use in the workflow described below. I guess the same workflow can be executed with other tools capable of comparing results.
One of the ideas you suggested works perfectly if you add a couple of CI checks (see the steps below). I find it solid, and I'm not sure why you consider it clunky.
I keep a file with metrics results which is updated before each commit and stored in the VCS. Let's name this file metrics.db and consider automating the following workflow on build/test of a project:
1) If metrics.db has not been changed since the last checkout (i.e. it is the original data for the previous/base revision), copy it to metrics-prev.db.
2) Collect metrics for the current code, which produces the metrics.db file again. Note: it is very helpful when a metrics tool can do iterative scans for the best performance (i.e. calculate metrics only for updated functions/classes), since that gives you the opportunity to run the metrics tool on every build, including incremental ones.
3) Compare metrics-prev.db with metrics.db. If the metrics identify regressions, fail the build and [optionally] do not allow the commit (a team rule). If the metrics are good, the build is successful and the commit may happen. (A sketch of steps 1-3 follows this list.)
4) [optionally] You may run a Continuous Integration (CI) check which validates that the committed metrics.db file actually corresponds to the committed code for the same revision (i.e. do the same steps 1-3 and make sure the diff is zero at step 3). If the diff is not zero, it means somebody forgot to update the metrics.db file, and presumably did not execute the pre-commit check, so revert the change.
5) [optionally] CI may do steps 1-3 itself if it fetches metrics.db from the previous revision as metrics-prev.db. In this case, CI can also check that the collected metrics.db is the same as the committed one (an alternative or addition to step 4).
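For illustration, a pre-commit hook covering steps 1-3 might look like the sketch below; the metrics-tool collect/compare commands are placeholders for whatever your tool provides (Metrix++ or another), and the exit-code convention for reporting regressions is an assumption.

#!/usr/bin/env python3
# Pre-commit hook sketch for steps 1-3 above. The 'metrics-tool' commands are
# placeholders; substitute the real collect/compare invocations of your tool.
import shutil
import subprocess
import sys
from pathlib import Path

DB = Path("metrics.db")
PREV = Path("metrics-prev.db")

# Step 1: keep the last committed results as the baseline for comparison.
if DB.exists():
    shutil.copyfile(DB, PREV)

# Step 2: collect metrics for the current code, regenerating metrics.db.
subprocess.run(["metrics-tool", "collect", "--db-file", str(DB)], check=True)

# Step 3: compare against the baseline; treat a non-zero exit as a regression
# and block the commit (team rule).
result = subprocess.run(["metrics-tool", "compare",
                         "--db-file", str(DB), "--db-file-prev", str(PREV)])
if result.returncode != 0:
    print("Code metrics regressed; commit aborted.")
    sys.exit(1)

The CI checks in steps 4 and 5 would rerun essentially the same script and additionally diff the regenerated metrics.db against the committed one.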
Another implementation I have seen: metrics.db files are stored on a separate drive, outside the VCS, and a custom script locates the corresponding metrics.db for a revision. I find this solution unreliable, as the drive can disappear, files can be moved or renamed, and so on. So placing the file in the VCS is the better solution, but either will work.
I have tried the alternative you suggested: switch to the previous revision and run the metrics tool twice. I abandoned this approach for two reasons: the metrics check script alters your source files (so it is impossible to include it in an incremental rebuild and keep working smoothly in your IDE, which will complain about changed files), and its performance is very poor compared with iterative re-scans.
Hope it helps.