I'm using the sbt-scoverage plugin to measure code (statement) coverage in our project. After months of not worrying about coverage or our tests, we decided to set a threshold for a minimum coverage percentage: if you write code, at least try to leave the project with the same coverage percentage as when you found it. E.g. if you started your feature branch on a project with 63% coverage, you have to leave it at that value (or higher) after finishing your feature.
With this we want to ensure a gradual adoption of better practices instead of setting a fixed coverage value (something like coverageMinimum := XX).
Having said that, I'm considering storing the last value of the analysis in a file and then comparing it against a new execution triggered by the developer.
Another option that I'm considering is to retrieve this value from our SonarQube server based on the data stored there.
My question is: is there a way to do something like this with sbt-scoverage? I've dug into the docs and their Google Groups forum but I can't find anything about it.
Thanks in advance!
The coverageMinimum setting value doesn't have to be constant; you can compute it dynamically with any function, e.g.:
coverageMinimum := {
  val tmp = 2 + 4
  10 * tmp // returns 60 :)
}
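Building on that, here is a minimal sketch of the file-based idea from the question. The baseline file name and its single-number format are my own assumptions, not anything sbt-scoverage provides:

// build.sbt: read the previous run's coverage from a checked-in baseline file
// (hypothetical file coverage-baseline.txt containing a single number such as 63.0)
coverageMinimum := {
  val baseline = baseDirectory.value / "coverage-baseline.txt"
  if (baseline.exists) IO.read(baseline).trim.toDouble
  else 0.0 // no baseline yet, so don't fail the build
}

After a successful coverageReport, the developer (or CI) rewrites the baseline file, so the threshold can only stay the same or move up.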
I am trying to run a few tests on a very simple Simulink model on Matlab 2020a.
I have obtained test results by using the Test Manager app, which allows me to set up a test case.
The function I created is very simple: it just checks two boolean values and returns another boolean value based on them, so I have not reported it here.
My procedure is as follows:
From Simulink Test Manager -> New Test File -> Test For Model Component -> import both Top Model and Component to create a harness.
Use the Design Verifier options, with the only changes from the default values being (1) Test Generation -> Model Coverage Objectives: MCDC; and (2) Report -> Generate report of results.
Import the test harness inputs as a source -> use the component under test output as baseline -> save the data as an Excel sheet.
Tests are then generated and everything is working fine.
I then use a small Python script to edit the Excel file, generating an oracle with a structure like this:
time   Var_A          Var_B          time   Out1:1
                                            AbsTol:0
       type:boolean   type:boolean          Type:int8
       Interp:zoh     Interp:zoh            Interp:zoh
0      0              1              0      0
0.4    1              1              0.4    1
0.8    0              0              0.8    TRUE
After this, I have to let Simulink write a PDF report of the project. To do so, I set up the following options:
From the test harness:
Inputs -> Include input data in test result; Stop simulation at last time point;
Baseline Criteria -> Include baseline data in test result;
Coverage Settings -> Record coverage for system under test; Record coverage for referenced models;
From the top level test folder:
Coverage Settings -> Record coverage for system under test; Record coverage for referenced models;
Coverage Metrics: Decision; Condition; MCDC;
Test File Options -> Close all open figures at the end of execution; Generate report after execution (with author and file path); Include Matlab version; Results for: All tests; Test Requirements; Plots of criteria and assessments; Simulation metadata; Error log and messages; Coverage results; File format: PDF.
Then I let it run. The test manager tells me everything went fine, but for some reason, whenever it has to create a report, it throws me an error:
X_component_test: Input argument #1 is an invalid cvdata object. CVDATA objects become invalid when their associated models are closed or modified
Now, I am sure this worked fine before with much more complex components, but I have no idea what I am doing wrong here. Anyone got a clue?
In the end, the solution was much simpler than I thought. Just delete all .cv files and clean your project's folder of old test files and other unnecessary files. Matlab seems to have issues when too many are present.
The script also had to be modified to remove that TRUE value and replace it with a 1.
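The original script isn't shown here, but the fix amounts to something like the following sketch with openpyxl (the file name oracle.xlsx is just a placeholder):

from openpyxl import load_workbook

# Load the generated oracle and coerce Excel booleans to 0/1 so the int8
# baseline column doesn't end up containing a literal TRUE.
wb = load_workbook("oracle.xlsx")
ws = wb.active
for row in ws.iter_rows():
    for cell in row:
        if cell.value is True or cell.value == "TRUE":
            cell.value = 1
        elif cell.value is False or cell.value == "FALSE":
            cell.value = 0
wb.save("oracle.xlsx")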
I get so many frustrations with Azure DevOps. In my build number format I would like to have both:
A number that restarts to 0 when I update my major and minor version.
And a real build number that is never reset, whatever my build number format is. This build number could also be shared by all the build pipelines of my project. Is it possible?
I'm not using the YAML format. I use the classic interface with the options page to set my build number format. At the moment I have this:
It works, except that each month the r number restarts at 0. I want it to keep counting.
EDIT
I still haven't decided on my final format; I would like to understand all the possibilities. Now that I've discovered the $(BuildID) property, I have another question: is it possible to have something similar to the $(Rev:r) variable, but that only checks the left part of my build number?
Example:
4.16.$(SequenceFor[4.16]).$(BuildID)
In fact I would like to set the major and minor versions manually, let the system increment the build number one by one, and use the revision for the global $(BuildID).
$(Rev:r) is reset whenever any character of the build number format changes, which is why it restarts whenever the major/minor version or the date changes.
So if you want an incremental, unique number you can't use $(Rev:r), because it will keep getting reset.
If you want a number that depends on the major and minor numbers, you need to use the counter expression:
Create 2 variables:
major-minor = 4.16
And a variable that depends on its value and is also a counter:
revision = $[ counter(variables['major-minor'],0) ]
The build number will be:
$(major-minor).$(revision).$(Build.BuildId)
Now, if you change major-minor (to 4.17 or 5.16), the revision will start again at 0.
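Putting it together, a minimal sketch (expressed as YAML for compactness; in the classic editor you would define the same variables in the Variables tab and put the last line in the Build number format box):

variables:
  major-minor: '4.16'                                # bumped manually
  revision: $[counter(variables['major-minor'], 0)]  # restarts at 0 when major-minor changes
name: $(major-minor).$(revision).$(Build.BuildId)    # the resulting build number format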
We need to reset a VSTS counter. I do not see any way to do this through the UI. There is a way to do it by directly invoking the reset build counter REST API, but in order to do this you need to know the counter id, which you should be able to find out by invoking the get a definition REST API. Unfortunately, no matter what I do, the get a definition call does not return the build definition counter.
What am I missing?
Scott Dallamura from Microsoft wrote in this thread:
the counters feature was experimental and removed back in March of
this year. I'm not sure how it even got into the documentation, but
I'll make sure it gets cleaned up.
I also didn't succeed in getting the counterId via an API call.
As a workaround, you can reset the revision of the build number by changing the build definition name; adding or removing a single character is enough.
Instead of trying to reset the counter variable, you could create a new variable with a GUID prefix.
This solution creates duplicate counters, which might not be ideal, but it gives you the ability to revert to the previous counter values if necessary.
Please see the following YAML code snippet:
variables:
  ...
  # Change this Guid if you require a reset seed on the same value.
  resetCounterGuid: 'efa9f3f5-57fb-4254-8a7a-06d5bb365173'
  buildrevision: $[counter(format('{0}\\{1}', variables['resetCounterGuid'], variables['YOUR_DEFINED_VARIABLE']), 0)]
  ...
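The counter can then be referenced like any other variable, e.g. in a (hypothetical) build number format:

name: $(Build.DefinitionName)_$(buildrevision)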
This might be a very simple question, but I am writing a little macro for ImageJ and I cannot access the values in the Results window. Here is the code that does NOT work:
selectWindow("Results");
test = getResult("channel", 0);
print(test); // note: print("test") would only print the literal string
Any tips on how this could be done? Thanks.
You were correct in pointing out that it might be due to a plugin not using the standard results table of ImageJ. The Color_Histogram plugin uses a non-standard way to report the results.
I filed a pull request on github.com that fixes this. When this pull request is merged and uploaded to the Fiji updater, the following macro code works as expected after running Analyze > Color Histogram:
test1 = getResultString("channel", 0);
print(test1);
test2 = getResult("mean", 0);
print(test2);
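As a small follow-up (not part of the fix itself), once the values are in the standard Results table you can iterate over every row with the built-in nResults, e.g.:

// print every channel name and its mean value from the Results table
for (i = 0; i < nResults; i++) {
    print(getResultString("channel", i) + ": " + getResult("mean", i));
}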
I am using Brett's Mr. PHP thumb caching script along with phpThumb to create my thumbs. It works extremely well, except for one thing... I cannot get it to set and process the UnSharpMask filter. The relevant part of the script looks like this:
// generate the thumbnail
require('../phpthumb/phpthumb.class.php');
$phpThumb = new phpThumb();
$phpThumb->setSourceFilename($image);
$phpThumb->setParameter('w',$width);
$phpThumb->setParameter('h',$height);
$phpThumb->setParameter('q','95'); // set the quality to 95%
$phpThumb->setParameter('fltr[]','usm|80|0.5|3'); // <--- THIS SHOULD SET THE USM FILTER
$phpThumb->setParameter('f',substr($thumb,-3,3)); // set the output file format
if (!$phpThumb->GenerateThumbnail()) {
    error('cannot generate thumbnail');
}
I'm guessing there's a problem with my syntax, since the fltr[] parameter requires brackets. I have tried escaping the brackets like so: 'fltr[]' but that didn't work either.
I've used several other parameters with good success (zoom cropping, max height, max width, etc.). Everything seems to work except the filters (including usm, UnSharpMask).
I don't get any errors. It spits out thumbs all day long. They're just not sharpened.
For reference, here's the official phpThumb readme file that discusses all the possible settings and filters.
Thanks for looking.
Well after much trial, error, and aggravation, I finally discovered the solution buried in one of the demo files that come with phpThumb.
It's a matter of removing the brackets altogether. Basically, changing this line:
$phpThumb->setParameter('fltr[]','usm|80|0.5|3');
to this:
$phpThumb->setParameter('fltr','usm|80|0.5|3');
After that, it's only a matter of tweaking the settings to get the desired amount of sharpness.