I would like the regression tests generated by Randoop 4.2 to be named (Something)RandoopTest.java, so I set the command-line flag --regression-test-basename accordingly. When I run Randoop, my regression test files all end in RandoopTest0.java (note the 0). For example, when I run the following command,
java -classpath C:\sandbox\TestJavaProject\bin;%RANDOOP_JAR%
randoop.main.Main gentests
--testclass=org.jaffa.datatypes.Currency
--junit-package-name=org.jaffa.datatypes
--junit-output-dir=C:\sandbox\TestJavaProject\src\test\java
--regression-test-basename=CurrencyRandoopTest
--time-limit=10
--junit-reflection-allowed=false
--flaky-test-behavior=DISCARD
I get two files generated, as shown by the console output:
Writing regression JUnit tests...
Created file C:\sandbox\TestJavaProject\src\test\java\org\jaffa\datatypes\CurrencyRandoopTest0.java
Created file C:\sandbox\TestJavaProject\src\test\java\org\jaffa\datatypes\CurrencyRandoopTestDriver.java
Wrote regression JUnit tests.
Is there an easy way to eliminate the 0? I'm curious why it is there; I've never noticed any number other than 0 added to the class name. Currently, our Maven scripts are designed to run JUnit tests whose names end in "Test", so the 0 is a bother. (I know I can modify the scripts to run JUnit tests that end in "Test0" instead, but it would be nicer if I didn't have to.)
Randoop outputs tests in files named ...Test0.java, ...Test1.java, etc. By default, only 500 tests are put into each numbered file. Randoop usually creates many tests, and putting all of them into one class could exceed Java's limits on class size.
Randoop outputs a test suite, which runs all the tests, in a file named ...Test.java with no number at the end of the file name.
The example at the beginning of the Randoop Manual shows these files and how to use them.
If you have never seen Randoop output more than 500 tests, you might want to adjust its command-line arguments to give it more information, so that it can generate tests more effectively.
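For instance, you can raise the time budget and name more classes under test. This is only a sketch: the 60-second limit is illustrative, and myclasses.txt is a hypothetical file listing one fully qualified class name per line (for Randoop's --classlist option).
java -classpath C:\sandbox\TestJavaProject\bin;%RANDOOP_JAR%
randoop.main.Main gentests
--classlist=myclasses.txt
--regression-test-basename=CurrencyRandoopTest
--time-limit=60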
Example
As a concrete example of the files, see directory build/working-directories/naive-collections-test/java/foo/bar/. It will exist after you have run Randoop's own test suite.
It contains the files:
NaiveRegression.java
NaiveRegression0.java
NaiveRegression1.java
NaiveRegression2.java
where the content of NaiveRegression.java is:
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({ NaiveRegression0.class, NaiveRegression1.class, NaiveRegression2.class })
public class NaiveRegression {
}
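To run all of the generated tests at once, point a JUnit runner at the unnumbered suite class. For example (a sketch; the classpath placeholder must include JUnit and the compiled test classes):
java -classpath <your-test-classpath> org.junit.runner.JUnitCore foo.bar.NaiveRegression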
Related
I'm using a pretty old MATLAB version (7.1.0.246 (R14) Service Pack 3) :(
I have a toolbox I was provided with which I'd like to use. When I try to execute a function, I get Undefined command/function 'test' (my function is called test, it is stored in test.m, and the file is located in my current working directory).
If I place the file in C:\Temp\ and execute which test, I get the complete file path (C:\Temp\test.m).
If I place the file in C:\Temp\MyMap\ and execute which test, I get the complete file path (C:\Temp\MyMap\test.m) and the additional comment %Has no license available.
If I use the following:
if exist('test')
    test(...)
end
It solves the issue. However, as mentioned previously, it's a toolbox and contains many functions. I don't have the time (and don't want) to apply the workaround to all of the files/functions.
Any suggestion how this could be solved?
Question:
What would be the best way to import functions into tests that don't reside in the same directory?
Example
📁 src
📄 Get-Emoji.ps1
📁 test
📄 Get-Emoji.Tests.ps1
Inb4
Pester documentation[1] suggests test files be placed in the same directory as the code that they test. No examples of alternatives are provided.
Pester documentation[2] suggests dot-sourcing to import files, but only with examples from within the same directory.
Whether breaking tests out of src is good practice is to be discussed elsewhere.
Using PowerShell Core for cross-platform support on different OS filesystems (forward vs. backward slashes).
[1] File placement and naming convention
Pester considers all files named *.Tests.ps1 to be test files. This is the default naming convention that is used by almost all projects.
Test files are placed in the same directory as the code that they test. Each file is named after the function it tests. This means that for a function Get-Emoji we would have Get-Emoji.Tests.ps1 and Get-Emoji.ps1 in the same directory.
[2] Importing the tested functions
Pester tests are placed in *.Tests.ps1 files, for example Get-Emoji.Tests.ps1. The code is placed in Get-Emoji.ps1.
To make the tested code available to the test, we need to import the code file. This is done by dot-sourcing the file into the current scope, like this:
Example 1
# at the top of Get-Emoji.Tests.ps1
BeforeAll {
    . $PSScriptRoot/Get-Emoji.ps1
}
Example 2
# at the top of Get-Emoji.Tests.ps1
BeforeAll {
    . $PSCommandPath.Replace('.Tests.ps1','.ps1')
}
I tend to keep my tests together in a single folder that is one or two parent levels away from where the script is (which is usually under a named module directory, within a folder named either Private or Public). I just dot-source my script or module, using .. to reference the parent path, with $PSScriptRoot (the current script's path) as a point of reference. For example:
Script in \SomeModule\Public\get-something.ps1
Tests in \Tests\get-something.tests.ps1
BeforeAll {
    . $PSScriptRoot\..\SomeModule\Public\get-something.ps1
}
Use forward slashes if cross-platform compatibility is a concern; Windows doesn't mind whether path separators are forward slashes or backslashes. You could also run the path through Resolve-Path first if you wanted to be certain a valid full path is used, but I don't generally find that necessary.
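A sketch of that variant, using the same layout as the example above:
BeforeAll {
    # Resolve the relative reference to an absolute path before dot-sourcing
    $scriptPath = (Resolve-Path "$PSScriptRoot\..\SomeModule\Public\get-something.ps1").Path
    . $scriptPath
}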
I go into my project and run:
py.test -vvv .
And some of the results look ok:
my/test/path.py::some_test_name PASSED
And some look weird, like this:
my/test/path.py <- ../../../../../../../../../root/some_folder/my/test/path.py::some_other_test_name PASSED
This varies per test file, even if they are in the same folder. I don't see any notable difference between the code in those test files.
I have several Octave script files that run tests, named test_1.m, test_2.m, etc. I want a script file that will run all the tests, without having to convert all the test_n files into function files. I've tried several variations on this:
#!/path/to/octave -q
addpath('/path/to/directory/containing/all/scripts/');
source(test_1.m);
source(test_2.m);
but I always get "error: invalid call to script /path/to/directory/containing/all/scripts/test_1.m".
(I've tried source_file(), run(), and just having the filename alone on the line.)
Is there any way to run script files from a script file in Octave?
Try
source test_1.m
or
source('test_1.m')
instead.
Your syntax implies that test_1 is a struct variable and that you're trying to access a field called m.
The same goes for the run command (in fact, run simply calls source under the hood).
You can also call the script directly, if it's on the path. You just have to make sure you don't include the .m extension, i.e.
test_1
test_2
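Applied to the runner script from the question, that approach looks like this (paths as in the question):
#!/path/to/octave -q
addpath('/path/to/directory/containing/all/scripts/');
test_1   % runs test_1.m as a script
test_2   % runs test_2.m as a script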
Just put the name of the included script, without the .m extension, on a separate line.
Let's have, for example, script 1, 'enclosed.m', and script 2, 'included.m'. Then enclosed.m should look like:
% begin enclosed.m
included; % sources included.m
% end enclosed.m
Ok, so here's my issue. I have written a build script in bash that pipes output to tee and sorts different output into different log files (so I can summarize errors/warnings at the end and get some statistics on the files built). I wanted to use the colorgcc Perl script (colorgcc 1.3.2) to colorize the output from gcc, and had found elsewhere that this won't work when piping to tee, since the script checks whether it is writing to something that is not a tty. Having disabled this check, everything was working until I did a full build and discovered that some of the code we receive from another group builds C dependency files (we don't control this code; changing it or the build process for it isn't really an option).
The problem is that these .d files have the following form:
filename.o filename.d : filename.c \
    dependant_file1.h \
    dependant_file2.h
(and so on, for however many dependencies there are)
This output from GCC gets written into the .d file, but, since it is close enough to a warning/error message, colorgcc outputs color codes (I believe it's the check for filename:lineno:message, but I'm not 100% sure; it could be the filename:message check in the GCCOUT while loop). I've tried editing the regex to avoid matching this, but my perl-fu is admittedly pretty weak. So what I end up with is a color code on each line of these dependency files, which obviously causes the build to fail.
I ended up just replacing the check for ! -t STDOUT with a check for a NO_COLOR environment variable that I set and unset in the build script for these directories (this emulates the previous behavior of no color for non-tty output). This works great if I run the full script, but not if I cd into the directory and just run make (obviously, setting and unsetting the variable manually would work, but that is a pain to do every time). Does anyone have any ideas how to prevent this script from writing color codes into dependency files?
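The replacement looks roughly like this ($compiler and @ARGV are what colorgcc already uses; the NO_COLOR name is my own convention):
# Originally (simplified): skip colorizing when not writing to a tty
#   if (! -t STDOUT) { exec $compiler, @ARGV; }
# Replacement: skip colorizing whenever NO_COLOR is set in the environment
if (defined $ENV{'NO_COLOR'})
{
   exec $compiler, @ARGV
      or die("Couldn't exec");
}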
Here's how I worked around this. I added the following to colorgcc to search the gcc arguments for the flag that generates the .d files, and to call the compiler directly in that case. It was inserted in place of the original tty check.
foreach $argnum (0 .. $#ARGV)
{
   # If gcc was asked to generate dependency files (-M or -MM),
   # skip the colorizing and exec the real compiler directly.
   if ($ARGV[$argnum] =~ m/-M{1,2}/)
   {
      exec $compiler, @ARGV
         or die("Couldn't exec");
   }
}
I don't know if this is the proper 'Perl' way of doing this sort of operation, but it seems to work. Compiling inside directories that build .d files no longer inserts color codes, and the regular source builds are still colorized (both to the terminal and to my log files, as I wanted). I guess sometimes the answer is more hacks instead of "hey, did you try giving up?".