I have this chunk of code which displays a question box with three choices:
Yes
Save as
No
I just want to create a unit test for each of those choices, and I need to interact with the box to do so.
When the box is created, every MATLAB process is frozen until the user reacts. Of course my tests are automated, and there is no user present.
Is there a way to send a "Yes" event to MATLAB?
I could use AutoIt, but I would rather avoid that.
Thanks
Currently, your best bet is to structure your code so that it does not depend on the questdlg function directly, but instead wraps an interface around it so that you can inject a test-specific dependency, or "mock". This might look something like the following:
Source Code Under Test
function codeWhichUsesQuestDlg(dlgProvider)
validateattributes(dlgProvider, {'DialogProvider'}, {});
if (needToAskQuestion)
    result = dlgProvider.provideQuestionDialog(...)
else
    ...
end
Source Code Interface/Infrastructure
classdef DialogProvider < handle
    methods (Abstract)
        result = provideQuestionDialog(obj, varargin);
        % perhaps others? errordlg, etc?
    end
end
Production Implementation
classdef ProductionProvider < DialogProvider
    methods
        function result = provideQuestionDialog(obj, varargin)
            result = questdlg(varargin{:});
        end
    end
end
Production Usage
>> codeWhichUsesQuestDlg(ProductionProvider)
Test Usage
classdef TestCode < matlab.mock.TestCase
    methods (Test)
        function testCode(testCase)
            import matlab.mock.actions.AssignOutputs

            % Create the test specific "mock" implementation
            [mockProvider, behavior] = testCase.createMock(?DialogProvider);

            % Define mock behavior needed for the test
            when(withAnyInputs(behavior.provideQuestionDialog), ...
                AssignOutputs('Yes'));

            % Call code under test with the mock
            codeWhichUsesQuestDlg(mockProvider);

            % Verify correct result/behavior
            ...
        end
    end
end
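To execute the test, one option (a minimal sketch, assuming the class above is saved as TestCode.m on the MATLAB path) is:
results = runtests('TestCode');
table(results)   % tabular summary of the pass/fail results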
Take a look at the mocking framework documentation for more info. You might also be interested in this post covering the mocking framework, as well as this post on dependency injection.
I would like to test datasets with a variable number of values. Each value should be tested, and I would like a standardized output that I can read back in afterward. I am using MATLAB.
Example:
The use case is a dataset which includes, e.g., 14 values that need to be tested. The comparison itself is already completely handled by my implementation. So I have 14 values, which I would like to compare against some tolerance or similar, and get an output like
1..14
ok value1
ok value2
not ok value3
...
ok value14
Current solution:
I am trying to use the unit testing framework and the corresponding TAPPlugin, which would produce exactly such an output (TAP), one entry for every unit test. My main problem is that the unit testing framework does not take any input parameters. I have already read about parameterization, but I do not see how it helps me. I could put the values into a parameter as a list, but how do I pass them in? As far as I know, the unit test class does not allow additional parameters during initialization, so I cannot include this in the program the way I want.
I would like to avoid having to format the TAP output myself, because it already exists, but only for unit test objects. Unfortunately, I cannot see how to implement this sensibly.
How can I produce Test Anything Protocol output in MATLAB when I have a variable number of comparisons (values)?
If you are using class-based unit tests, you can access and set their properties from outside the test.
So let's say you have the following unit test:
classdef MyTestCase < matlab.unittest.TestCase
    properties
        property1 = false;
    end
    methods (Test)
        function test1(testCase)
            verifyTrue(testCase, testCase.property1)
        end
    end
end
You could access and change properties from outside:
test = MyTestCase;
test.property1 = true;
test.run;
This should now succeed, since you changed the property from false to true. If you want a more flexible approach, you could have a list with the variables and a list with the requirements, and then cycle through both in one of the test functions.
properties
    variables = [];
    requirements = [];
end
methods (Test)
    function test1(testCase)
        for i = 1:length(testCase.variables)
            verifyEqual(testCase, testCase.variables(i), testCase.requirements(i))
        end
    end
end
Now you would set variables and requirements:
test = MyTestCase;
test.variables = [1,2,3,4,5,6];
test.requirements = [1,3,4,5,5,6];
test.run;
Please note that, in theory, you should not have multiple assertions in one test.
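Separately from the property injection above, the TAP output mentioned in the question can be produced by attaching a TAPPlugin to a test runner. A minimal sketch (the output file name results.tap is an assumption; note that building a suite this way constructs fresh test instances with their default property values):
import matlab.unittest.TestSuite
import matlab.unittest.TestRunner
import matlab.unittest.plugins.TAPPlugin
import matlab.unittest.plugins.ToFile

% Build a suite from the test class and run it with TAP results written to a file
suite  = TestSuite.fromClass(?MyTestCase);
runner = TestRunner.withTextOutput;
runner.addPlugin(TAPPlugin.producingOriginalFormat(ToFile('results.tap')));
result = runner.run(suite);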
I am writing a class-based test suite in MATLAB for a timeseries handling package. The first test in my suite needs to check whether a connection exists to a Haver database on a network drive. If the connection does not exist, then the first test should abort the rest of the suite using one of the fatalAssert methods.
One complicating factor, which I have excluded from the exposition below but will mention now, is that I need to use an anonymous function to check the connection to Haver (unless someone has a better idea). My package handles data from multiple sources, Haver being only one of them.
I have a parent-class test suite that performs general tests for all of the sources. I then inherit this parent-class into specific child-class test suites and set specific parameters in their respective TestMethodSetup method. One of these parameters is an anonymous function, connfun, and a location, connloc, which I use in the parent-class to test the connection. The reason I do this is because the parent tests are executed first, so I would have to wait for all of those to end if I wanted to test the connection in the child class.
This also complicates the order of execution. If I want to assign connfun in the child class, then I have to use either the TestMethodSetup or the TestClassSetup of the child class (open to recommendations on which is best here) and put the connection test in a Test method of the parent class. I noticed that if I put checkConn in the TestMethodSetup or TestClassSetup of the parent class, it ran before that of the child class, so I was unable to pass the anonymous function and the test would be incomplete.
Putting the previous point aside for a moment, this was my first attempt at writing the test in the parent-class (note that I used a fatalAssertEqual instead of a fatalAssertTrue because isconnection() does not return a logical):
methods (Test)
    function checkConn(testCase)
        connloc = 'pathToHaverDatabase';
        connfun = @(x) isconnection(haver(x));
        testCase.fatalAssertEqual(connfun(connloc), 1);
    end
end
The above works when there is a connection, but the problem I ran into is that when I cannot access connloc, an error occurs during the call to haver(). So instead of returning a 1 or 0 from the isconnection() call that I can fatalAssertEqual on, all of checkConn errors out due to haver(). This then leads to the rest of the tests running (and failing), which is exactly what I want to avoid.
My next idea works for both cases, but it feels like bad code, and does not have the anonymous function specification described above.
methods (Test)
    function checkConn(testCase)
        connloc = 'pathToHaverDatabase';
        connfun = @(x) isconnection(haver(x));
        try
            isconn = connfun(connloc);
        catch
            isconn = 0;
        end
        testCase.fatalAssertEqual(isconn, 1)
    end
end
When I wrote this, I did not necessarily want to distinguish between not having access to the network drive, not being able to call the haver() function, and getting an isconnection result equal to 0, because the last case covers all three. But I realized that differentiating them would be a bit more robust; it is still missing the anonymous function that I could pass from child to parent, though.
properties
    connloc = 'pathToHaverDatabase';
end
methods (Test)
    function checkDrive(testCase)
        isfound = fileattrib(testCase.connloc);
        testCase.fatalAssertTrue(isfound);
    end
    function checkHaver(testCase)
        try
            hav = haver(testCase.connloc);
            ishaver = ~isempty(hav);
        catch
            ishaver = false;
        end
        testCase.fatalAssertTrue(ishaver);
    end
    function checkConn(testCase)
        connfun = @(x) isconnection(haver(x));
        testCase.fatalAssertEqual(connfun(testCase.connloc), 1);
    end
end
Ideally, what I would want is a fatalAssert method (or something similar) that ends the test suite when its input is an error. Something that would perhaps be called fatalAssertNotError, but I don't think that exists. If it did, the last line of my first function would simply be testCase.fatalAssertNotError(connfun(connloc)) and I would not have to worry about all the cases.
I'm very open to dynamic rewrite of this whole test setup, so any specific comments or general advice are welcome!
First of all, I think the fatalAssert case is a strong argument for providing something like fatalAssertNotError. One reason it is not part of the package is that most of the time people don't want to explicitly check that something doesn't error; they just call the code, and if it errors, the test fails automatically for the test author, which is much simpler. However, other qualification types like fatal assertions and assumptions perhaps point to the need to provide this, so you can choose the outcome of the test in the presence of an error: cases where you don't want it to fail (as with assumptions) or where you want it to fail "more strongly" (as with fatal assertions).
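In the meantime, here is a rough sketch of how such a helper could be hand-rolled on a shared test base class; the class name, the method name fatalAssertNotError, and the single-output restriction are all assumptions, not framework features:
classdef ErrorFreeTestCase < matlab.unittest.TestCase
    methods
        function result = fatalAssertNotError(testCase, fcn)
            % Invoke fcn (a function handle) and fatally assert that it did not throw
            try
                result = fcn();
            catch exception
                testCase.fatalAssertFail(sprintf( ...
                    'Expected no error, but got: %s', exception.message));
            end
        end
    end
end
A test deriving from this class could then write isconn = testCase.fatalAssertNotError(@() isconnection(haver(connloc))); in place of the try/catch version.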
That being said, I am still not convinced that you can't achieve what you are ultimately trying to do without it. My question centers on why you can't use TestClassSetup. It is not clear to me exactly why you weren't able to customize the derived test class to get the behavior you want. For example, does something like this work?
classdef BaseTest < matlab.unittest.TestCase
    properties (Abstract)
        connloc
        connfun
    end
    methods (TestClassSetup)
        function validateConnection(testCase)
            % If this errors it behaves like an assertion (not a fatal assertion)
            % and fails all tests in the test class. If it doesn't error but
            % doesn't return 1 then the assertion failure will occur.
            testCase.assertEqual(testCase.connfun(testCase.connloc), 1, ...
                'Could not establish a connection to the database');
        end
    end
end
classdef DerivedTest < BaseTest
    properties
        connloc = 'pathToHaverDatabase';
        connfun = @(x) isconnection(haver(x));
    end
    methods (Test)
        function testSomething(testCase)
            % Have at least one test method to test it out
        end
    end
end
Hope that helps!
If you really want to use a function, you can define a nested one like this:
methods (Test)
    function checkConn(testCase)
        connloc = 'pathToHaverDatabase';
        function res = connfun(x)
            try
                res = isconnection(haver(x));
            catch
                res = false;
            end
        end
        testCase.fatalAssertEqual(connfun(connloc), 1);
    end
end
Nested functions can be a bit confusing because of the way they share data with the parent function, but for this purpose there is no real practical difference between an anonymous function and a nested function.
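As a quick illustration of that data sharing (a standalone sketch, unrelated to the test code):
function outer
    counter = 0;           % shared with the nested function below
    bump();
    bump();
    disp(counter)          % displays 2: the nested calls modified the parent's variable
    function bump()
        counter = counter + 1;
    end
end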
The alternative is to put the function at the end of the file, outside the classdef block:
classdef ...
    %...
    methods (Test)
        function checkConn(testCase)
            connloc = 'pathToHaverDatabase';
            testCase.fatalAssertEqual(connfun(connloc), 1);
        end
    end
    %...
end

function res = connfun(x)
    try
        res = isconnection(haver(x));
    catch
        res = false;
    end
end
But I honestly don't understand why you need to have a function call within fatalAssertEqual. The code you have seems perfectly fine to me.
I inherited a code base in MATLAB which I would like to put under unit test with the matlab.unittest framework.
To make the code base more robust against arbitrary addpath calls by my users, I have put most of the code into package (+) folders, like a toolbox. So the general layout is:
+folder1/file1.m
+folder1/runtestsuite.m
+folder1/unittest_data/file1_testdata.mat
+folder1/+folder2/file2.m
+folder1/+folder2/unittest_data/file2_testdata.mat
...
and updated all internal references with the correct import statements.
Now I would like to add a unit test for file1.m. However, if I put the test file in +folder1/file1_test.m, file1.m does not seem to be visible.
Here is my example code for file1_test.m:
classdef file1_test < matlab.unittest.TestCase
    properties
        path
    end
    methods (TestMethodSetup)
        function setunittestdatapath(testCase)
            p = mfilename('fullpath');
            [directory, ~, ~] = fileparts(p);
            testCase.path = fullfile(directory, 'unittest_data');
        end
    end
    methods (Test)
        function file1_input(testCase)
            %import folder1.file1
            testdata = load(fullfile(testCase.path, 'file1_testdata.mat'));
            result = file1(testdata.input);
            testCase.verifyEqual(result, testdata.output);
        end
    end
end
If I uncomment the import statement, the unit test works fine. So currently I have to add an import statement to each individual test, which I would like to avoid. Is there a more elegant way of doing this?
I tried importing at the beginning of the file; although MATLAB complains "Parse error at CLASSDEF: usage might be invalid MATLAB syntax.", this also works. So what is the correct and most pragmatic way of doing something like this?
import statements only apply to the local scope in which they are used, so if you want a function to be able to avoid the fully qualified name, you'll have to add the import statement to each function separately.
The import list scope is defined as follows:
Script invoked from the MATLAB® command prompt — Scope is the base MATLAB workspace.
Function, including nested and local function — Scope is the function and the function does not share the import list of the parent function. If the import list is needed in a MATLAB function or script and in any local functions, you must call the import function for each function.
For unit tests though, I would argue that it is probably best to use the fully-qualified function name every time (rather than relying on import) so that it's clear to the user what you're testing.
result = folder1.file1(testdata.input)
Currently, import statements in MATLAB have function scope, as mentioned in the answer by Suever.
However, I often use local functions as a workaround to mimic a file-level import:
classdef file1_test < matlab.unittest.TestCase
    properties
        path
    end
    methods (TestMethodSetup)
        function setunittestdatapath(testCase)
            p = mfilename('fullpath');
            [directory, ~, ~] = fileparts(p);
            testCase.path = fullfile(directory, 'unittest_data');
        end
    end
    methods (Test)
        function file1_input(testCase)
            %import folder1.file1
            testdata = load(fullfile(testCase.path, 'file1_testdata.mat'));
            result = file1(testdata.input);
            testCase.verifyThat(result, IsEqualTo(testdata.output));
        end
    end
end

% Include file level "import" functions below
function f = file1(varargin)
    f = folder1.file1(varargin{:});
end

function c = IsEqualTo(varargin)
    c = matlab.unittest.constraints.IsEqualTo(varargin{:});
end
Note that in this example I "imported" both your source code and some of the test framework code, in order to use the more literate form of verifyEqual via verifyThat. This has the same functional behavior, but in general the constraints offer more functionality than the qualification methods, so this may be helpful to you at some point.
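For instance (a small sketch; the relative tolerance value is arbitrary), the constraint form makes it easy to add a tolerance to the same comparison:
import matlab.unittest.constraints.IsEqualTo
import matlab.unittest.constraints.RelativeTolerance

% Same comparison as above, but allowing a relative tolerance of 1e-6
testCase.verifyThat(result, IsEqualTo(testdata.output, ...
    'Within', RelativeTolerance(1e-6)));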
A program I am working on performs calculations involving objects that can only take one of several possible sets of values. These parameter sets are read from a catalogue file.
As an example, say the objects represent cars and the catalogue contains a value set {id: (name, color, power, etc.)} for each model. There are, however, many of these catalogues.
I use MATLAB's unittest package to test whether the calculations fail for any of the property combinations listed in the catalogues. I want to use this package because it provides a nice list of the entries that failed. I already have a test that generates a cell array of all ids for a (hardcoded) catalogue file and uses it for parameterized tests.
For now I need to create a new class for each catalogue file. I would like to set the catalogue file name as a class parameter and the entries in it as method parameters (which are generated for all class parameters), but I cannot find a way to pass the current class parameter to the local method for creating the method parameter list.
How can I make this work?
In case it is important: I am using Matlab 2014a, 2015b or 2016a.
I have a couple thoughts.
The short answer is no, this can't be done currently, because TestParameters are defined as Constant properties and so can't change across ClassSetupParameter values.
However, to me creating a separate class for each catalogue doesn't seem like a bad idea. Where does that workflow fall over for you? If desired, you can still share code across these files by using a test base class containing your content and an abstract property for the catalogue file.
classdef CatalogueTest < matlab.unittest.TestCase
    properties (Abstract)
        Catalogue;
    end
    properties (Abstract, TestParameter)
        catalogueValue
    end
    methods (Static)
        function cellOfValues = getValuesFor(catalog)
            % Takes a catalog and returns the values applicable to
            % that catalog.
        end
    end
    methods (Test)
        function testSomething(testCase, catalogueValue)
            % do stuff with the catalogue value
        end
        function testAnotherThing(testCase, catalogueValue)
            % do more stuff with the catalogue value
        end
    end
end

classdef CarModel1Test < CatalogueTest
    properties
        % If the catalog is not needed elsewhere in the test then
        % maybe the Catalogue abstract property is not needed and you
        % only need the abstract TestParameter.
        Catalogue = 'Model1';
    end
    properties (TestParameter)
        % Note: call a function that lives next to these tests
        catalogueValue = CatalogueTest.getValuesFor('Model1');
    end
end
Does that work for what you are trying to do?
When you say method parameters, I assume you mean "TestParameters" as opposed to "MethodSetupParameters", correct? If I am reading your question right, I am not sure this applies in your case, but I wanted to mention that you can get the data from your ClassSetupParameters/MethodSetupParameters into your test methods by creating another property on your class to hold the values during Test[Method|Class]Setup and then referencing those values inside your Test method.
Like so:
classdef TestMethodUsesSetupParamsTest < matlab.unittest.TestCase
    properties (ClassSetupParameter)
        classParam = {'data'};
    end
    properties
        ThisClassParam
    end
    methods (TestClassSetup)
        function storeClassSetupParam(testCase, classParam)
            testCase.ThisClassParam = classParam;
        end
    end
    methods (Test)
        function testSomethingAgainstClassParam(testCase)
            testCase.ThisClassParam
        end
    end
end
Of course, in this example you would just use a TestParameter, but there may be cases where this approach is useful. I am not sure whether it is useful here or not.
I am trying to implement a small example function using MATLAB OOP.
The functioning code is:
classdef Cat < handle
    properties
        meowCount = 0;
    end
    methods
        function obj = Cat() % all initializations, calls to base class, etc. here
        end
        function Meow(obj)
            disp('meowww');
            obj.meowCount = obj.meowCount + 1;
        end
    end
end
I want to create something of the following sort, akin to C++, since my real-life function definitions are very big and I don't want to clutter my class definition:
classdef Cat < handle
    properties
        meowCount = 0;
    end
    methods
        function obj = Cat() % all initializations, calls to base class, etc. here
        end
        function Meow(obj);
    end
end
%%
function Cat::Meow(obj)
    disp('meowww');
    obj.meowCount = obj.meowCount + 1;
end
So basically, I want to write the definition of the function Meow outside the class. How do I accomplish this change?
To play with the working first version, you can use the following:
C = Cat;
C.meowCount
C.Meow
Create a folder called @Cat.
Then, within @Cat, put the following files:
Cat.m
classdef Cat < handle
    properties
        meowCount = 0;
    end
    methods
        function obj = Cat()
        end
        Meow(obj) % this is optional, and just indicates the function signature
    end
end
Meow.m
function Meow(obj)
    disp('meowww');
    obj.meowCount = obj.meowCount + 1;
end
Move out of the @Cat folder, and make sure its parent folder is on your path. Then try your examples.
If you use an @ folder to contain your class like this, most methods (though not constructors, and not property get/set methods) can be moved to external files.
If you want, you can include a function signature with no implementation in the main classdef file. This is sometimes optional, but if you wish to change the Access level of the method away from default, it is necessary.
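For example (a sketch using a hypothetical private Purr method), a non-default Access level is declared in the classdef file while the implementation lives in its own file inside @Cat:
classdef Cat < handle
    properties
        meowCount = 0;
    end
    methods
        function obj = Cat()
        end
        Meow(obj)                 % implemented in @Cat/Meow.m
    end
    methods (Access = private)
        Purr(obj)                 % implemented in @Cat/Purr.m
    end
end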
There are two ways of defining methods of a class. The newer, more portable way is to define them within the classdef file itself. You can also write methods as separate M-file functions and put them in an @MyClass folder. Note that some methods must be in the classdef file. You can still define your separate-file methods as static and private via helper functions. This is a bit of a hack, which is why it's a good idea to put everything in the classdef file unless you have a very large project.
The best way to deal with clutter in your classdef file is to use code-folding. You can collapse individual methods and entire method blocks. In this way you can easily organize your classdef file to be as uncluttered as possible by grouping related methods together in the same methods block. Collapse any methods/blocks you aren't using at the time.
Additionally, you can use the "Go To" button in the Editor Ribbon Tab to select a specific method to view (if they are all defined in the same file).
Writing your methods in separate files may seem like a good solution at first, but if you have a class with many methods, it becomes extremely cumbersome to have many files open at once. Unlike in C++, you can only define one method per file in MATLAB. It really ends up being quite a mess.
See Also:
Code Folding — Expand and Collapse Code Constructs
Editor/Debugger Code Folding Preferences