iPhone app: scale of testing unit in unit test [closed]

When I was looking into unit testing for iPhone projects, I found it hard to decide the scale of the unit under test. If I have three methods A, B and C, I can test each of them individually, but sometimes you need to call A before B for B to make sense. For example, if I have addImageWithName: and removeImageWithName:, then I need to add an image first in order to test whether removeImageWithName: really works.
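For concreteness, such a dependent test might look roughly like this (a sketch using OCUnit/SenTestingKit; the ImageStore class and the hasImageWithName: helper are assumptions for illustration, not from any real API):

#import <SenTestingKit/SenTestingKit.h>

@interface ImageStoreTests : SenTestCase
@end

@implementation ImageStoreTests

- (void)testRemoveImageUndoesAdd {
    ImageStore *store = [[ImageStore alloc] init]; // hypothetical class under test
    [store addImageWithName:@"avatar"];            // A must run first for B to make sense
    [store removeImageWithName:@"avatar"];
    STAssertFalse([store hasImageWithName:@"avatar"],
                  @"image should be gone after removal");
}

@end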
So the decision is between black-box single-method tests and functional tests ("functional" here meaning a feature of the application, which may involve more than one method). If time is tight, I cannot go with both of them, so what are the pros and cons of these two approaches?
What I can think of:
=== single method test ===
pros:
- easy to write test cases, as you only need to deal with the input/output of individual methods
cons:
- methods need to be highly decoupled, so that one method does not rely on another
- sometimes impossible; for example, an undo method has to rely on a 'do' method
=== functional test ===
pros:
- higher level than per-method tests, as this targets features of the app
cons:
- not easy to write test cases if the function is complicated
- may not cover all the cases for each individual method involved in a particular function
So what should be the correct decision?
Thanks!

A single-method test is the best way to write unit tests in Xcode. However, if your method depends on another method to complete, you can use an asynchronous unit test; use the GHUnit test framework for testing async methods. BTW: which are you using for testing, OCUnit or GHUnit? Hope this helps.
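For reference, a GHUnit async test is structured roughly like this (a sketch; the ImageStore class, the store instance variable, and its completion-block API are hypothetical):

#import <GHUnitIOS/GHUnit.h>

@interface ImageStoreAsyncTests : GHAsyncTestCase {
    ImageStore *store; // hypothetical class; assume it is created in setUp
}
@end

@implementation ImageStoreAsyncTests

- (void)testAddThenRemoveImage {
    [self prepare];
    // hypothetical completion-block API; notify: signals that the async work finished
    [store addImageWithName:@"avatar" completion:^{
        [store removeImageWithName:@"avatar"];
        [self notify:kGHUnitWaitStatusSuccess];
    }];
    [self waitForStatus:kGHUnitWaitStatusSuccess timeout:2.0];
}

@end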

Related

Does a tagless algebra need laws? [closed]

I read the wonderful blog post by John A De Goes about tagless final. In section 5, Fake Abstraction, he mentions:
Unfortunately, these operations satisfy no algebraic laws—none
whatsoever! This means when we are writing polymorphic code, we have
no way to reason generically about putStrLn and getStrLn.
For all we know, these operations could be launching threads, creating
or deleting files, running a large number of individual side-effects
in sequence, and so on.
He is referring to the following tagless algebra:
trait Console[F[_]] {
  def putStrLn(line: String): F[Unit]
  val getStrLn: F[String]
}
Does this mean that writing laws for a tagless algebra is not possible, or do I misunderstand something?
A few things:
John A De Goes, while very knowledgeable, also has a lot of opinions and expresses them as if they were inferred from mathematics, without making a clear distinction. This post is part of a series in which he basically pitches that tagless final is often a bad solution and ZIO is a good one.
The paragraph says that tagless final algebras often satisfy no algebraic laws, which means that we cannot, for example, treat IO as a monoid/semigroup or similar. Which is true. But it doesn't mean that these constructs cannot obey some contracts (called laws), because they do, and that is the whole point of Cats Effect.
Nobody can force you to write laws for your algebras. Laws are basically a particular way of writing specifications/tests: you write a single test against some class of interfaces, and then for every implementation you instantiate this test to check whether the implementation fulfills the contract. And yes, nobody can force you to write tests for your code. However, that can be said about virtually everything we code, and tagless final gives you the benefit of making it easier to specify the common behavior of widely different implementations, and then to write your code and tests carefully, sticking to the part of the contract that is vital for a particular piece of code while also making these dependencies on contracts explicit.
So yes, nobody can force you to write laws for your algebras, but the people who implement them in libraries actually do this, and if you write your own algebras you are encouraged to do so. This argument is therefore stretched and eristic.
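To make this concrete, here is a minimal sketch of what such a law can look like (the KVStore algebra and the law itself are hypothetical, written in the style of cats-laws-like law testing):

object AlgebraLaws {
  import cats.Monad
  import cats.syntax.all._

  // A hypothetical algebra, used only to illustrate the idea:
  trait KVStore[F[_]] {
    def put(key: String, value: String): F[Unit]
    def get(key: String): F[Option[String]]
  }

  // The law is written once, against any F[_] with a Monad instance:
  // reading a key immediately after writing it must return the written value.
  def putThenGetLaw[F[_]: Monad](kv: KVStore[F])(key: String, value: String): F[Boolean] =
    for {
      _      <- kv.put(key, value)
      stored <- kv.get(key)
    } yield stored.contains(value)

  // Each implementation (in-memory, Redis-backed, ...) instantiates this test
  // and asserts the resulting Boolean in its own test suite.
}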

How to implement DB operations in Scala functional programming [closed]

I'm quite new to Scala and functional programming. I have read that we are not supposed to perform any side effects (e.g. DB and IO operations) in FP. I'm wondering: how can we handle DB operations in Scala?
If you want to create a purely functional app, you can't perform any side effects. But without side effects, how can we do anything useful (write text to the console, read data from the database, etc.)?
Basically, what we can do is "cheat" by wrapping all code that is not pure (i.e. performs side effects) in an effect type, usually called the IO monad. Impure actions wrapped in IO are not executed until explicitly started (usually by calling a method named something like unsafeRun). And since the wrapped actions are just values, you can return them from functions, assign them to variables, and do everything you would do with plain values:
import cats.effect.IO // you'd have to add the cats-effect dependency for this import to work
val printHelloToConsole = IO(println("Hello")) // nothing is happening yet
printHelloToConsole.unsafeRunSync // starting to perform the effects
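Because IO values are ordinary values, they also compose before anything runs. A small sketch, still using cats-effect:

val askName: IO[String] =
  IO(println("What is your name?")).flatMap(_ => IO(scala.io.StdIn.readLine()))

val greet: IO[Unit] =
  askName.flatMap(name => IO(println(s"Hello, $name")))

greet.unsafeRunSync // the console effects happen only here, in order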
The main purpose of this approach is to separate pure, functional code from the impure parts of the application. A quote from Martin Odersky:
The IO monad does not make a function pure. It just makes it obvious that it’s impure.
There are several implementations of the IO monad for Scala: ZIO, Cats Effect, Monix. For purely functional database communication you can use Doobie, which works with any of these.
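As a minimal sketch of what Doobie code can look like (the users table and the already-configured Transactor xa are assumptions for illustration):

import doobie._
import doobie.implicits._
import cats.effect.IO

// xa: Transactor[IO] is assumed to be configured elsewhere (driver, url, user, password)
def findUserName(id: Long, xa: Transactor[IO]): IO[Option[String]] =
  sql"SELECT name FROM users WHERE id = $id" // a pure description of the query
    .query[String]                           // row type to decode
    .option                                  // zero or one row
    .transact(xa)                            // still just an IO value until it is run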
I would recommend watching the talk FP to the Max by John De Goes; it explains very well what the IO monad is and how to use it.

Common behavior in class-based unit test: inheritance or shared test fixtures? [closed]

I have several test classes to write, that I expect to have quite similar Setup and Teardown code (basically the same methods, but with different arguments).
From what I understand of the unittest package, there seem to be two ways of doing this:
Create a base class inheriting from matlab.unittest.TestCase, containing the common Setup and Teardown code. All my actual test case classes would then extend it.
Use the shared test fixtures pattern documented here: the fixture would contain the common setup/teardown code and be referenced in all test case classes.
After reading Andy Campbell's answer, I think it is worth giving some more context about the use case and our test suite organization: the software we are testing here can only be exercised through a simulation bench, meaning this is not, strictly speaking, unit testing, in the sense that we do not test one function at a time. Our testing strategy is as follows:
Play a simulation scenario with our simulator
Record simulation output (in a file or in workspace)
Launch unit-tests on previously recorded simulation output
At the end of the test sequence, publish additional results about the simulation
Given that simulations are computationally expensive, we cannot afford to launch the simulation before each unit test. On the other hand, as we have one test class per simulation scenario, the setup/teardown code must be applied for each test class.
Which one would you advocate? The first seems more natural and easier to understand to me, but I feel it may not be the canonical way to do this. On the other hand, I am not sure I completely grasp the notion of a shared fixture.
There is a semantic difference between shared test fixtures and setup/teardown code that you place into a base class and share via inheritance.
First of all, is the setup/teardown code expensive? In other words, does it take a long time to execute, or is there some other reason why executing it fewer times is better? If the setup code is not expensive, then it is indeed better to put it into a TestMethodSetup method and share it via inheritance. This will share the required code, but it will not share the actual fixture; that is, each test will set up and tear down its own fresh fixture. This is great for ensuring the independence of tests, but it is only feasible if the setup is inexpensive, because it happens every time.
If the fixture is expensive to set up or tear down, you will not only want to share the code that sets it up and tears it down, but you may also want to share the actual instance of the fixture. That is, you may want to set it up and tear it down only once for all the tests you run, so that they share the same fixture instance (in the literature this is known as a shared fixture). If you use a shared fixture, you can still put it in a base class and derived test classes will benefit from it, but the difference is that the fixture can be shared across test class boundaries. That is, if you have something like the following:
MySharedTestFixture.m
classdef MySharedTestFixture < matlab.unittest.fixtures.Fixture
    methods
        function setup(fixture)
            disp('Setting up expensive fixture');
        end
        function teardown(fixture)
            disp('Tearing down expensive fixture.');
        end
    end
end
BaseTest.m
classdef (Abstract, SharedTestFixtures={MySharedTestFixture}) ...
BaseTest < matlab.unittest.TestCase
end
FooTest.m
classdef FooTest < BaseTest
    methods(Test)
        function testSomething(testCase)
        end
    end
end
BarTest.m
classdef BarTest < BaseTest
    methods(Test)
        function testSomethingElse(testCase)
        end
    end
end
Observe the output of the test run:
>> runtests
Setting up MySharedTestFixture
Setting up expensive fixture
Done setting up MySharedTestFixture
__________
Running BarTest
.
Done BarTest
__________
Running FooTest
.
Done FooTest
__________
Tearing down MySharedTestFixture
Tearing down expensive fixture.
Done tearing down MySharedTestFixture
__________
You can see that the expensive shared fixture was only setup and torn down once even across classes.
You can use TestClassSetup/Teardown methods to share setup/teardown code across all tests in a given class, but these will not share across different classes.
Also, if you want to make a canonical fixture, perhaps because it is something you commonly need to do, you can create it as an official Fixture and then decide, in each specific context, where you would like it to apply and how widely you would like it to be shared. For example, the MySharedTestFixture class above can be shared across classes as a SharedTestFixture, but it can also be shared across just the tests in a given class by calling applyFixture in a TestClassSetup method. Similarly, I can call applyFixture in a TestMethodSetup method to create the fixture as a fresh fixture for every test in the class. Finally, if I just want to use the fixture as a one-off in a single test method, I can simply call applyFixture on it in the test method that requires it. In this case, the shared code is separable from the degree to which the actual instance is shared.
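For instance, here is a sketch of the class-wide variant (QuxTest is a hypothetical test class; applyFixture is the matlab.unittest.TestCase method mentioned above):

classdef QuxTest < matlab.unittest.TestCase
    methods(TestClassSetup)
        function useExpensiveFixture(testCase)
            % shared by every test in this class, but not across classes
            testCase.applyFixture(MySharedTestFixture);
        end
    end
    methods(Test)
        function testUsingFixture(testCase)
        end
    end
end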
Hope that helps!

Matlab function command [closed]

Assume that I have a subfunction as seen below. What is the difference between these two:
function a = b(x,y)
    ...
    a = output
and
function b(x,y)
    ...
If I write it in the second form, how can I call it from my main function, and how can I see its outputs?
Another question,
I found some code here (http://www.mathworks.com/matlabcentral/fileexchange/21443-multiple-rapidly-exploring-random-tree--rrt-) that includes a function like:
%% SetObstacleFilename
function SetObstacleFilename(self,value)
    if isa(value,'char')
        self.obstacleFilename = value;
        self.GenerateObstacles();
    end
end
How can I use it in my main function? Moreover, what is the self.GenerateObstacles() command? There is no assignment in it.
I think I see how both of your questions are related to the same thing. You really should've asked something along the lines of:
I always saw MATLAB functions written in the form function a=b(x,y), however recently I came across some code which included functions in the form function b(x,y) (e.g. function SetObstacleFilename(self,value)).... so what's up with that?
In order to understand the 2nd type of functions, you need to consider object-oriented programming (OOP).
The code example you found is taken from within a MATLAB class. Class-related functions are known in OOP as "methods", and this specific code, in another programming language, would take the shape of a void-return-type function/method.
Now consider the term "object", which refers to an instance of a class.
Traditionally, methods are limited to a single output. For this reason, some methods are designed to operate on objects (actually pointers, AKA "passing by reference") such that returning a value is not necessary at all, because the input objects are manipulated directly. Other cases where methods don't need to return anything include functions with "utility" functionality (e.g. initialize something, plot something, print something to the console, etc.), just like the self.GenerateObstacles() method you mentioned.
Regarding your other questions:
The self in SetObstacleFilename(self,value) looks like an instance of the class in question.
Usually, to use class methods you need to instantiate an object using a constructor (a function with the same name as the class), unless those methods are static.
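A minimal sketch of the idea (the ObstacleMap class and its property are hypothetical names): a handle class whose method updates the object instead of returning a value.

classdef ObstacleMap < handle
    properties
        obstacleFilename = '';
    end
    methods
        function SetObstacleFilename(self, value)
            % no output argument: the method mutates the handle object itself
            if isa(value, 'char')
                self.obstacleFilename = value;
            end
        end
    end
end

And it would be used like this:

map = ObstacleMap();                      % the constructor creates the instance
map.SetObstacleFilename('obstacles.txt'); % no assignment needed; map is updated
disp(map.obstacleFilename)                % displays 'obstacles.txt'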
To conclude: the above are just the most fundamental basics of OOP. I won't attempt to teach you the whole OOP Torah while standing on one leg, so I am providing some additional materials below, should you be interested in furthering your understanding of the topic.
Hopefully, what's going on is a bit clearer now!
Here are some resources for you:
MATLAB's OOP Manual.
MATLAB's documentation on OOP.

How should I use the results of Devel::Cover? [closed]

How should I use the results of Devel::Cover to make changes in the code? What do I do next with my code?
Use Devel::Cover to identify which parts of your code have not been exercised by your tests. If some parts of your code are not covered by your tests, you typically would add more tests to cover all of your code.
In some cases, Devel::Cover will identify parts of your code which cannot be tested. If that is the case, you may decide to delete that part of your code.
Structural coverage is a metric of how thoroughly your code has been exercised. It's normally collected while running tests and thus provides an approximation of the completeness of your test suite.
Incomplete coverage means that you have functionality that isn't being exercised and thus cannot be tested. Normally you would add more tests to increase the coverage. Missed coverage can also be an indication of unnecessary functionality (which can be removed) or of logical errors that prevent full exercise of the code. It's up to you to analyze your coverage reports and determine which course of action is appropriate.
Note that "covered" just means "executed." It is not the same as "tested" and definitely not the same as "correct." I recommend setting the flags to Devel::Cover (specifically ignore, inc, and select) so you collect coverage data only for the module actively under test. This reduces the risk of incidental coverage of untested code.