I have a number of logic unit tests (my project files have target membership in both App and AppTests). I want to add a call to xcodebuild test-without-building to my build system so that my unit tests run for each build.
However, the tests cannot run on the release build (because release doesn't build for testing).
Is my only choice to build both the release version and the debug version during my build, so that I can use the debug version solely to perform the tests? That's very different from, and much worse than, the other test frameworks I've used (GTest, Catch). Why can't the tests stand on their own?
The test-without-building command is not actually meant to "run the tests without rebuilding the app", but rather it's supposed to be used in tandem with the build-for-testing command.
Please refer to the "Advanced Testing and Continuous Integration" WWDC 2016 session for more info.
The gist is: use build-for-testing to build an .xctestrun file, which is then used by test-without-building to run the tests. This is particularly useful to run big suites across different machines, although I have never done it myself.
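For illustration, here is a minimal sketch of the two commands used in tandem (the scheme name, destination, and the .xctestrun file name are placeholders; the actual file name depends on your project and SDK):

# build the app and its tests once, producing an .xctestrun file
xcodebuild build-for-testing -scheme App -destination 'platform=iOS Simulator,name=iPhone 8' -derivedDataPath build

# run the previously built tests, potentially on a different machine
xcodebuild test-without-building -xctestrun build/Build/Products/App_iphonesimulator.xctestrun -destination 'platform=iOS Simulator,name=iPhone 8'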
Now that we have established that you can't use test-without-building on its own, the only option to run the tests from the command line, and to build a Release version only if they pass, is to use xcodebuild test, which is going to build the app.
As for why it needs to be in Debug, I don't have a precise answer. In iOS land, at least on the teams I have worked with, the difference between Debug and Release builds always comes down to things like the optimization options passed to the compiler, the architectures to build for, and the type of code signing.
This means that the source code that runs in Debug vs Release is exactly the same, and since we have already established that you'll need to build the app twice (once to run the tests, once to generate the releasable version), running the tests in Debug seems acceptable.
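As a sketch of that flow (again with a placeholder scheme and destination), you can chain the two invocations so that the Release build is only produced when the tests pass:

# run the tests first (built in the scheme's test configuration, typically Debug),
# then produce the releasable build only if they succeed
xcodebuild test -scheme App -destination 'platform=iOS Simulator,name=iPhone 8' \
  && xcodebuild build -scheme App -configuration Release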
Related
I have a TeamCity build configuration that builds a C# project, runs some unit tests, and then does some extra things. My question is: Can I get information about my unit test run stored into build configuration variables (i.e. how many tests were run, how many were successful, how many failed, how many were skipped) so that I can then check these variables in a PowerShell script in later build steps and perform different actions depending on how many tests have passed?
AFAIK the best way is to request this information directly from the TeamCity server using its REST API (be aware that the build locator can be a little tricky to determine if the build is still running).
Alternatively, you can parse your NUnit test result file (or files, if you run more than one NUnit test runner step in your build) on the build agent machine itself.
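For example, here is a rough sketch of the REST approach using curl (the server URL, credentials, and build id are placeholders; I believe the build statistics include keys such as PassedTestCount, FailedTestCount, and IgnoredTestCount, but verify the names against your TeamCity version):

# per-build statistics, which include the test counters
curl -u user:password "http://teamcity.example.com/app/rest/builds/id:12345/statistics"

# or list the individual test occurrences for the build
curl -u user:password "http://teamcity.example.com/app/rest/testOccurrences?locator=build:(id:12345)"

A later PowerShell build step can then parse the returned XML and branch on the counts.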
I'm using C++98, and besides the standard library, I only have access to an old version of Boost (which thankfully has Boost Test). The documentation, though, is long and daunting, and I just don't know where to begin.
I have some experience doing unit testing in Java (and I'm looking for unit testing in C++), and I've seen test packages that contain the unit test code separate from the src package, and I also saw Where do you put your unit test? as well as Unit Testing with Boost and Eclipse. Their suggestions vary, and present reasoning for different packaging structures for separating test code from production code, or having them together.
Before I even started looking into Boost Test, I created this structure within Eclipse (perhaps erroneously):
-- ProjectName
|-- Debug
|-- src
|-- test
and I wrote another main method to run test functions. Eclipse didn't like that, because I had two main methods in the same project. I fumbled around in Project Properties and didn't find anything useful for separating my production code from test code when building (linking, really). My temporary fix was to just use g++ in the terminal to compile my "test" code ad hoc.
I found a suggestion in Boost::Test -- generation of Main()? that Boost actually generates its own main method, so this is currently my plan of attack for unit testing, especially since it makes a library of testing tools already available.
What is the conventional way of organizing unit tests for C++?
How do I get started with Boost Test? (Boost is already installed)
Is there anything I need to change in Eclipse to be able to run my Boost unit tests separate from my production code within the IDE? (One of the nice things about IntelliJ, with Java, is how it'll automatically run any main method you like with a click) -- The goal here to be able to build and run my tests within Eclipse.
Should my tests be in a separate Eclipse project? (this was suggested in an answer to the second SO question I linked)
Edit: I found this article for an introduction to Boost Test, but it doesn't discuss how it can be handled within an IDE setting.
I figured out how to do this on my own, and I'll document my solution for others who are just starting with C++ as well and need to do testing of their code. There currently is no good introduction anywhere that I could find. Here are the resources that I used though (and found useful):
C++ Unit Testing With Boost Test which introduces Boost Test in a much better fashion than Boost's documentation.
Where do you put your unit test? which discusses the most conventional ways of doing tests with C++.
unit test in eclipse g++ which discusses Eclipse build configurations which allow testing
The C++ convention for testing is like that of other languages: write your tests in a directory called test under the project. Using Boost Test requires that you link the unit test framework, -l boost_unit_test_framework, which in Eclipse means:
Right click on your project, go to Properties, C/C++ Build, Settings, Tool Settings, GCC C++ Linker, Libraries, and add the library name boost_unit_test_framework (append -mt to the name if you require multithreading; additionally, once the testing build configuration exists, you can go back and link the library in just that configuration -- it'll reduce the size of the executables for your other builds).
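For reference, the command-line equivalent of that linker setting is roughly the following (the include path reuses the placeholder project location from this answer, and the file names follow the example used later):

g++ -I /.../workspace/ProjectName -o run_tests test/unit_tests.cpp src/some_object.cpp -lboost_unit_test_framework
./run_tests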
To be able to run the unit tests in Eclipse separate from your main method, we need to establish a new build configuration. That way, Eclipse knows to exclude your source file with a main method when you're executing your tests.
Click on Project, Build Configurations, Manage..., select New..., and call it Test (or anything other than test, so it doesn't clash with the test source directory). Choose your existing configuration as the base so that it inherits properties from the production build.
Next, we need to differentiate the build configurations from one another so when we build them, they actually correspond to the production and test builds.
Right click on test, Resource Configurations, Exclude from Build..., and select the builds that represent your production build (i.e. Debug and/or Release). Once done with that, right click on the source file containing your main method and exclude it from the Test build.
There are still some things we need to change. We don't have any test code yet, but even so our test build can't run, nor would it know about resources in src, because Eclipse doesn't automatically include those source files. They're practically invisible to your test code in test.
Right click on your project, go to Properties, C/C++ Build, Settings, Tool Settings, GCC C++ Compiler, Includes, and add the path /.../workspace/ProjectName.
Now to be able to run your test build in Eclipse, it needs to know what executable you're expecting the IDE to run.
Click on Run, Run Configurations..., and looking at your current run configuration, consolidate these settings by giving, for example, a debug build the name "Debug Build", the C/C++ Application "Debug/Artifact_Name", and the Build Configuration "Debug". Next, create a new run configuration, and call it something like "Test Build", set C/C++ Application to "Test/Artifact_Name", and ensure the build configuration is Test.
Now you'll be able to switch between running production code and test code by selecting either the "active" build configuration or running the correct run configuration.
Finally, here's an example of a unit test with Boost Test once all of this is set up:
// unit_tests.cpp
#define BOOST_TEST_DYN_LINK   // link against the shared-library build of the framework
#define BOOST_TEST_MODULE someModuleName
#include <boost/test/unit_test.hpp>
#include <src/some_object.h>

// Fixture: constructed fresh before, and destroyed after, every test case.
struct template_objects {
    some_object x;
    template_objects() {
        BOOST_TEST_MESSAGE("Setting up testing objects");
    }
    ~template_objects() {
        BOOST_TEST_MESSAGE("Tearing down testing objects");
    }
};

BOOST_FIXTURE_TEST_SUITE(testSuiteName, template_objects)

BOOST_AUTO_TEST_CASE(testCase1) {
    x.update();
    BOOST_CHECK(x.is_up_to_date());
}

BOOST_AUTO_TEST_CASE(testCase2) {
    BOOST_CHECK(x.is_not_up_to_date());
}

BOOST_AUTO_TEST_SUITE_END()
This demonstrates some critical things about using Boost Test:
defining BOOST_TEST_DYN_LINK is recommended; you need to define how the testing framework will be linked before including any of Boost's headers
you must give the "module" a name; it doesn't have to be the file name
to get automated setup and teardown of objects before entering test cases, Boost Test has fixtures, which recreate an object's known starting state for each test case
the struct is what groups these fixtures, and it's implied that your object should have a well-defined constructor and destructor for automatic scoping (if you didn't call new, you don't need delete in the teardown)
test suites are just a way of logically grouping your test cases (I haven't tested it yet, but you can probably break up your suites into multiple files for better logical separation)
Additional tidbit: to make Boost Test more verbose, go to your run configuration for the test build, and add the argument --log_level=test_suite.
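For example, with the run configuration described above, the manual command-line equivalent would be something like:

./Test/Artifact_Name --log_level=test_suite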
We are using Jenkins and the SBT plugin to build, test, and deploy our application. I see everywhere that people use the "clean" command before "test" or "stage", which leads to very long compile times.
Is it necessary to clean everything for each build? What is the risk of using the incremental compiler for production builds?
I think it's a matter of taste. For continuous integration builds, it's ok to rely on the incremental compiler; if something fails I can investigate quite quickly.
But if, for whatever reason, the incremental compiler decides to ignore some changes (it has already happened to me in development), you may end up with some difficult bugs to track down.
So, if you want something that's reproducible from a fresh code checkout, just run "clean" before everything else.
Most of the time, I configure Jenkins with two build tasks: test builds, triggered by code changes, and packaging builds targeted for deployment and triggered periodically (or manually in some cases, with automatic deployment).
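As a sketch, that separation boils down to two different sbt invocations (assuming the standard test and stage tasks mentioned in the question):

# CI job triggered by code changes: lean on the incremental compiler
sbt test

# deployment job: clean first so the artifact is reproducible
sbt clean test stage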
If the build script and/or command line to sbt have not changed then it should not be necessary to do clean.
I have been using Spark for several years, and it has one of the most complex sbt builds you can find. It works just fine to skip clean as long as the build process itself (as embodied in the project/sbt build artifacts and the sbt command line) is the same as in previous runs.
I'm playing around with Microsoft Test Manager 2013 (though it appears it is just MTM2012) to try to get a better understanding of test cases and test suites, as I want to use this at work. I was hoping that I could run a test suite against a build that is included in that test suite. That is what I WANT to do, but it could very well be wrong. So maybe a better scope of what I'm doing at work might lend itself to a better answer.
My company makes tablet PCs. I write programs for those tablets. For the sake of argument, let's just say there are 5 tablets that run a similar array of OSes. Tablets 1, 2, 3, and 4 can run WinXP, WinXP Embedded, Win7, and Win7 Embedded, and Tablet 5 can run Win7, Win7 Embedded, and Win8 Embedded. Let's say I'm making a display test program. Naturally this display test will run differently on each tablet, but the program itself is supposed to handle that without worrying about the OS. So I wrote out a very simple test: open the program, try to open it again, verify only 1 instance, check the display, close the program.
I figured it would be good to make a test suite called "Complete Display Program Test" and put 5 sub test suites under it, one for each tablet. Then I moved the 5 test cases into a single test suite. I configured all test cases to only have the correct tablet/OS configuration. I queued a build and waited for it to finish, then attached that build to the main test suite. I then clicked on running a test for Tablet 1, but I didn't see the build attached in the test runner. I've looked around a little to see why or how and haven't found anything. The question is: how do I do that? Or, if you are scratching your head and wondering why in the world I am doing it this way, then by all means suggest another way. This is the second time I have ever looked into MTM, so I might not be doing it right.
Thank you for your time.
When running manual tests from MTM you will not see the build you are using in Test Runner.
But if you complete the test and set the test outcome, you will be able to check which build you ran the test against.
Just double-click on the test or select "View Results" to display the test results.
This column is not visible by default. You will have to right-click on the column header row and select the "Build number" column to make it visible.
You will also be able to see the build number in the "Analyse Test Runs" area.
Things are slightly different if you are running automated tests.
Consider following approach:
Automate your Test Cases
See How to: Associate an Automated Test with a Test Case for details.
Create a Build Definition that builds your application under test AND the assemblies containing your tests.
I strongly recommend building the application you want to test and the test assemblies in the same Build Definition. (You will see why a little bit later.)
Run this build definition and deploy the latest version of the application to the environment where you want to run the tests.
This is very important to understand: when you run automated tests, only the test assemblies are deployed automatically to the environment.
It's your job to deploy the right version of the application you are going to test.
Now you can run tests from MTM.
You can do it the way described by @AndrewClear in the comment to this answer: choose "Run with Options" when you're beginning a test run, and select the latest build.
Now the test assemblies containing the tests that automate your Test Cases will be deployed automatically to the test environment, and the tests will be executed.
This is the point where you should recognize why it is so important to build the application and the tests with a single Build Definition: since the build number you selected when starting the tests is stored along with the test results on TFS, you will later know what version of your application you were testing (assuming you deployed the right version, of course).
You could go a little bit further if you want even more automation. (This is the way I'm currently running automated tests.)
Use the Build-Deploy-Test template (Setting Up Automated Build-Deploy-Test Workflows is a good place to start reading).
Using this approach you will be able to automate deployment of the application you want to test.
There are a ton of questions here on SO regarding NUnit vs. MSTest, and I have read quite a few of them. I think my question here is slightly different enough to post separately.
When I started to use C#, I never even considered looking at MSTest because I was so used to not having it available when I was using C++ previously. I basically forgot all about it. :) So I started with NUnit, and loved it. Tests were very easy to set up, and testing wasn't too painful -- just launch the IDE and run the tests!
As many here have pointed out, NUnit has frequent updates, while MSTest is only updated as often as the IDE. That's not necessarily a problem if you don't need to be on the bleeding edge of TDD (which I'm not), but the problem I was having with frequent updates is keeping all of the systems up to date. I use about four or five different PCs daily, and while updating all of them isn't a huge deal, I was hoping for a way to make my code compile properly on systems with an older version of NUnit. Since my project referenced the NUnit install folder, when I upgraded the framework, any computers with the older framework installed would no longer be able to compile my project. I tried to combat the problem by creating a common folder in SVN that had just the NUnit DLLs, but even then it would somehow complain about the version number of the binary. Is there a way to get around this issue? This is what made me stop using NUnit the first time.
Then one day I remembered MSTest, and decided to give it a try. I loved that it was integrated into the IDE. CTRL-R,CTRL-A, all tests run. How simple! But then I saw that the types of tests available in MSTest were pretty limited. I didn't know how many I'd actually really need, but I figured I should go back to NUnit, and I did.
About now I was starting to have to debug unit tests, and the only way I could figure out how to do it in NUnit was to set NUnit as the startup application, then set breakpoints in my tests. Then in the NUnit GUI, I would run the tests to hit the breakpoints. This was a complete PITA. I then looked at the MSTest GUI again, and saw that I could just click Debug there and it would execute my tests! WOW! Now that was the killer feature that swayed me back in favor of MSTest.
Right now, I'm back using MSTest. Unfortunately, today I started to think about daily builds and did some searching on Tinderbox, which is the only tool I had heard of before for this sort of thing. This then opened up my eyes to other tools like buildbot and TFS. So the problem here is that I think MSTest is guaranteed to lock me into TFS for automated daily builds, or continuous integration, or whatever the buzzword is. My company can't afford to get locked into MS-only solutions (other than VS), so I want to examine other choices.
I'm perfectly fine to go back to NUnit. I'm not thrilled about rewriting 100+ unit tests, but that's the way it goes. However, I'd really love for someone to explain how to squash those two issues of mine, which in summary are:
how do I set up NUnit and my project so that I don't have to keep upgrading it on every system to make my project build?
how do I get easier debugging of unit tests? My approach was a pain because I'd have to keep switching between NUnit and the default app to test / run my application. I saw a post here on SO that mentioned NUnitIt on codeplex, but I haven't any experience with it.
UPDATE -- I'm comparing stuff in my development VM, and so far, NUnitit is quite nice. It's easy to install (one click), and I just point it to whatever NUnit binaries are in my SVN externals folder. Not bad! I also went into VS -> Tools -> Options -> Keyboard and changed my mapping for CTRL-R,CTRL-A to map to NUnitit.Connect.DebugGUI. Not perfect since I haven't figured out how to make NUnit automatically run the tests when it's opened, but it's pretty good. And debugging works as it should now!
UPDATE #2 -- I installed TestDriven.Net and gave it a quick run through. Overall, I like it a lot better than NUnitit, but at the moment, NUnitit wins because it's free, and since it also works with NUnit, it will allow me to "upgrade" to TestDriven.Net when the time comes. The thing I like most about TestDriven.Net is that when I double click on the failed test, it takes me right to the line in the test that had failed, while NUnit + NUnitit doesn't seem to be capable of this. Has anyone been able to make this link between the NUnit GUI and the VS IDE happen?
Many projects I've worked on have included a copy of the specific version of NUnit (or xUnit.net, whatever) in a "lib" or "external" or "libraries" folder in their source control, and referenced that location for building all of their tests. This greatly reduces the "upgrade everyone" headache, since you don't actually need to install NUnit or xUnit.net to use it.
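For example, with Subversion (which the questioner is already using), pinning a specific NUnit version might look something like this (the repository path is a placeholder, and the svn:externals syntax shown is the post-1.5 format):

# map a pinned copy of the NUnit binaries into the project's lib folder
svn propset svn:externals "^/tools/NUnit-2.5.10 lib/NUnit" .
svn commit -m "Pin NUnit binaries for the whole team"

Every machine then gets the same DLLs on checkout, and the test projects reference lib/NUnit/nunit.framework.dll rather than a per-machine install location.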
This approach will still let you use something like TestDriven.Net to execute the tests, run the tests in a debugger, etc.
For easier debugging (and running, too) of unit tests I recommend checking out TestDriven.Net. The "Test With > Debugger" feature is so handy. The personal version is free.
Have you played with the "Specific Version" property on the NUnit.framework reference? We keep ours set to true so that the tests that are coded for a given nunit version require that specific version to execute.
I'm not sure how it will handle things if, for example, you had 2.5 on your machine but another machine only had 2.4 -- would .NET happily bind to the 2.4 version, or will it only bind from earlier versions to later versions of an assembly (e.g. compiled against 2.4, but 2.5 available at runtime)?