Zend Session Handler Db Table being destructed during PHPUnit test - zend-framework

I've been doing unit testing and I've run into a weird problem.
I'm doing user authentication tests with some of my services/mappers.
I run about 307 tests all together right now. This only really happens when I run them all in one batch.
I try to instantiate only one Zend_Application object and use that for all my tests. I only instantiate it to take care of the db connection, the session, and the autoloading of my classes.
Here is the problem.
Somewhere along the line of tests, the __destruct method of Zend_Session_SaveHandler_DbTable gets called. I have NO IDEA why, but it does.
The __destruct method will render any writing to my session objects useless because they are marked as read-only.
I have NO clue why the destruct method is being called.
It gets called many tests before my authentication tests. If I run each folder of tests individually there is no problem. It's only when I try to run all 307 tests. I do have some tests that do database work but my code is not closing the db connections or destructing the save handler.
Does anyone have any ideas on why this would be happening and why my Zend_Session_SaveHandler_DbTable is being destructed? Does this have anything to do with the lifetime that it has by default?

I think what was happening is that PHPUnit was doing garbage collection. Whenever I ran all 307 tests, the garbage collector had to run, and it probably destroyed the Zend_Session_SaveHandler_DbTable for some reason.
This would explain why it didn't get destroyed when fewer tests were being run.
Or maybe it was PHP doing the garbage collection; that makes more sense.
Either way, my current solution is to create a new Zend_Application object for each test class, so that all the tests within that class have a fresh Zend_Application object to work with.
Here is some interesting information.
I put an echo statement in the __destruct method of the save handler.
The method was being called (X + 1) times, where X was the number of tests that I ran. If I ran 50 tests I got 51 echoes, for 307 tests 308 echoes, etc.
Here is the interesting part. If I ran only a few tests, the echoes would all come at the END of the test run. If I ran all 307 tests, 90 echoes would show up after what I assumed were 90 tests, and the rest would come at the end of the remaining tests. The total was X + 1 again, or in this case 308.
So this is where I'm assuming it has something to do with either the tearDown method that PHPUnit calls or the PHP garbage collector. Maybe PHPUnit invokes the garbage collector at teardown. Who knows; I'm just glad I got it working, as my tests were all passing beforehand.
If any of you have a better solution, let me know. Maybe I uncovered a previously unknown flaw in my code, PHPUnit, or Zend, and there is some way to fix it.

It's an old question, but I just had the same problem and found the solution here. I think it's the right way to solve it:
Zend_Session::$_unitTestEnabled = true;

Related

VSTS Test fails but vstest.console passes; the assert executes before the code for some reason?

Well, the system we have has a bunch of dependencies, but I'll try to summarize what's going on without divulging too many details.
The test assembly, a .dll, is what gets executed. A lot of these tests call an API.
In the problematic method, there are two API calls with an await on them: one to write a record to an external interface, and another to extract all records and read the last one, both via the API. The test simply checks whether writing the last record succeeded in an end-to-end context; that's why there's both a write and then a read.
If we execute the test in Visual Studio, everything works as expected. I also tested it manually via command lining vstest.console.exe, and the expected results always come out as well.
However, with the VS Test task in VSTS, it fails for some reason. We've been trying to figure it out, and eventually we printed the list from the 'read' part. It turns out the last record we inserted isn't in the data we pulled, but if we check the external interface via a different method, we can confirm the write actually happened. What gives? Why is VSTest getting what looks like an outdated set of records?
We also noticed two things:
1.) For the tests that passed, none of the Console.WriteLine outputs appear in the logs; they only appear for failed tests.
2.) Even though our Data.Should.Be call is at the very end of the TestMethod, the logs report the failure BEFORE the printed lines! And even then, the printing should happen after reading the list of records, yet when the prints do happen we're still missing the record we just wrote.
Is there some bottom-to-top thing we're missing here? It really seems like VSTS vstest is executing the assert before the actual code. The TestMethods do execute in the right order, though (the 4th test written top-to-bottom in the code is executed 4th rather than 4th-to-last), and we need them to run in order because some of the later tests depend on the earlier ones succeeding.
Anything we're missing here? I'd post source code, but there's a bunch of things I'd need to scrub first.
Turns out we were sorely misunderstanding what 'await' does. We're now using .Wait() on the culprit instead, and will also go back through the other tests to check for the same problem.
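The race described above can be sketched in Python's asyncio as an analogy (the project in question is C#, and all names below are made up): a write that is scheduled but never awaited may not have completed by the time the read and the assertion run.

```python
import asyncio

records = []  # stands in for the external interface


async def write_record(r):
    await asyncio.sleep(0.01)  # simulated API latency
    records.append(r)


async def broken_test():
    # Bug: the write is scheduled but never awaited, so the check
    # below runs before the record exists -- the "outdated" read.
    asyncio.create_task(write_record("new"))
    assert "new" not in records  # the write has not happened yet


async def fixed_test():
    # Awaiting the call guarantees the write completed before the read.
    await write_record("new")
    assert "new" in records


asyncio.run(broken_test())
asyncio.run(fixed_test())
```

In C#, the equivalent fix is to await the returned Task (or block on it with .Wait(), as the answer above chose) before reading the records back.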

How do I "slow down" the execution of tests in Scala? (transaction doesn't close in AfterExample)

After upgrading specs/specs2 to the newest stable specs2 version, I found a really weird issue in my Scala tests. In my "main" class (all of my test classes extend this one) I have before and after:
abstract class BaseSpec2 extends SpecificationWithJUnit with BeforeExample with AfterExample {
def before = startSqlTransaction
def after = rollbackSqlTransaction
[...]
}
before starts a transaction and after ends it (I don't think I need to show the code for how that works; if I'm wrong, please let me know).
I'm using JUnit in Eclipse to execute my tests in scala.
When I run them I get an SQL error in some of them (the results are not stable: sometimes a test that failed before now passes and a different one fails):
Deadlock found when trying to get lock; try restarting transaction
I think this error shows up because before starts a transaction before every test but after does not close it for some reason. I debugged rollbackSqlTransaction and startSqlTransaction and found that:
- When I start e.g. 5 tests, the transaction opens 5 times but closes only once.
- When I added an empty step between those 5 tests, everything worked fine.
- When I have more than 5 tests it gets even weirder: e.g. the transaction starts 4 times, then closes, then starts, then closes, etc. With 40 tests it opened 40 times and closed 29 times (that isn't stable either).
In my opinion the tests are running so fast that the executor is not able to close the transactions, but I might be wrong. Is there anything I can put in my code to slow them down? Or am I doing something else wrong?
Also, when I'm running my tests it seems like a few of them start at the same time (they aren't going one by one), but that might just be an illusion in Eclipse.
Thanks for any answers, and sorry for my bad English.
I think that transactions are not being closed properly because examples are being executed concurrently. Just add sequential at the beginning of your specification and things should be fine.

pytest: are pytest_sessionstart() and pytest_sessionfinish() valid hooks?

Are pytest_sessionstart(session) and pytest_sessionfinish(session) valid hooks? They are not described in the dev hook docs or the latest hook docs.
What is the difference between them and pytest_configure(config)/pytest_unconfigure(config)?
In the docs it is said:
pytest_configure(config) is called after command line options have been parsed and all plugins and initial conftest files have been loaded.
and
pytest_unconfigure(config) is called before the test process is exited.
The session hooks are the same thing, right?
Thanks!
The bad news is that the situation with sessionstart/configure is not very well specified. pytest_sessionstart in particular is not much documented, because its semantics differ depending on whether you are in the xdist/distributed case or not. One can distinguish these situations, but it's all a bit too complicated.
The good news is that pytest-2.3 should make things easier. If you define a fixture with scope="session", you can implement a fixture that is called once per process within which tests execute.
For distributed testing, this means once per test slave. For single-process testing, it means once for the whole test run. In either case, if you do a "--collectonly" run, or "-h" or other options that do not involve the running of tests, then fixture functions will not execute at all.
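As a minimal sketch of that fixture approach, in modern pytest syntax (yield fixtures arrived after the 2.3 release mentioned above; the resource names here are made up):

```python
import pytest


# A session-scoped fixture runs once per test process (once per
# slave under pytest-xdist), which covers most uses of the old
# pytest_sessionstart/pytest_sessionfinish hooks.
@pytest.fixture(scope="session")
def backend():
    resource = {"open": True}  # hypothetical one-time setup
    yield resource             # code after the yield is session teardown
    resource["open"] = False   # runs once, at the end of the session


def test_backend_is_ready(backend):
    # Every test requesting the fixture shares the same instance.
    assert backend["open"]
```

Because the fixture only runs when a test actually requests it, collection-only runs never execute it, matching the behaviour described above.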
Hope this clarifies.

NUnit- Object dispose

I have a test application that contains 29 tests inside a single TestFixture. I have defined a single TestFixtureSetUp and TestTearDown. Each test internally creates many objects, which in turn contain many threads. Until now I haven't used IDisposable.
My question: will the objects be disposed by NUnit after each test completes? Please correct me if I am wrong.
Thanks,
Vigi
AFAIK the objects will be collected by the garbage collector non-deterministically. Nothing special is done by NUnit to dispose of anything created in a test; I've often had situations where I've crashed the test runner this way. You'll probably want to manage and destroy the threads inside each test.

How should I deal with failing tests for bugs that will not be fixed

I have a complex set of integration tests that uses Perl's WWW::Mechanize to drive a web app and check the results based on specific combinations of data. There are over 20 subroutines that make up the logic of the tests, loop through data, etc. Each test runs several of the test subroutines on a different dataset.
The web app is not perfect, so sometimes bugs cause the tests to fail with very specific combinations of data. But these combinations are rare enough that our team will not bother to fix the bugs for a long time; building many other new features takes priority.
So what should I do with the failing tests? It's just a few tests out of several dozen per combination of data.
1) I can't let it fail because then the whole test suite would fail.
2) If we comment them out, we lose that test for all the other datasets, too.
3) I could add a flag in the specific dataset that fails, and have the test not run if that flag is set, but then I'm passing extra flags all over the place in my test subroutines.
What's the cleanest and easiest way to do this?
Or are clean and easy mutually exclusive?
That's what TODO is for.
With a TODO block, the tests inside are expected to fail. Test::More will run the tests normally, but print out special flags indicating they are "todo". Test::Harness will interpret the failures as being ok. Should anything succeed, it will report it as an unexpected success. You then know the thing you had todo is done and can remove the TODO flag.
The nice part about todo tests, as opposed to simply commenting out a block of tests, is it's like having a programmatic todo list. You know how much work is left to be done, you're aware of what bugs there are, and you'll know immediately when they're fixed.
Once a todo test starts succeeding, simply move it outside the block. When the block is empty, delete it.
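The same idea exists outside Perl. As a rough analogy, pytest's xfail marker plays the role of a TODO block (this is a Python sketch, not part of the Test::More toolchain discussed here, and the bug number is invented):

```python
import pytest


def buggy_lookup():
    return 41  # known bug: should return 42


# xfail plays the role of TODO: the test still runs, its failure is
# reported as expected, and an unexpected pass (XPASS) tells you the
# bug was fixed and the marker can be removed.
@pytest.mark.xfail(reason="bug #1234: wrong result for this dataset")
def test_buggy_lookup():
    assert buggy_lookup() == 42
```

As with TODO blocks, the suite stays green while the bug exists, but you keep a programmatic record of what remains to be fixed.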
I see two major options:
- disable the test (commenting it out), with a reference to your bug tracking system (i.e. a bug id), possibly keeping a note in the bug as well that there is a test ready for it
- move the failing tests into a separate test suite. You could even reverse the failing assertion, so that while the suite is green the bug is still there, and if it becomes red either the bug is gone or something else is fishy. Of course, a link to the bug tracking system and the bug is still a good thing to have.
If you actually use Test::More in conjunction with WWW::Mechanize, case closed (see the comment from daxim). If not, think of a similar approach:
# In your testing module
our $TODO;
# ...
if (defined $TODO) {
    # only print warnings instead of failing
}

# In a test script
local $My::Test::TODO = "This bug is delayed until iteration 42";