Unit Testing for pgxpool - postgresql

I am looking to write unit tests for Go code that uses pgxpool to interact with a postgres database. Is there a test framework that will stand up a dummy or mock database to test on?
I am aware of a package called pgxpoolmock. The problem is that there is no way to test actual production code, since a pool from pgxpool cannot be cast to the pgxpoolmock type (or can it?)

pgxpoolmock uses an interface, pgxpoolmock.PgxPool. This interface has all the methods that are used on the pgxpool.Pool struct. pgxpoolmock.MockPgxPool implements that interface, and pgxpoolmock.NewMockPgxPool(ctrl) returns a struct that implements pgxpoolmock.PgxPool. So if you want to use pgxpoolmock, you have to use pgxpoolmock.PgxPool in your application code, not just in test code.

Related

Citrusframework - Java action - Get result

Besides REST API calls, I need to call my own Java class, which basically does something that I want to confirm later in the test via REST API calls.
When calling my Java class, there is an expected behavior: it may fail or not fail, depending on the actual test case.
Is there any way to code this expectation into my test class:
java("com.org.xyz.App").method("run").methodArgs(args).build();
As this is the main class, which should later be executed in an automated fashion, I would prefer to validate the return code.
However, I'm looking for any possible way (exception assertion, stdout check, ...) to verify the status of the program.
As you are using the Java DSL to write test cases, I would suggest going with a custom test action implementation and/or initializing your custom class directly in the test method and calling the method as you would with any other API.
You can wrap the custom code in a custom AbstractTestAction implementation, as you then have access to the TestContext and your custom code is integrated into the test action sequence.
The java("com.org.xyz.App").method("run").methodArgs(args) API exists only for compliance with the XML DSL, where you do not have the opportunity to initialize your own Java class instances. It is way too complicated for your requirement, in my opinion.
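Calling the class directly in the test method can be sketched in plain Java. Here App and its run method are invented stand-ins for com.org.xyz.App, and the manual check stands in for whatever assertion API your test framework provides:

```java
public class Main {

    // Stand-in for com.org.xyz.App: run() returns an exit code.
    static class App {
        int run(String[] args) {
            // Pretend logic: succeed when arguments are supplied.
            return args.length > 0 ? 0 : 1;
        }
    }

    public static void main(String[] args) {
        // Instead of the java(...) DSL, instantiate the class directly
        // and assert on the return code.
        App app = new App();
        int rc = app.run(new String[] {"input"});
        if (rc != 0) {
            throw new AssertionError("expected return code 0, got " + rc);
        }
        System.out.println("return code = " + rc);
    }
}
```

Because you hold the instance yourself, you can equally well wrap the call in a try/catch to assert on exceptions rather than on a return code.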

How do I undo a Setup call for a moq Mock?

This might be a special use case that I am dealing with here. Here is what my simple C# NUnit test that uses Moq looks like:
Mock<ISomeRepository> mockR = new Mock<ISomeRepository>();
mockR.Setup(x => x.GetSomething()).Returns(new Something { a = 1, b = 2 });
// use the mocked repository here
Now, later in this same unit test or in another test case, I want to invoke the real implementation of the method GetSomething() on this mockR object.
Is there a way to do that? My repository is a singleton at heart, so even if I create a new object, the GetSomething method still returns the same mocked object.
That would largely depend on your implementation of GetSomething, which you're not showing here ;). Also, I'm not sure the setup as posted is even valid; shouldn't the .Setup(..).Returns(..) be well-formed there?
Mocks are used to represent the dependencies of a class, allowing that class to be tested without using its actual dependencies. Alternatively, you can write tests that involve the actual dependencies.
But using a mocked dependency and the real dependency within the same unit test suggests you're not clear about what your test is testing.
If it's another test case, it shouldn't be a problem either. Tests should not impact one another, so if you set up the class under test separately, that should be fine, even with a singleton.
I'm assuming that you're injecting the singleton dependency. If not, do that.

ASP Boilerplate problems using Effort in unit testing with EFProf (Entity Framework Profiler)

I'm having issues using EFProf (http://www.hibernatingrhinos.com/products/EFProf) with ASP Boilerplate (http://www.aspnetboilerplate.com/).
For unit testing, ASP Boilerplate uses Effort (https://github.com/tamasflamich/effort) for mocking the database in-memory.
If I run the unit tests without adding the reference to EFProf, the tests run correctly (green).
If I add the initialization line:
HibernatingRhinos.Profiler.Appender.EntityFramework.EntityFrameworkProfiler.Initialize();
in either my test base constructor or my application project's Initialize() method, I get the following error:
Castle.MicroKernel.ComponentActivator.ComponentActivatorException
ComponentActivator: could not instantiate MyApp.EntityFramework.MyAppDataContext
The inner exception has the relevant information:
Error: Unable to cast object of type 'Effort.Provider.EffortConnection' to type 'HibernatingRhinos.Profiler.Appender.ProfiledDataAccess.ProfiledConnection'.
Is Effort just not compatible with EFProf? Or am I doing something blindingly obvious wrong in my initialization?
Answering my own question: Effort fakes the DbContext object but does not actually generate SQL for the in-memory database, so there is nothing for profilers to intercept. This is also the reason the CommandText is always null when using EF6's Database.Log with Effort.
I am going to try using Moq with EF6 for an in-memory database implementation for testing, as an alternative to ASP Boilerplate's testing project that utilizes Effort, per this article: https://msdn.microsoft.com/en-us/library/dn314429(v=vs.113).aspx

Cucumber's AfterConfiguration can't access helper modules

I have a modular Sinatra app without a DB, and in order to test memcache, I have some test files that need to be created and deleted on the file system. I would like to generate these files in an AfterConfiguration hook using some helper methods (which live in a module shared with RSpec, which also needs to create and delete these files for testing). I only want to create them once, at the start of Cucumber.
I do not seem to be able to access the helpers from within AfterConfiguration, which lives in "support/hooks.rb". The helpers are accessible from Cucumber's steps, so I know they have been loaded properly.
This previous post seems to have an answer: Want to load seed data before running cucumber
The second example in that answer seems to say my modules should be accessible to my AfterConfiguration block, but I get "undefined method `foo' for nil:NilClass" when attempting to call the helper method foo.
I can pull everything out into a rakefile and run it that way, but I'd like to know what I'm missing here.
After digging around in the code, it appears that AfterConfiguration not only runs before any features are loaded, but also before World is instantiated. Calling self.class inside the AfterConfiguration block returns NilClass, while calling self.class inside any other hook, such as a Before, returns MyWorldName. In retrospect this makes sense, as every feature is run in a separate instance of World.
This is why helpers defined as instance methods (i.e. def method_name) are unknown there. Changing my methods to module methods (i.e. def ModuleName.method_name) allows them to work, since they really are module methods anyway.

Issue with GHUnit Testing for iPhone

I am working on unit tests using the third-party framework GHUnit. I created a project and added the GHUnit framework along with the other frameworks that are needed.
I created one class called TestCases, in which I import the GHUnit library and the class that needs test cases.
I need to write test cases for 40 classes.
Do I need to write all the test cases in one single class,
or do I need to create a separate class for each set of test cases?
If the latter: when I try to create separate classes testCase1, testCase2, ..., testCase40, the runner does not show them.
It shows me a table view and a run button with only the first test class's methods; it does not show the methods of the remaining test classes.
Please advise what I need to do in this situation.
Thanks in advance, all.
Separate unit testing (testing functionality) from integration testing (testing the complete system working together).
UNIT TESTING (for each of the 40 classes):
Usually you write a different test class for each class, so that if there is a single change in any class you can test it specifically by running that particular test class. So whether there are one, 40, or hundreds of classes, it is better to write unit tests for each of them to ensure their functionality.
In each test class, write different test cases for different pieces of functionality, so that it is easy to identify where an error comes from (for a third person, not the one who developed the code and wrote the test cases for it) and to manage them.
Each test should cover only one case; write different test cases for the different functional behaviors of each function. This may result in 100 test cases in a single class when testing a class with 10 functions, but that is good.
INTEGRATION TESTING (for testing the dependent functionality of the 40 classes):
When it comes to integration testing, write test cases for the different behaviors of the complete system in a single class, with different possibilities (test cases).
And finally: "Spend more time on testing than on coding."
Also ensure that test coverage of the code is between 90% and 100%.