This is regarding an issue I have been facing for some time. Though I have found a solution, I would really like to get some opinions on the approach taken.
We have an application which receives messages from a host, does some processing, and then passes the message on to an external system. The application is developed in Java and has to run on two OS/DB combinations: Linux/Oracle and HP NonStop (Tandem)/SQL/MX.
I have developed a test automation framework written in Perl. The script traverses directories (specified as an argument to the script) and executes the test cases found under them. Test cases can be organized into directories by functionality; this approach was taken so that a specific piece of functionality can be checked on its own in addition to running the entire regression suite. For verification of the test results, the script reads test-case-specific input files containing SQL queries.
On Linux/Oracle, the Perl DBD/DBI interface is used to query the Oracle database.
When this automation tool was run on Tandem, I discovered that there was no DBD/DBI driver for SQL/MX. When we contacted HP, they told us it would be a while before they developed DBD/DBI drivers for SQL/MX.
To circumvent this issue, I developed a small Java application which accepts a DB connection string, user name, password, and various other parameters. This Java app is now responsible for the test case verification functionality.
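For illustration, a minimal sketch of the kind of JDBC verification helper described (the class name, argument order, and output format are assumptions, not the actual code):

```java
// Hypothetical sketch: run a SQL query via JDBC and print the rows so a
// driver script (here, the Perl framework) can diff them against expected
// output. Requires the relevant JDBC driver (Oracle, SQL/MX, ...) on the
// classpath.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DbVerifier {
    public static void main(String[] args) throws Exception {
        String url = args[0];      // e.g. jdbc:oracle:thin:@host:1521:SID
        String user = args[1];
        String password = args[2];
        String query = args[3];

        try (Connection conn = DriverManager.getConnection(url, user, password);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(query)) {
            int columns = rs.getMetaData().getColumnCount();
            while (rs.next()) {
                StringBuilder row = new StringBuilder();
                for (int i = 1; i <= columns; i++) {
                    if (i > 1) row.append('|');
                    row.append(rs.getString(i)); // pipe-separated row dump
                }
                System.out.println(row);
            }
        }
    }
}
```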
I must say it meets our current needs, but something tells me (I do not know what) that the approach taken is not a good one, even though I now have the flexibility of running this automation against any DB that has a JDBC driver.
Can you please provide feedback on the above approach and suggest a better solution?
Thanks in advance
The question is a bit too broad to comment usefully on except for one part.
If the project is in Java, write the tests in Java. Writing the tests in a different language adds all sorts of complications.
You have to maintain another programming language and its attendant libraries. They can have different caveats and bugs for the same actions, such as the missing database driver you ran into in one environment.
Writing the tests in a different language from the one the project is developed in drives a wedge between testing and development. Developers will not feel responsible for participating in the testing process because they don't even know the language.
With the tests written in a different language, they cannot leverage any work which has already been done. Basic code to access and work with the data and services has to be written all over again, doubling the work and doubling the bugs. If the project code changes its APIs or data structures, the test code can easily fall out of sync, creating extra maintenance hassle.
Java already has well-developed testing tools to do what you want. The whole structure of running specific tests vs. the whole test suite is built into frameworks like JUnit.
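To illustrate, a minimal JUnit 4 sketch; the class names are hypothetical stand-ins for the per-directory test cases described in the question:

```java
// MessageRoutingTest.java: one test class per functional area (the names
// are invented, standing in for the framework's per-directory test cases).
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class MessageRoutingTest {
    @Test
    public void forwardsProcessedMessage() {
        // Replace with a real call into the application under test.
        assertEquals("EXPECTED", "EXPECTED");
    }
}

// RegressionSuite.java (separate file): groups functional areas, so you can
// run either a single test class or the entire regression suite.
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({ MessageRoutingTest.class /*, other area test classes */ })
public class RegressionSuite { }
```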
To underscore the point: I wrote Test::More, and I'm recommending you not use it here.
Related
I am working on a JPA project and I want to have unit tests (although, since a database is required, they will really be more like integration tests).
What is the best way to test a JPA project? Can JUnit do that? Is there a better way?
Thank you very much
You have given limited information on the tools/frameworks you are using and asked a very general question, but I will give a quick answer on the points you raise. These are just pointers, however; I believe you need to do a good bit more leg-work to figure out what is best for your particular project.
JUnit allows you to call your class methods with specific parameters and examine the return values. The returned value may be an entity that should have certain fields at certain values, a list of entities with certain expected field values, exceptions, etc. (whatever your methods return). You can run your tests as you introduce new functionality, and re-run them to check for regressions as development proceeds. You can easily test edge cases and non-nominal behaviour. Getting JUnit up and running in Java SE/EE is quite straightforward, so that could be a good option for you to get stuck in with testing. It is one of the quicker ways I use to test new functionality.
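As a rough illustration, a hedged JUnit 4 sketch around a JPA entity; the "test-pu" persistence unit and the Customer entity are assumptions invented for the example:

```java
// Customer.java: a minimal entity so the example is self-contained.
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Customer {
    @Id
    private Long id;
    private String name;

    public String getName() { return name; }
}

// CustomerDaoTest.java (separate file): each test gets a fresh EntityManager
// and asserts on field values of the returned entity.
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CustomerDaoTest {
    private static EntityManagerFactory emf;

    @BeforeClass
    public static void start() {
        // "test-pu" is an assumed persistence unit pointing at a test database.
        emf = Persistence.createEntityManagerFactory("test-pu");
    }

    @AfterClass
    public static void stop() {
        emf.close();
    }

    @Test
    public void findReturnsExpectedFieldValues() {
        EntityManager em = emf.createEntityManager();
        try {
            Customer c = em.find(Customer.class, 1L); // row 1 seeded beforehand
            assertEquals("Alice", c.getName());
        } finally {
            em.close();
        }
    }
}
```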
Spring/MVC: using an MVC framework can certainly be useful. I have used JSF/PrimeFaces, but principally because the application was to be a JSF application, and such development tests gave confidence that the 'Model' layer provided what was needed to the rest of the framework. So this provides some confidence in the model/JPA/DB layers (it is certainly nice to see the data that is delivered), but it does not provide the flexible, nimble, targeted testing you might expect from JUnit.
I think DbUnit might be something to look at once you've made some progress with JUnit.
See http://dbunit.sourceforge.net/
DbUnit is a JUnit extension (also usable with Ant) targeted at database-driven projects that, among other things, puts your database into a known state between test runs. This is an excellent way to avoid the myriad of problems that can occur when one test case corrupts the database and causes subsequent tests to fail or exacerbate the damage.
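To give a feel for it, a minimal DbUnit sketch; the H2 connection details and the dataset.xml file are assumptions for illustration:

```java
// CLEAN_INSERT wipes the tables referenced by the dataset and reloads them,
// putting the database into a known state before each test.
import java.io.File;

import org.dbunit.IDatabaseTester;
import org.dbunit.JdbcDatabaseTester;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.Before;
import org.junit.Test;

public class CustomerDatabaseIT {
    private IDatabaseTester tester;

    @Before
    public void resetDatabase() throws Exception {
        // Assumed in-memory H2 test database; swap in your own driver/URL.
        tester = new JdbcDatabaseTester(
                "org.h2.Driver", "jdbc:h2:mem:testdb", "sa", "");
        IDataSet dataSet = new FlatXmlDataSetBuilder().build(new File("dataset.xml"));
        tester.setDataSet(dataSet);
        tester.setSetUpOperation(DatabaseOperation.CLEAN_INSERT);
        tester.onSetup(); // database now matches dataset.xml exactly
    }

    @Test
    public void queriesSeeOnlyTheSeededRows() throws Exception {
        // Run JPA/JDBC code here against the freshly seeded database.
    }
}
```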
I'm currently working on an MVC 5 project using Entity Framework 5 (I may switch to 6 soon). I use database-first and MySQL, with an existing database of about 40 tables. This project started as a proof of concept, and my company has now decided to go ahead with the software I'm developing. I am struggling with the testing part.
My first idea was to rely mostly on integration tests. That way I felt I could test my code and also my underlying database. I created a script that dumps the existing database schema into a "test database" in MySQL. I always start my tests with a clean database containing no data, and each test creates and deletes a small amount of data. The problem is that running the tests takes a fair amount of time, and I run them very often.
I am thinking of replacing my integration tests with unit tests to speed up test runs. I would "remove" the test database and use mocks instead. I have tested a few methods this way and it seems to work great, but I'm wondering:
Do you think mocking my database can "hide" bugs that occur only when my code runs against a real database? Note that I don't want to test Entity Framework (I'm sure the fine people at Microsoft did a great job on that), but could my code run fine against mocks and break against MySQL?
Do you think going from integration testing to unit testing is a kind of "downgrade"?
Do you think dropping integration testing and adopting unit testing for speed considerations is OK?
I'm aware that frameworks exist that run the tests against an in-memory database (e.g. the Effort framework), but I don't see the advantage of this over mocking. What am I missing?
I'm aware that this kind of question is prone to "it depends on your needs" responses, but I'm sure some of you have been through this and can share your knowledge. I'm also aware that in a perfect world I would do both (tests using mocks and tests using the database), but I don't have that kind of time.
As a side question, what tool would you recommend for mocking? I was told that Moq is a good framework but that it's a little bit slow. What do you think?
Do you think mocking my database can "hide" bugs that occur only when my code runs against a real database? Note that I don't want to test Entity Framework (I'm sure the fine people at Microsoft did a great job on that), but could my code run fine against mocks and break against MySQL?
Yes, if you only test your code using mocks, it's very easy to gain false confidence in your code. When you mock the database, what you're doing is saying "I expect these calls to take place". If your code makes those calls, it'll pass the test; but if they're the wrong calls, it won't work in production. At a simple level, if you add or remove a column in your database, the database interaction may need to change, but that change is hidden from your tests until you update the mocks.
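To make that concrete: this thread is about .NET, but the blind spot is language-agnostic, so here is a hedged Java/Mockito sketch (the repository interface and column name are invented). The mock-based check passes even though the real database would reject the call:

```java
// Hypothetical low-level repository API invented for this example.
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

interface OrderRepository {
    void save(String column, Object value);
}

public class MockBlindSpotExample {
    public static void main(String[] args) {
        OrderRepository repo = mock(OrderRepository.class);

        // Code under test writes to a column that was dropped from the real DB.
        repo.save("legacy_status", "OPEN");

        // The mock-based check passes, because the expected call took place...
        verify(repo).save("legacy_status", "OPEN");
        // ...but against MySQL the same statement would fail with
        // "unknown column 'legacy_status'".
    }
}
```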
Do you think going from integration testing to unit testing is a kind of "downgrade"?
It's not a downgrade; it's different. Unit testing and integration testing have different benefits and in most cases will complement each other.
Do you think dropping integration testing and adopting unit testing for speed considerations is OK?
"OK" is very subjective. I'd say no; however, you don't have to run all of your tests all of the time. Most testing frameworks (if not all) allow you to categorise your tests in some way. This lets you create subsets of your tests: you could, for example, have a "DatabaseIntegration" category for all of your database integration tests, or "EndToEnd" for full end-to-end tests.

My preferred approach is to have separate builds. The usual/continuous build that runs before/after each check-in executes only the unit tests; this gives quick feedback and validation that nothing has broken. A less frequent daily/overnight build runs the slower, repeatable integration tests in addition to the unit tests. I also tend to run the integration tests for the areas I've been working on before checking in code, if there's a chance the change impacts the integration.
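For what categorisation can look like in code, a JUnit 4 sketch (in Java, to match the other examples on this page; the category and test names are invented):

```java
// DatabaseIntegration is just a marker interface used as a JUnit 4 category.
import org.junit.Test;
import org.junit.experimental.categories.Category;

interface DatabaseIntegration { }

public class OrderServiceTest {
    @Test
    public void calculatesTotalsInMemory() {
        // Fast unit-level check: runs in every build.
    }

    @Category(DatabaseIntegration.class)
    @Test
    public void roundTripsAnOrderThroughTheRealDatabase() {
        // Slow test: excluded from the continuous build, run nightly.
    }
}
```

Build tooling such as Maven Surefire can then include or exclude the DatabaseIntegration category per build via its groups/excludedGroups settings.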
I'm aware that frameworks exist that run the tests against an in-memory database (e.g. the Effort framework), but I don't see the advantage of this over mocking. What am I missing?
I haven't used them, so this is speculation. I imagine the main benefit is that rather than having to simulate the database interaction with mocks, you instead set up the database and assert on the resulting state. The tests become less about how you did something and more about what data moved. On the face of it, this could lead to less brittle tests; however, you're effectively writing integration tests against another data provider that you're not going to use in production. Whether that's the right thing to do is, again, very subjective.
I guess the second benefit is that you don't necessarily need to refactor your code to take advantage of the in-memory database. If your code wasn't constructed to support dependency injection, there is a good chance you will need to do some level of refactoring to support mocking.
I'm also aware that in a perfect world I would do both (tests using mocks and tests using the database), but I don't have that kind of time.
I don't really understand why you feel this is the case. You've already said that you have integration tests that you're planning to replace with unit tests. Unless you need to do major refactoring to support the unit tests, your integration tests should still work. You don't usually need as many integration tests as unit tests, since the unit tests verify the functionality and the integration tests verify the integration, so the overhead of keeping them should be relatively small. Using categorisation to decide which tests to run will reduce the time impact of running them.
As a side question, what tool would you recommend for mocking? I was told that Moq is a good framework but that it's a little bit slow. What do you think?
I've used quite a few different mocking libraries, and for the most part they are all very similar. Some things are easier in different frameworks, but without knowing what you're doing it's hard to say whether you will notice. If you haven't built your code with dependency injection in mind, you may find it challenging to get your mocks to where you need them.
Mocking of any kind is generally quite fast: you're usually (unless you're using partial mocks) removing all of the functionality of the class/interface you're mocking, so it's going to perform faster than your normal code. The only performance issues I've heard about are with MS Fakes/Shims: sometimes (depending on the complexity of the assembly being faked) it can take a while for the fake assemblies to be generated.
The two frameworks I've used that are a bit different are MS Fakes/Shims and Typemock. The MS version requires a certain edition of Visual Studio, but it lets you generate fake assemblies with shims for certain types of object, which means you don't have to pass your mocks from your test through to where they're used. Typemock is a commercial solution that uses the profiling API to inject code while your tests are running, which means it can reach parts other mocking frameworks can't. Both are particularly useful if you've got a codebase that wasn't written with unit testing in mind, as they can help bridge the gap.
I usually build PHP forms and try to use good practices in them.
I'm concerned about how safe and error-free those forms really are, and I want to run tests simulating customer behavior. I do this manually, but I find it hard work, especially when the form is large, and I know there are many combinations I can't cover, so I usually end up finding bugs in production.
Is there a tool that does this? I have heard about Selenium; has anybody used it in the way I need? Or how can I create my own test tool that simulates user input at random?
By user input I mean: not filling in or checking all the fields, entering invalid data, using different setups (no JavaScript, different browser versions, ...), SQL injections, and more that I don't even know about.
You'll need to consider a combination of approaches here: good test case design, data-driving those tests with various input combinations, and an automation tool such as Selenium, WebDriver, Telerik's Test Studio (a commercial tool I help promote), or some other automation tool.
Design your test cases so that each focuses on a group of behavior (a successful-path case, a case validating rejection of invalid input, a case validating protection against SQL injection, etc.). Then you can look to (perhaps) data-drive those test cases with sets of inputs and expected results, randomizing as needed in your test case code.
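As a sketch of that shape, a hedged Selenium WebDriver example in Java; the URL, field names, and expected error message are assumptions:

```java
// One behaviour ("reject invalid email") data-driven over several inputs.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class SignupFormTest {
    public static void main(String[] args) {
        String[] badEmails = { "", "not-an-email", "a@b", "'; DROP TABLE users;--" };
        WebDriver driver = new FirefoxDriver();
        try {
            for (String email : badEmails) {
                driver.get("http://localhost/signup.php"); // assumed form URL
                driver.findElement(By.name("email")).sendKeys(email);
                driver.findElement(By.name("submit")).click();
                // "Invalid email" is an assumed validation message.
                if (!driver.getPageSource().contains("Invalid email")) {
                    System.out.println("FAIL for input: " + email);
                }
            }
        } finally {
            driver.quit();
        }
    }
}
```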
Most good functional automation tools support running the same test script in multiple browsers, which is a big help for multi-browser testing.
Above all, start your automation efforts with small steps and focus first on high-value tests. Don't spend time trying to automate everything because that costs you a lot of time.
Selenium is used to automate browsers in exactly the way you described.
It's used for what is called functional testing, where you test the external aspects of an application to ensure that they meet the specification.
It is most often combined with unit tests that cover the internal aspects, for example to verify that your application is safe against different forms of SQL injection.
Each programming language usually has several different frameworks for writing unit tests.
These are often used together with an approach called test-driven development (TDD), where you write the tests before the application code.
I am coding a client-server application using Eclipse's RCP.
We are having trouble testing the interaction between the two sides, as they both contain a lot of GUI code and provide no command-line or other remote API. Got any ideas?
I have about 1.5 years of experience with the RCP framework, and I really liked it. We simply used JUnit for testing...
It's a bit of a cliché, but if it's not easy to test, maybe the design needs some refactoring?
Java and the RCP framework provide great facilities for keeping GUI code and logic code separate. We used the MVC pattern with the Observer/Observable constructs that are available in Java...
If you don't know about the Observer/Observable constructs in Java, I would HIGHLY recommend you take a look at this: http://www.javaworld.com/javaworld/jw-10-1996/jw-10-howto.html. You will use them all the time, and your apps will be easier to test.
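A tiny sketch of the idea (names are illustrative; java.util.Observable is deprecated in recent JDKs but matches the era of the linked article). Because the model pushes changes to whoever registered, a test can subscribe a plain listener where the GUI normally would:

```java
// The model holds no GUI references; it just broadcasts state changes.
import java.util.Observable;
import java.util.Observer;

class ConnectionModel extends Observable {
    void setStatus(String status) {
        setChanged();            // mark that the state really changed
        notifyObservers(status); // push the update to every registered observer
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        ConnectionModel model = new ConnectionModel();
        // In production this observer would be a GUI view; in a test it can
        // simply record what it saw.
        Observer probe = (observable, arg) -> System.out.println("update: " + arg);
        model.addObserver(probe);
        model.setStatus("connected"); // prints: update: connected
    }
}
```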
As a former test and commissioning manager, I would strongly argue for a test API. It does not remove the need for user-interface testing, but it lets you add automated tests and non-regression tests.
If that is absolutely impossible, I would set up a test proxy (a minimal sketch follows the lists below), where you will be able to:
Do nothing (transparent proxy): your app should behave normally.
Spy on / log data traffic; add a filter mechanism so you don't capture everything.
Block specific messages (your filter system is very useful here).
Corrupt specific messages (this is more difficult).
If you need some sort of network testing:
Limit general throughput (some libraries do this)
Delay messages (same remark)
Change packet order (quite difficult)
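A minimal sketch of such a proxy in Java (the ports are assumptions): it relays bytes in both directions and logs traffic; blocking, delaying, or corrupting messages would hook in at the filter comment.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class TestProxy {
    public static void main(String[] args) throws Exception {
        try (ServerSocket listener = new ServerSocket(9000)) { // client connects here
            while (true) {
                Socket client = listener.accept();
                Socket server = new Socket("localhost", 9001);  // the real server
                pump(client.getInputStream(), server.getOutputStream(), "c->s");
                pump(server.getInputStream(), client.getOutputStream(), "s->c");
            }
        }
    }

    // Relay one direction on its own thread, logging each chunk.
    private static void pump(InputStream in, OutputStream out, String tag) {
        new Thread(() -> {
            try {
                byte[] buf = new byte[4096];
                int n;
                while ((n = in.read(buf)) != -1) {
                    System.out.println(tag + ": " + n + " bytes"); // spy/log hook
                    // Filter hook: drop, delay, or corrupt bytes here.
                    out.write(buf, 0, n);
                }
            } catch (Exception ignored) {
                // Connection closed; end this direction.
            }
        }).start();
    }
}
```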
Have you considered using a UI functional testing tool? You could check out HP's QuickTest Professional, which covers a wide variety of UI technologies.
We are developing a client-server application using EJB (J2EE) technology, Eclipse, and MySQL (database). Please suggest an open-source testing tool for functional testing.
Thanks,
Hitesh Shah
Separate your client-server communication into a pure logic module (or package). Test this separately, either against a test server or with mock objects.
Then have your UI actions invoke that communications layer. Also have a look at the command design pattern; using it may help you.
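A hedged sketch of how the command pattern can keep that layer testable without any GUI (all the interfaces here are invented for illustration):

```java
// The UI builds Command objects; tests execute the same commands against a
// test double instead of the real server.
interface ServerConnection {
    String call(String request) throws Exception; // the pure-logic transport
}

interface Command<T> {
    T execute(ServerConnection conn) throws Exception;
}

class LoginCommand implements Command<Boolean> {
    private final String user, password;

    LoginCommand(String user, String password) {
        this.user = user;
        this.password = password;
    }

    public Boolean execute(ServerConnection conn) throws Exception {
        return "OK".equals(conn.call("LOGIN " + user + " " + password));
    }
}

public class CommandDemo {
    public static void main(String[] args) throws Exception {
        ServerConnection fake = request -> "OK"; // test double, no GUI involved
        System.out.println(new LoginCommand("alice", "secret").execute(fake)); // true
    }
}
```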
Our company is currently writing a GUI automation testing tool for Compact Framework applications. We initially evaluated many tools, but none of them was right for us.
With the tool you can record test cases and group them into test suites. For every test suite an application is generated which launches the application under test and simulates user input.
In general the tool works fine, but since we use window handles to simulate user input, there are many things we can't do. For example, it is impossible for us to get the name of a control (we only get the caption).
Another problem with window handles is checking for a change. At the moment we simulate a click on a control and, depending on the result, we know whether the application has moved on to the next step.
Is there any other (simpler) way of doing such things (for example via the message queue, or anything else)?
Interesting problem! I've not done any low-level (think Win32) Windows programming in a while, but here's what I would do.
Use a named pipe and have your application listen on it. Using this named pipe as a communication medium, implement a really simple protocol whereby you can query the application for the name of a control given its HWND, or for other things you find useful. Make sure the protocol is rich enough that sufficient information is exchanged between your application and the test framework. Also make sure that the test framework does not trigger too much "special behavior" in the app, because then you wouldn't really be testing the features, but rather your test framework.
There are probably more elegant and cooler ways to implement this, but this is what I remember off the top of my head, using only simple Win32 API calls.
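To show the shape of such a query protocol, a hedged Java sketch with a localhost socket standing in for the named pipe (the request format, port, and lookup are invented):

```java
// The app under test answers simple queries such as "NAME <hwnd>" so the
// test framework can resolve control names it cannot get via window handles.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class ControlQueryServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket listener = new ServerSocket(9010)) {
            while (true) {
                try (Socket peer = listener.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(peer.getInputStream()));
                     PrintWriter out = new PrintWriter(peer.getOutputStream(), true)) {
                    String request = in.readLine(); // e.g. "NAME 0x00A1B2C3"
                    if (request != null && request.startsWith("NAME ")) {
                        out.println(lookUpControlName(request.substring(5)));
                    } else {
                        out.println("ERR unknown request");
                    }
                }
            }
        }
    }

    // Hypothetical: in the real app this would map a window handle to the
    // control's name via the UI framework.
    private static String lookUpControlName(String hwnd) {
        return "btnSubmit";
    }
}
```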
Another approach, which we have implemented for our product at work, is to record user events, such as mouse clicks and key events, in an event script. This should be rich enough that you can have the application play it back, artificially injecting those events into the message queue, and have it behave the same way it did when you first recorded the script. You are basically simulating the user when you play back the script.
In addition to that, you can record any important state (the user's document, preferences, GUI control hierarchy, etc.), once when you record the script and once when you play it back. This gives you two sets of data you can compare, to make sure, for instance, that everything stays the same. This solution gives you tests that are not easy to modify (you have to re-record if your GUI changes) but that provide awesome regression testing.
(EDIT: This is also a terrific QA tool during beta testing, for instance: just have your users record their actions, and if there's a crash, you have a good chance of easily reproducing the problem by just playing back the script)
Good luck!
Carl
If the automated GUI testing tool has knowledge of the framework the application is written in, it can use that information to produce better or more advanced scripts. TestComplete, for example, knows about Borland's VCL and WinForms, and it has advanced built-in support for testing applications built with Windows Presentation Foundation.
Use NUnitForms. I've used it with great success for single- and multi-threaded apps, and you don't have to worry about handles and stuff like that.
Here are some posts about NUnitForms worth reading
NUnitForms and failed DragDrop registration - problem of MTA vs STA
Compiled application exe GUI testing with NUnitForms
I finally found a solution for communicating between the testing application and the application under test: Managed Spy. It's basically a .NET application built on top of ManagedSpyLib.
ManagedSpyLib allows programmatic access to the Windows Forms controls of another process. For this it uses window hooks and memory-mapped files.
Thanks for all who helped me to get to this solution!
Managed Spy does not provide a solution for Compact Framework applications.
The company Jamo Solutions (www.jamosolutions.com) meets the requirements for automated testing on mobile devices, including .NET Compact Framework applications.