Our company is currently writing a GUI automation testing tool for Compact Framework applications. We initially evaluated many existing tools, but none of them was right for us.
With the tool you can record test cases and group them into test suites. For every test suite, an application is generated which launches the application under test and simulates user input.
In general the tool works fine, but because we use window handles to simulate user input, we are quite limited in what we can do. For example, it is impossible for us to get the name of a control (we only get the caption).
Another problem with window handles is checking for a change: at the moment we simulate a click on a control and, depending on the result, we know whether the application has moved on to the next step.
Is there any other (simpler) way of doing such things (for example via the message queue, or anything else)?
Interesting problem! I've not done any low-level (think Win32) Windows programming in a while, but here's what I would do.
Use a named pipe and have your application listen to it. Using this named pipe as a communication medium, implement a really simple protocol whereby you can query the application for the name of a control given its HWND, or for anything else you find useful. Make sure the protocol is rich enough that sufficient information is exchanged between your application and the test framework. Also make sure that the test framework does not trigger too much "special behavior" in the app, because then you wouldn't really be testing the features, but rather your test framework.
There are probably way more elegant and cooler ways to implement this, but this is what comes to mind off the top of my head, using only simple Win32 API calls.
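To make that concrete, here's a minimal sketch of the listener side running inside the application under test. Note the caveats: this uses System.IO.Pipes from the full desktop framework (the Compact Framework doesn't ship that namespace, so there you'd likely have to P/Invoke CreateNamedPipe or fall back to a local socket), and the "NAME <hwnd>" request format is just an invented example protocol:

```csharp
using System;
using System.IO;
using System.IO.Pipes;
using System.Windows.Forms;

// Runs inside the application under test, e.g. on a background thread
// started at app startup. Answers "NAME <hwnd>" queries from the test
// framework with the control's Name property.
class TestHook
{
    public static void Run()
    {
        while (true)
        {
            using (var pipe = new NamedPipeServerStream("MyAppTestHook"))
            {
                pipe.WaitForConnection();
                using (var reader = new StreamReader(pipe))
                using (var writer = new StreamWriter(pipe) { AutoFlush = true })
                {
                    // One request per connection, e.g. "NAME 0x001A0B2C".
                    string[] parts = (reader.ReadLine() ?? "").Split(' ');
                    if (parts.Length == 2 && parts[0] == "NAME")
                    {
                        IntPtr hwnd = (IntPtr)Convert.ToInt64(parts[1], 16);
                        Control control = Control.FromHandle(hwnd);
                        // (In real code, marshal onto the UI thread via Control.Invoke.)
                        writer.WriteLine(control != null ? control.Name : "<unknown>");
                    }
                }
            }
        }
    }
}
```

The test framework connects with a NamedPipeClientStream, writes the query line, and reads the answer back, which is exactly the kind of information window handles alone can't give you.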
Another approach, which we have implemented for our product at work, is to record user events, such as mouse clicks and key events, in an event script. The script should be rich enough that the application can play it back, artificially injecting those events into the message queue, and behave the same way it did when you first recorded the script. You basically simulate the user when you play back the script.
In addition to that, you can record any important state (user's document, preferences, GUI control hierarchy, etc.), once when you record the script and once when you play it back. This gives you two sets of data you can compare, to make sure, for instance, that everything stays the same. This solution gives you tests that are not easy to modify (you have to re-record if your GUI changes), but that provide awesome regression testing.
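On Windows, the injection half of that idea can be done by replaying recorded window messages. A minimal hedged sketch (desktop Win32 via P/Invoke; the RecordedEvent type and the pacing are invented for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;
using System.Threading;

// Invented record type: one captured input event.
struct RecordedEvent
{
    public IntPtr Hwnd;   // target window
    public uint Msg;      // e.g. WM_LBUTTONDOWN = 0x0201
    public IntPtr WParam;
    public IntPtr LParam; // for mouse messages: packed client coordinates
}

static class Player
{
    [DllImport("user32.dll")]
    static extern bool PostMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

    // Replays the script by injecting each message into the target window's
    // message queue, simulating the user who originally recorded it.
    public static void Play(IEnumerable<RecordedEvent> script)
    {
        foreach (RecordedEvent e in script)
        {
            PostMessage(e.Hwnd, e.Msg, e.WParam, e.LParam);
            Thread.Sleep(50); // crude pacing between events
        }
    }
}
```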
(EDIT: This is also a terrific QA tool during beta testing, for instance: just have your users record their actions, and if there's a crash, you have a good chance of easily reproducing the problem by just playing back the script)
Good luck!
Carl
If the automated GUI testing tool has knowledge of the framework the application is written in, it can use that information to create better or more advanced scripts. TestComplete, for example, knows about Borland's VCL and WinForms. Applications built using Windows Presentation Foundation have advanced support for this built in.
Use NUnitForms. I've used it with great success for single- and multi-threaded apps, and you don't have to worry about handles and the like.
Here are some posts about NUnitForms worth reading
NUnitForms and failed DragDrop registration - problem of MTA vs STA
Compiled application exe GUI testing with NUnitForms
I finally found a solution for communicating between the testing application and the application under test: Managed Spy. It's basically a .NET application built on top of ManagedSpyLib.
ManagedSpyLib allows programmatic access to the Windows Forms controls of another process. For this it uses window hooks and memory-mapped files.
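From what I remember of the accompanying MSDN article, usage looks roughly like the sketch below. I'm reconstructing the member names from memory, so treat them as approximate and verify against the actual ManagedSpyLib documentation:

```csharp
// Rough sketch only: the namespace and member names are approximate and
// must be checked against the real ManagedSpyLib API before use.
using System;
using Microsoft.ManagedSpy; // assumed namespace of ManagedSpyLib

class DumpControls
{
    static void Main()
    {
        // Walk the top-level managed windows of running processes.
        foreach (ControlProxy window in ControlProxy.TopLevelWindows) // approximate member
        {
            // Properties of the remote control are read by name - including
            // "Name", which is exactly what raw window handles couldn't give us.
            Console.WriteLine(window.GetValue("Name")); // approximate member
        }
    }
}
```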
Thanks to all who helped me get to this solution!
Managed Spy does not provide a solution for Compact Framework applications.
The company Jamo Solutions (www.jamosolutions.com) meets the requirements for automated testing on mobile devices, including .NET Compact Framework applications.
I was talking to a friend of mine who knows a lot about JS and wasm. He told me the technology goes far beyond the web, since it is basically a way to run near-native applications on devices without actually giving them access to the computer.
This means that third-party or untrusted code on a smartphone, for instance, cannot accidentally or intentionally change other apps or parts of the system.
This seemed to me like the perfect foundation for a plugin system for an application I am working on.
I asked him about it but he was unable to give me a clear answer.
So the question is: can I use WebAssembly outside of a web browser, with custom bindings, to safely allow users to extend the functionality of my application (a special image viewer) without sacrificing too much speed? It seems it should work using libnode or something similar, but is there a problem I might run into?
I don't know how much you know about WebAssembly, but it depends on what your plugins actually need to do. If they basically handle arrays and numeric data, without much interaction with the host application, then it might fit. But if you have heavy object handling, it will not fit at the moment. So for image processing it might be a perfect match, as it is used that way in some web examples. Also be aware that some WebAssembly-targeting toolchains are not suitable for non-web targets, as they generate JavaScript glue code for browsers alongside the .wasm file. Some wasm modules, for example, require that you call malloc and free for string handling; others have functions like new and gc for nearly the same purpose.
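If you end up hosting wasm from .NET rather than libnode, one embeddable runtime with a .NET binding is Wasmtime (the Wasmtime NuGet package). A minimal sketch, with the caveat that the binding's API has changed across versions so the exact calls may differ; the module name and exported function are invented:

```csharp
using System;
using Wasmtime;

class PluginHost
{
    static void Main()
    {
        using var engine = new Engine();
        // "plugin.wasm" is a placeholder for a user-supplied module that
        // exports an "apply" function taking and returning an i32.
        using var module = Module.FromFile(engine, "plugin.wasm");
        using var linker = new Linker(engine);
        using var store = new Store(engine);

        // Expose a host function the plugin may import as "env"."log".
        // This is the "custom bindings" part: the plugin sees only what you define.
        linker.Define("env", "log",
            Function.FromCallback(store, (int value) => Console.WriteLine($"plugin: {value}")));

        Instance instance = linker.Instantiate(store, module);
        var apply = instance.GetFunction<int, int>("apply");
        if (apply != null)
            Console.WriteLine(apply(42)); // run sandboxed plugin code
    }
}
```

Note how the host/guest boundary here only passes integers; passing images means sharing linear memory and doing your own (de)serialization, which is where the malloc/free bookkeeping mentioned above comes in.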
This is regarding an issue I have been facing for some time. Though I have found a solution, I would really like to get some opinions about the approach taken.
We have an application which receives messages from a host, does some processing and then passes each message on to an external system. This application is developed in Java and has to run on Linux/Oracle and on the HP NonStop Tandem/SQLMX OS/DB combination.
I have developed a test automation framework written in Perl. This script traverses directories (specified as an argument to the script) and executes the test cases found under them. Test cases can be organized into directories by functionality; this approach was taken so that a specific piece of functionality can be checked in addition to the entire regression suite. For verification of the test results, the script reads test-case-specific input files which contain SQL queries.
On Linux/Oracle, the Perl DBD/DBI interface is used to query the Oracle database.
When this automation tool was run on the Tandem, I found out that there was no DBD/DBI interface for SQLMX. When we contacted HP, they informed us that it would be a while before they developed DBD/DBI drivers for the SQLMX DB.
To circumvent this issue, I developed a small Java application which accepts a DB connection string, a user name, a password and various other parameters. This Java app is now responsible for the test case verification functionality.
I must say it meets our current needs, but something tells me (I do not know what) that the approach taken is not a good one, even though I now have the flexibility of running this automation against any DB which has a JDBC interface.
Can you please provide feedback on the above approach and suggest a better solution?
Thanks in advance
The question is a bit too broad to comment usefully on except for one part.
If the project is in Java, write the tests in Java. Writing the tests in a different language adds all sorts of complications.
You have to maintain another programming language and its attendant libraries. They can have different caveats and bugs for the same actions, as you ran into with the missing database driver in one of your environments.
Having the tests done in a different language than the project is developed in drives a wedge between testing and development. Developers will not feel responsible for participating in the testing process because they don't even know the language.
With the tests written in a different language, they cannot leverage any work which has already been done. The testers have to write, all over again, the basic code to access and work with the data and services, doubling the work and doubling the bugs. If the project code changes its APIs or data structures, the test code can easily fall out of sync, causing extra maintenance hassles.
Java already has well-developed testing tools to do what you want. The whole structure of running specific tests vs. the whole test suite is built into test frameworks like JUnit.
Just to underscore the point: I wrote Test::More, and I'm recommending you not use it here.
I took the time to set up some unit tests and configure the targets in Xcode, etc., and they're pretty useful for a few classes. However:
I want to test small UI pieces for which I don't want to launch the entire application. There is no concept of pass/fail: I need to "see" the pieces, and I can make dummy instances of all the relevant classes to do this. My question is: how can I set this up in Xcode?
I realize I could use another Xcode project for each class (or group of classes), but that seems a bit cumbersome. Another target for each?
I know that you're looking for an approach to testing UI components that doesn't require a fully functional application, but I've been impressed with what the new UI Automation instrument introduced in iOS 4.0 lets you do.
This instrument lets you use JavaScript scripts to interactively test your application's interface, and it does so in a way that does not require checking exact pixel values or positions on screen. It uses the accessibility hooks built into the system for VoiceOver to identify and interact with components.
Using this instrument, I have been able to script tests that fully exercise my application as a user would interact with it, as well as ones that hammer on particular areas and look for subtle memory buildups.
The documentation on this part of Instruments is a little sparse, but I recently taught a class covering the subject for which the video is available on iTunes U for free (look for the Testing class in the Fall semester). My course notes (in VoodooPad format) cover this as well. I also highly recommend watching the WWDC 2010 video session 306 - "Automating User Interface Testing with Instruments".
Well, you cannot really call showing a piece of GUI "testing", even if that GUI is part of a large application. What you can do here is create a separate executable target and write a small tool that reuses GUI components from your application and shows them to you based on input parameters. This eliminates the need for many different targets.
If you still insist on using unit tests, you can show your GUI for some period of time, for example 10 seconds. The test case will then run until the GUI is closed or the timeout elapses, so each test takes up to N seconds to execute.
This is a good question. I think you actually do not want to use unit tests for those 'visual confirmations'. Personally I usually write little test apps to do this kind of testing or development. I don't like separate targets in the same project so I usually just create a test project next to the original one and then reference those classes and resources using relative paths. Less clutter. And it is really nice to be able to test more complex user interface elements in their own little test environment.
I would take a two-level approach to UI "unit testing":
Although Cocoa/Cocoa Touch are still closer to the Model-View-Controller than the Model-View-ViewModel paradigm, you can gain much of the testability advantage by breaking your "View" into a "view model" and a "presenter" view (note that this is somewhat along the lines of the NSView/NSCell pair; Cocoa engineers had this one a long time ago). If the view is a simple presentation layer, then you can test the behavior of the view by unit testing the "view model".
To test the drawing/rendering of your views, you will have to either do human testing or do rendering/pixel-based tests. Google's Toolbox for Mac has several tools for doing pixel-by-pixel comparison of rendered NSViews, CALayers, UIViews, etc. I've written a tool for the Core Plot project to make dealing with the test failures and merging the reference files back into your unit test bundle a little easier.
For learning purposes I'm developing a class-generation application in C# and WinForms. I think it could be useful to include a command-line mode that allows the application to be used in scripts.
Is it good practice to include a command-line mode in my applications? Would it be better to have two different programs, one with a GUI and one for the command line?
Actually, having a C# application be both console and GUI is problematic. Console applications (/t:exe) are launched and the command prompt then waits for them to finish. GUI applications (/t:winexe) are launched by the command shell, which returns immediately. While you can create and run forms from a "console" application, it will always have a console window displayed in the background. Conversely, "Forms" applications don't have stdin, stdout and stderr connected and, while they can behave as command-line tools and process command arguments, they cause problems when embedded in scripts (because the standard input/output is not hooked up).
If you want to expose the functionality both from a GUI-driven application and for scriptable/pipeable batch processing, the best way is to compile your functionality into a class library, then build two separate applications (one GUI, one console) that leverage that library.
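For example (all names invented), the split could look like this:

```csharp
// ClassGenLib.dll - all the real functionality lives here.
namespace ClassGenLib
{
    public class Generator
    {
        public string GenerateClass(string className)
        {
            return "public class " + className + "\n{\n}\n";
        }
    }
}

// ClassGenCli.exe (/t:exe) - thin console front end, usable from scripts.
class CliProgram
{
    static int Main(string[] args)
    {
        if (args.Length != 1)
        {
            System.Console.Error.WriteLine("usage: classgencli <ClassName>");
            return 1;
        }
        System.Console.Write(new ClassGenLib.Generator().GenerateClass(args[0]));
        return 0;
    }
}

// ClassGenGui.exe (/t:winexe) - WinForms front end calling the same library
// from its event handlers, e.g.:
//   outputBox.Text = new ClassGenLib.Generator().GenerateClass(nameBox.Text);
```

Both executables stay trivial; everything worth testing sits in the library.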
I'm not a C# programmer, but when I program in C++, I find it most useful to:
1.) Create a shared library with both a C and a C++ API for performing the core app functionality.
2.) Create one or more command-line binaries accessible to the shell interpreter.
3.) Create a GUI application for typical end users, implemented with the library (not by invoking the binaries).
This separates the logic of the application from the interface to the application, and enables third-party developers to create alternative interfaces for the same application functionality. It also makes the app easy to script, while at the same time catering to typical end users who want a nice, shiny GUI.
Yes. If you think the program will be useful in a scripted environment then include a command line mode (without UI) so it can be used in scripts.
It doesn't have to be a separate application, but it can be. Whether you want to do that or not is entirely up to you. I'd imagine that if you had two applications they'd share the same logic assemblies but the interface (one a GUI the other a command line) would just be different.
I agree with michaelsafyan about creating a library with core functionality.
What I would add is that you should check out PowerShell cmdlets as well.
Much command-line activity will be migrating to PowerShell, and it brings a lot to the table.
http://en.wikipedia.org/wiki/Windows_PowerShell
I very often create such a utility as an API. If I need to use it from a simple command-line utility, that's easy - it just calls the API. If the command-line gets too complex, maybe it's time for a Winforms application - which can also call the API. If I wanted to use it from PowerShell, or from an MSBUILD task, those are still easy - they just call the API.
Creating an application on the Windows platform that behaves correctly as both a console and a GUI application can be problematic. It's an issue with the Windows kernel architecture, as the two are considered different types of application (they have a different subsystem, which you generally specify in the compiler or linker options). You can still manually redirect the I/O and open a console from a Win32 application via the Win32 function AllocConsole() and friends, but this has some issues too. See this Old New Thing post for more information.
If you want your utility/program to be usable from scripts, you can expose it via COM.
Many scripting languages on Windows have the ability to use COM objects directly.
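For instance, a minimal COM-visible class in C# might look like this (the ProgId and GUID are placeholders; generate your own GUID, and register the assembly with regasm after building):

```csharp
using System;
using System.Runtime.InteropServices;

// After building, register with: regasm MyTool.dll /codebase
[ComVisible(true)]
[Guid("9F3A2C41-1234-4321-9876-ABCDEF012345")] // placeholder: generate your own
[ProgId("MyTool.Automation")]
[ClassInterface(ClassInterfaceType.AutoDual)]
public class Automation
{
    public string GenerateClass(string className)
    {
        return "public class " + className + " { }";
    }
}

// Then, from VBScript for example:
//   Set tool = CreateObject("MyTool.Automation")
//   WScript.Echo tool.GenerateClass("Foo")
```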
You should include a command-line interface in your application if it enhances usability and comfort.
For instance, calling a CLI command might be faster than starting the GUI and navigating through several menu layers to reach the same functionality.
You might ask the users of your application whether they would find a CLI mode useful.
Some words on marrying CLI & GUI on Windows:
A Windows application is either a GUI application or a console application, but not both. This is an OS issue and there is probably nothing one can do about it.
The console subsystem in Windows is horrible and PowerShell didn't change that.
Your implementation options on Windows are:
the two files approach:
Provide two files: one .com with the console front end, one .exe with the GUI.
Because of executable probing on the command line, the .com file will get executed before the .exe.
the console flickering approach (sketched below):
Compile your GUI application with console mode on; then, immediately after the GUI starts, call FreeConsole() to close the console.
It's a bit annoying, but it works. Bad: you now have a flickering console window. Pro: still one file.
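A minimal sketch of the flickering-console variant (compile with /t:exe so the console exists at startup; FreeConsole is a plain kernel32 export):

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

static class Program
{
    [DllImport("kernel32.dll")]
    static extern bool FreeConsole();

    [STAThread]
    static void Main(string[] args)
    {
        if (args.Length > 0)
        {
            // Command-line mode: the console is already attached, just use it.
            Console.WriteLine("processing " + args[0]);
        }
        else
        {
            // GUI mode: detach the console we were started with. It has
            // already been created by the loader, hence the brief flicker.
            FreeConsole();
            Application.Run(new Form { Text = "My app" });
        }
    }
}
```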
I agree with Remus Rusanu:
you should create a class library of your core functionality and then build a GUI app (wrapper) on top of it.
One other benefit is that you might not even need to create a command-line app, since you can access your .NET DLL's features using PowerShell.
You can find one example over here.
Another great idea is to embed a scripting language. Then your program can be controlled by a script, and you get all the logic, branching, etc. from the scripting language "for free."
There are many choices of what you can embed. Lua is one of the most popular, is intended for just that purpose, and is an excellent choice.
However, for a general-purpose app, I'd take a hard look at embedding Python. Python is so popular that you'd have a larger group of people willing to make the effort to write a script for your app.
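In C#, for instance, embedding IronPython takes only a few lines (reference the IronPython and Microsoft.Scripting assemblies from the IronPython distribution; the ImageViewer class and the script text are invented examples):

```csharp
using IronPython.Hosting;
using Microsoft.Scripting;

public class ImageViewer
{
    public void NextImage() { System.Console.WriteLine("next image"); }
}

class ScriptHost
{
    static void Main()
    {
        var engine = Python.CreateEngine();
        var scope = engine.CreateScope();

        // Expose an application object to user scripts.
        scope.SetVariable("viewer", new ImageViewer());

        // A user-supplied script drives the app; loops, branching and the
        // rest of the language come for free.
        engine.CreateScriptSourceFromString(
            "for i in range(3):\n    viewer.NextImage()",
            SourceCodeKind.Statements).Execute(scope);
    }
}
```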
I am coding a client-server application using Eclipse's RCP.
We are having trouble testing the interaction between the two sides, as they both contain a lot of GUI and provide no command-line or other remote API.
Got any ideas?
I have about 1.5 years of experience with the RCP framework, and I really liked it. We simply used JUnit for testing...
It's sort of cliche to say, but if it's not easy to test, maybe the design needs some refactoring?
Java and the RCP framework provide great facilities for keeping GUI code and logic code separate. We used the MVC pattern with the observer/observable constructs that are available in Java...
If you don't know about the observer/observable constructs in Java, I would HIGHLY recommend you take a look at this: http://www.javaworld.com/javaworld/jw-10-1996/jw-10-howto.html. You will use them all the time, and your apps will be easier to test.
As a former Test & Commissioning manager, I would strongly argue for a test API. It does not remove the need for User Interface testing, but you will be able to add automated tests and non regression tests.
If it's absolutely impossible, I would set up a test proxy (see the sketch after this list), where you will be able to:
Do nothing (transparent proxy). Your app should behave normally.
Spy / Log data traffic. Add a filter mechanism so you don't grab everything
Block specific messages. Your filter system is very useful here
Corrupt specific messages (this is more difficult)
If you need some sort of network testing:
Limit general throughput (some libraries do this)
Delay messages (same remark)
Change packet order (quite difficult)
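The idea is language-agnostic; here's what a minimal spy/log pass-through proxy can look like, sketched in C# (ports and host names are invented; blocking or corrupting would hook in where the comments indicate):

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class TestProxy
{
    static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 9000); // client connects here
        listener.Start();
        while (true)
        {
            TcpClient client = await listener.AcceptTcpClientAsync();
            _ = HandleAsync(client); // one forwarding session per connection
        }
    }

    static async Task HandleAsync(TcpClient client)
    {
        using var server = new TcpClient();
        await server.ConnectAsync("localhost", 9001); // the real server
        using (client)
        {
            NetworkStream c = client.GetStream(), s = server.GetStream();
            await Task.WhenAll(Pump(c, s, "C->S"), Pump(s, c, "S->C"));
        }
    }

    static async Task Pump(NetworkStream from, NetworkStream to, string tag)
    {
        var buf = new byte[4096];
        int n;
        while ((n = await from.ReadAsync(buf, 0, buf.Length)) > 0)
        {
            Console.WriteLine($"{tag}: {n} bytes"); // spy/log; add filtering here
            // To block or corrupt specific messages, inspect and modify
            // buf[0..n] before forwarding (or skip the write entirely).
            await to.WriteAsync(buf, 0, n);
        }
    }
}
```

Throughput limiting and delays fit naturally into Pump as well (e.g. a Task.Delay before each write).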
Have you considered using a UI functional testing tool? You could check out HP's QuickTest Professional, which covers a wide variety of UI technologies.
We are developing a client-server application using EJB (J2EE) technology, Eclipse and MySQL (database). Please suggest an open-source testing tool for functional testing.
Thanks
Hitesh Shah
Separate your client-server communication into a pure logic module (or package). Test this separately - either against a test server, or using mock objects.
Then have your UI actions invoke that communications layer. Also, have a look at the command design pattern; using it may help you.
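For what it's worth, a minimal command-pattern sketch (written in C# for consistency with the examples above, but it translates directly to Java; all names are invented):

```csharp
// The pure logic module that the UI and the tests both talk to.
public interface ICommunicationLayer
{
    void Send(string payload);
}

// Each user action becomes an object, so tests can create and execute
// actions against a mock ICommunicationLayer without any GUI at all.
public interface ICommand
{
    void Execute();
}

public class SendMessageCommand : ICommand
{
    private readonly ICommunicationLayer comms;
    private readonly string payload;

    public SendMessageCommand(ICommunicationLayer comms, string payload)
    {
        this.comms = comms;
        this.payload = payload;
    }

    public void Execute()
    {
        comms.Send(payload);
    }
}

// In the real app, a button handler just does:
//   new SendMessageCommand(comms, inputField.Text).Execute();
// In a test, pass a mock ICommunicationLayer and assert on what was sent.
```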