I would like to test some DOM-related things with Intern, nothing requiring specific fixtures, just general DOM stuff, such as whether I've mutated Element.prototype. Does that require a functional test run through a local Selenium server (or Sauce Labs), or can it be done through the non-functional (unit) testing suite?
Intern doesn’t provide sandboxing to isolate unit test suites, so if you’re going to modify native objects for the purposes of testing, you’ll either need to restore them yourself later (in your suite teardown) or create your own sandboxing (by creating a new document or a new frame, depending upon what you are actually trying to test). You don’t need to use functional testing unless you’re trying to test things that can’t be reliably done from within the JavaScript sandbox (certain types of events, file uploads, multi-page navigation, cross-frame scripting, probably some other things).
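Intern itself is JavaScript, but the save-and-restore discipline described above looks the same in any xUnit-style framework. Purely as an illustration of the pattern (this is not Intern code; the JVM's default time zone stands in for a mutated Element.prototype):

    import static org.junit.Assert.assertEquals;

    import java.util.TimeZone;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    public class GlobalMutationTest {
        private TimeZone original;

        @Before
        public void rememberOriginal() {
            // Keep a reference to the global before mutating it.
            original = TimeZone.getDefault();
            TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
        }

        @Test
        public void codeUnderTestSeesTheMutation() {
            assertEquals("UTC", TimeZone.getDefault().getID());
        }

        @After
        public void restoreOriginal() {
            // Undo the mutation so later suites run against a clean global.
            TimeZone.setDefault(original);
        }
    }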
This is regarding an issue I have been facing for some time. Though I have found a solution, I would really like to get some opinions on the approach taken.
We have an application that receives messages from a host, does some processing, and then passes each message on to an external system. The application is developed in Java and has to run on two OS/DB combinations: Linux/Oracle and HP NonStop (Tandem)/SQLMX.
I have developed a test automation framework written in Perl. The script traverses directories (specified as arguments to the script) and executes the test cases found under them. Test cases can be organized into directories by functionality; this approach was taken so that a specific functional area can be checked on its own in addition to the entire regression suite. To verify test results, the script reads test-case-specific input files containing SQL queries.
On Linux/Oracle, the Perl DBI/DBD interface is used to query the Oracle database.
When this automation tool was run on the Tandem, I found that there was no DBD/DBI driver for SQLMX. When we contacted HP, they informed us that it would be a while before they developed one.
To circumvent this issue, I developed a small Java application that accepts a DB connection string, a user name, a password and various other parameters. This Java app is now responsible for the test case verification functionality.
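As an illustration, here is a minimal sketch of what such a verification helper can look like; the class name, argument order and output format are assumptions, not the actual tool. The Perl harness would run it and compare its output with the expected results:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DbVerifier {
        // args: JDBC URL, user, password, the SQL query from the test case file
        public static void main(String[] args) throws Exception {
            String url = args[0], user = args[1], password = args[2], sql = args[3];
            try (Connection con = DriverManager.getConnection(url, user, password);
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                int cols = rs.getMetaData().getColumnCount();
                while (rs.next()) {
                    StringBuilder row = new StringBuilder();
                    for (int i = 1; i <= cols; i++) {
                        if (i > 1) row.append('|');
                        row.append(rs.getString(i));
                    }
                    System.out.println(row); // harness diffs this against expected output
                }
            }
        }
    }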
It meets our current needs, and I now have the flexibility of running this automation against any DB that has a JDBC driver, but something tells me (I do not know what) that the approach taken is not a good one.
Can you please provide feedback on the above approach and suggest a better solution?
Thanks in advance
The question is a bit too broad to comment usefully on except for one part.
If the project is in Java, write the tests in Java. Writing the tests in a different language adds all sorts of complications.
You have to maintain another programming language and its attendant libraries. They can have different caveats and bugs for the same actions, such as the missing database driver you ran into in one environment.
Having the tests written in a different language from the one the project is developed in drives a wedge between testing and development. Developers will not feel responsible for participating in the testing process because they don't even know the language.
With the tests written in a different language, they cannot leverage any work that has already been done. Basic code to access and work with the data and services has to be written all over again, doubling the work and doubling the bugs. If the project code changes APIs or data structures, the test code can easily fall out of sync, adding maintenance hassle.
Java already has well-developed testing tools to do what you want. The whole structure of running specific tests vs. the whole test suite is built into frameworks like JUnit.
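For example, the directory-per-functionality layout maps directly onto JUnit 4 suites; the test class names below are hypothetical:

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;
    import org.junit.runners.Suite.SuiteClasses;

    // One suite per functional area, mirroring the old directory layout.
    @RunWith(Suite.class)
    @SuiteClasses({ MessageParsingTest.class, MessageRoutingTest.class })
    public class MessagingSuite {
    }

A build tool can then run a single area (with Maven Surefire, mvn test -Dtest=MessagingSuite) or the whole regression suite, which is exactly what the directory-walking Perl driver was providing.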
To underscore the point: I wrote Test::More, and I'm recommending you not use it here.
I usually build PHP forms and "try" to follow "good practices" in them.
I'm concerned about how safe and error-free those forms really are, and I want to run tests that simulate customer behavior. I currently do this manually, but it's hard work, especially when a form is large, and I know there are many combinations I can't cover, so I usually end up finding bugs in production.
Is there a tool that does this? I have heard about Selenium; has anybody used it the way I need? Or how can I create my own test tool that simulates user input at random?
By user input I mean: not filling in or checking all the fields, entering invalid data, using different setups (no JavaScript, different browser versions, ...), SQL injection attempts, and who knows what else...
You'll need to consider a combination of approaches here: good test case design, data driving those tests with various input combinations, and an automation tool such as Selenium, WebDriver, Telerik's Test Studio (commercial tool I help promote), or some other automation tool.
Design your test cases so that you're focusing on groups of behavior (a successful "happy path" case, a case validating handling of invalid input, a case validating protection against SQL injection, etc.). Then you can (perhaps) data-drive those test cases with sets of inputs and expected results, randomizing as needed in your test case code.
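A sketch of what data-driving can look like with JUnit's Parameterized runner and Selenium WebDriver; the form URL, field names and success message are assumptions for illustration:

    import static org.junit.Assert.assertEquals;

    import java.util.Arrays;
    import java.util.Collection;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    @RunWith(Parameterized.class)
    public class ContactFormTest {
        private final String email;
        private final boolean shouldSucceed;

        public ContactFormTest(String email, boolean shouldSucceed) {
            this.email = email;
            this.shouldSucceed = shouldSucceed;
        }

        @Parameters
        public static Collection<Object[]> inputs() {
            return Arrays.asList(new Object[][] {
                { "user@example.com", true },   // happy path
                { "not-an-email",     false },  // invalid input
                { "' OR '1'='1",      false },  // SQL injection attempt
            });
        }

        @Test
        public void submitsFormAndChecksOutcome() {
            WebDriver driver = new FirefoxDriver();
            try {
                driver.get("http://localhost/contact.php"); // assumed URL
                driver.findElement(By.name("email")).sendKeys(email);
                driver.findElement(By.name("submit")).click();
                boolean success = driver.getPageSource().contains("Thank you");
                assertEquals(shouldSucceed, success);
            } finally {
                driver.quit();
            }
        }
    }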
Most good functional automation tools support multiple browsers running the same test script, so that's a good help for hitting multi-browser testing.
Above all, start your automation efforts with small steps and focus first on high-value tests. Don't try to automate everything, because that will cost you a lot of time.
Selenium is used to automate browsers in exactly the way you described.
It's used for what is called functional testing, where you test the external aspects of an application to ensure they meet the specification.
It is most often combined with unit tests that cover the internal aspects, for example checking that your application is safe against different forms of SQL injection.
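At the unit level that might look like the following; InputValidator.isValidUsername is a hypothetical helper standing in for whatever sanitization your application does before building queries:

    import static org.junit.Assert.assertFalse;

    import org.junit.Test;

    public class InputValidatorTest {
        @Test
        public void rejectsClassicSqlInjectionPayloads() {
            String[] payloads = {
                "' OR '1'='1",
                "'; DROP TABLE users; --",
                "\" OR \"\"=\"\"",
            };
            for (String payload : payloads) {
                assertFalse("should be rejected: " + payload,
                            InputValidator.isValidUsername(payload));
            }
        }
    }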
Each programming language usually has several different frameworks for writing unit tests.
These are often used together with an approach called test-driven development (TDD), where you write the tests before the application code.
When it comes to writing unit tests for UI, what do you write tests for?
Do you test each method (e.g., that a method returns the correct data)?
Or do you test the functionality (making sure the table populates with the data it's supposed to)?
Do I need to mock everything except the item I am testing? Say I am testing that a table view populates correctly; do I mock everything else?
Please provide as much detail as possible.
I'll try to answer this in a general way.
When testing UI-ish code, it's often a good idea to target the tests "one step away" from the UI itself, e.g. running against the models instead of the UI where possible. It's much less brittle that way. I'm not familiar with iOS UI test automation, but that sort of test tends to break on the smallest layout change.
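For instance, instead of asserting on rendered cells, assert on whatever the table is bound to. Order and OrderListModel below are hypothetical stand-ins for your own model types:

    import static org.junit.Assert.assertEquals;

    import java.util.Arrays;

    import org.junit.Test;

    public class OrderListModelTest {
        @Test
        public void populatesOneRowPerOrder() {
            OrderListModel model = new OrderListModel(Arrays.asList(
                new Order("A-1", 10.0),
                new Order("A-2", 25.5)));
            // The table view simply renders this model, so testing the model
            // covers "does the table show the right data" without the UI.
            assertEquals(2, model.rowCount());
            assertEquals("A-1", model.titleForRow(0));
        }
    }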
I'll suggest you take a look at FoneMonkey by Gorilla Logic. They have a very nice utility for writing unit tests that actually test from the user's perspective, i.e. checking that the UI is as it should be: it loads correctly, contains the correct values, etc.
You can even run it in a headless environment, e.g. on a continuous integration server.
Obviously, the answer to the question depends on a number of environmental factors.
In general, I'm wondering what people's experiences are with HtmlUnitDriver as a reliable tool that can be "trusted" to navigate a website basically the same way other browsers do.
Of course, I realize "the way other browsers do" is pretty nebulous; naturally every browser will have its quirks. But I am on a project where we have hundreds of acceptance test scenarios (written in JBehave) and using FirefoxDriver and InternetExplorerDriver, running all of them takes over two hours, which is kind of rough from a continuous integration standpoint. So I'm wondering if it's at least feasible that we could switch our acceptance tests over to use HtmlUnitDriver and expect much faster times with mostly the same behavior (and perhaps we could expect a handful of tests to fail using HtmlUnitDriver and specifically run those tests with a browser-based driver).
Our UI uses GWT, which may or may not complicate things (I don't know).
Basically, in others' experience, does HtmlUnitDriver operate about as well as another browser, or is it really only appropriate for very simple HTML websites with minimal JavaScript and should not be used for an enterprise web application?
From my experience with HtmlUnitDriver, I would say that if you don't use it as your baseline browser when writing your tests, then converting them to use it later becomes a bit of a nightmare. This is especially true for JavaScript-heavy sites.
The main reason for this is the underlying use of HtmlUnit, which by default uses the Rhino JavaScript engine. In the past I've always had to tell HtmlUnitDriver to start HtmlUnit with Firefox's JavaScript emulation. This, for the most part, solved the JavaScript issues I was seeing while running tests with HtmlUnitDriver.
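In code that is essentially a one-liner; note that the exact BrowserVersion constant depends on your HtmlUnit release (older releases used versioned names such as FIREFOX_17):

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.htmlunit.HtmlUnitDriver;

    import com.gargoylesoftware.htmlunit.BrowserVersion;

    public class Drivers {
        public static WebDriver firefoxEmulatingHtmlUnit() {
            // Second argument enables JavaScript (off by default).
            return new HtmlUnitDriver(BrowserVersion.FIREFOX, true);
        }
    }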
One of the biggest issues I faced in using the same test code for each browser was when the UI developers had attached JavaScript events such as onClick() to HTML elements such as a <span>.
The reason is that if you used WebDriver's .click() method on a WebElement representing the <span>, HtmlUnit would do nothing (it expects onClick() handlers on elements such as an <input>).
To get around this I had to fire the click() event from JavaScript manually. You can do this either with WebDriver's JavascriptExecutor or by using a WebDriverBackedSelenium and Selenium's .fireEvent() method.
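With JavascriptExecutor that looks roughly like this (the locator is illustrative):

    import org.openqa.selenium.By;
    import org.openqa.selenium.JavascriptExecutor;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    public class SpanClicker {
        public static void clickViaJavascript(WebDriver driver) {
            WebElement span = driver.findElement(By.id("save-action"));
            // Fire the click from JavaScript so HtmlUnit runs the onClick
            // handler even though the element is a <span>.
            ((JavascriptExecutor) driver).executeScript("arguments[0].click();", span);
        }
    }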
So if your site uses such events then I'd say switching to use HtmlUnitDriver could be a big task.
Despite this, I actually use HtmlUnitDriver for all my tests. However, I went through the pain of discovering all of the above a while back, so I now use HtmlUnitDriver as my baseline browser when writing tests.
Our company is currently writing a GUI automation testing tool for .NET Compact Framework applications. We initially evaluated many tools, but none of them was right for us.
With the tool you can record test cases and group them into test suites. For every test suite an application is generated which launches the application under test and simulates user input.
In general the tool works fine, but because we use window handles to simulate user input, there is a lot we can't do. For example, it is impossible for us to get the name of a control (we only get the caption).
Another problem with window handles is checking for a change: at the moment we simulate a click on a control and infer from the result whether the application has gone on to the next step.
Is there any other (simpler) way of doing such things (for example the message queue or anything else)?
Interesting problem! I've not done any low-level (think Win32) Windows programming in a while, but here's what I would do.
Use a named pipe and have your application listen to it. Using this named pipe as a communication medium, implement a real simple protocol whereby you can query the application for the name of a control given its HWND, or other things you find useful. Make sure the protocol is rich enough so that there is sufficient information exchanged between your application and the test framework. Make sure that the test framework does not yield too much "special behavior" from the app, because then you wouldn't really be testing the features, but rather your test framework.
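To make that concrete, here is a toy client-side sketch of the test-framework end, in Java for consistency with the other examples. It assumes the application under test already serves a pipe called \\.\pipe\guitest and answers a one-line "NAME <hwnd>" query with the control's name; both the pipe name and the protocol are invented for illustration:

    import java.io.RandomAccessFile;

    public class PipeQuery {
        // Ask the instrumented app for the name of the control behind hwnd.
        public static String controlName(long hwnd) throws Exception {
            // On Windows, an existing named pipe can be opened like a file.
            try (RandomAccessFile pipe =
                     new RandomAccessFile("\\\\.\\pipe\\guitest", "rw")) {
                pipe.write(("NAME " + hwnd + "\n").getBytes("US-ASCII"));
                return pipe.readLine(); // server answers with a single line
            }
        }
    }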
There are probably more elegant and cooler ways to implement this, but this is what I remember off the top of my head, using only simple Win32 API calls.
Another approach, which we have implemented for our product at work, is to record user events, such as mouse clicks and key events in an event script. This should be rich enough so that you can have the application play it back, artificially injecting those events into the message queue, and have it behave the same way it did when you first recorded the script. You basically simulate the user when you play back the script.
In addition to that, you can record any important state (the user's document, preferences, GUI control hierarchy, etc.), once when you record the script and once when you play it back. This gives you two sets of data to compare, to make sure, for instance, that everything stays the same. This solution gives you tests that are not easy to modify (you have to re-record if your GUI changes) but that provide awesome regression testing.
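A desktop-Java analogue of the playback half, using java.awt.Robot to inject the recorded events; the one-event-per-line script format ("CLICK x y") is invented for illustration:

    import java.awt.Robot;
    import java.awt.event.InputEvent;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class Playback {
        public static void main(String[] args) throws Exception {
            Robot robot = new Robot();
            for (String line : Files.readAllLines(Paths.get(args[0]))) {
                String[] parts = line.split(" ");
                if (parts[0].equals("CLICK")) {
                    robot.mouseMove(Integer.parseInt(parts[1]),
                                    Integer.parseInt(parts[2]));
                    robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
                    robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);
                }
                robot.delay(100); // crude pacing between injected events
            }
        }
    }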
(EDIT: This is also a terrific QA tool during beta testing, for instance: just have your users record their actions, and if there's a crash, you have a good chance of easily reproducing the problem by just playing back the script)
Good luck!
Carl
If the automated GUI testing tool knows about the framework the application is written in, it can use that information to create better or more advanced scripts. TestComplete, for example, knows about Borland's VCL and about WinForms, and it has advanced built-in support for applications built with Windows Presentation Foundation.
Use NUnitForms. I've used it with great success for single- and multi-threaded apps, and you don't have to worry about handles and the like.
Here are some posts about NUnitForms worth reading
NUnitForms and failed DragDrop registration - problem of MTA vs STA
Compiled application exe GUI testing with NUnitForms
I finally found a solution for communicating between the testing application and the application under test: Managed Spy. It's basically a .NET application built on top of ManagedSpyLib.
ManagedSpyLib allows programmatic access to the Windows Forms controls of another process. For this it uses window hooks and memory-mapped files.
Thanks for all who helped me to get to this solution!
Managed Spy does not provide a solution for Compact Framework applications.
The company Jamo Solutions (www.jamosolutions.com) meets the requirements for automated testing on mobile devices, including .NET Compact Framework applications.