Integrate automated GWT GUI testing with build system

I am in the process of integrating automated GUI testing with my build system. My GUI application is developed in GWT, and I use Hudson as my automated build system. I would like to perform a sanity test of my application. As I understand it, the entire test setup will involve the following steps:
Build and deploy the application to a predefined application server. In my case, this means creating and installing the application in an Android emulator.
Start/launch the application.
Perform predefined user actions (UI test cases) and validate them.
Somehow include validations for different browsers. I am really not sure how I can do this.
Generate a report of the test cases performed.
I am not posting the details of the application, as I don't think they make any difference to the approach. Can somebody with past experience tell me whether this is possible and, if so, to what extent? Which UI automation tool (preferably open source) would fit best here?

We use TeamCity as the build server for a GWT application. We just use it as a build server with two tasks: compile the sources into JavaScript, and deploy the WAR file to a Tomcat application server. Although I haven't set it up myself yet, I believe it's possible to add a third task for UI testing using Selenium (which we used for testing another JSF web application).
A fairly good example of automated Selenium testing is RichFaces: its source code package includes hundreds of UI tests generated with Selenium.
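As a rough sketch of what such a third task could run, a Selenium WebDriver test for a GWT page can be written as an ordinary JUnit test that the build server executes against the freshly deployed application; the URL, element id, and browser below are illustrative assumptions, not details from the question:

import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class SanityUiTest {

    private WebDriver driver;

    @Before
    public void setUp() {
        driver = new FirefoxDriver();
        // Hypothetical URL of the application deployed by the previous build step.
        driver.get("http://localhost:8080/myapp/");
    }

    @Test
    public void loginButtonIsVisible() {
        // "loginButton" is an assumed DOM id; GWT widgets can be given stable ids
        // (e.g. with UIObject.ensureDebugId()) so they are easy to locate from Selenium.
        assertTrue(driver.findElement(By.id("loginButton")).isDisplayed());
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}

Hudson/Jenkins or TeamCity can run such tests with their normal JUnit runner and publish the results as a standard test report, which also covers the reporting step in the question.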

Related

How to implement Selenium code in a cloud tool like CrossBrowserTesting?

We are doing Selenium-based web application testing, and we have a requirement to test the application in different browsers.
The requirement also includes using browsers from cloud services such as CrossBrowserTesting (https://crossbrowsertesting.com/). We don't have a clear idea of how to implement and execute our scripts within the CrossBrowserTesting tool.
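In general, cloud browser grids of this kind are driven through Selenium's RemoteWebDriver pointed at the provider's hub URL, with the browser/OS combination described in the capabilities. A minimal sketch, in which the hub URL, credentials, and capability names are placeholders to be replaced with the values from the provider's documentation:

import java.net.URL;

import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class CloudBrowserSketch {

    public static void main(String[] args) throws Exception {
        // Placeholder hub URL and credentials -- the real values come from your
        // cloud testing account; each provider publishes its own hub address.
        URL hub = new URL("http://USERNAME:AUTHKEY@hub.example-provider.com/wd/hub");

        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("browserName", "Chrome");   // capability names/values vary by provider
        caps.setCapability("platform", "Windows 10");

        RemoteWebDriver driver = new RemoteWebDriver(hub, caps);
        try {
            driver.get("http://www.example.com");
            System.out.println(driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}

Existing test code usually only needs its driver construction changed from a local ChromeDriver/FirefoxDriver to this RemoteWebDriver, which is what makes it practical to run the same suite against many browser/OS combinations.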

How can I get code coverage reports when testing REST API with TestNG?

I have a question that's very similar to what's discussed here:
Integration Test of REST APIs with Code Coverage
I deployed a WAR file that exposes the REST APIs to a web server, and I'm using TestNG to write test cases for those APIs. I'm not unit testing; I'm only doing end-to-end / integration testing. Currently, I'm running the test cases from Eclipse on my machine.
My goal is to get coverage reports on the TestNG test cases.
Since the tests are local to my machine and the REST API is deployed on another server, EclEmma doesn't provide any meaningful data when I run the test cases on my machine.
Is there a way to point EclEmma to the web server instead of my local machine and get the code coverage report?
Would it be better/possible to include the tests in the WAR file and run them from the web server? That should let me get a meaningful code coverage report, right?
The easiest way forward in cases like this is normally to start the web server inside your IDE and run the tests with coverage measurement there. Even better, start the web server from within the tests; then a build tool like Maven can also do the code coverage reporting.
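A minimal sketch of the "start the server from within the tests" idea, assuming the REST API can be run as a WAR in an embedded Jetty instance (the WAR path, port, and class name are illustrative):

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.webapp.WebAppContext;
import org.testng.annotations.AfterSuite;
import org.testng.annotations.BeforeSuite;

public class EmbeddedServerSetup {

    private Server server;

    @BeforeSuite
    public void startServer() throws Exception {
        // Run the same WAR inside the test JVM, so a coverage agent attached to
        // this JVM (EclEmma in the IDE, or JaCoCo in a Maven build) sees the API code.
        server = new Server(8080);
        server.setHandler(new WebAppContext("target/my-rest-api.war", "/"));
        server.start();
    }

    @AfterSuite
    public void stopServer() throws Exception {
        server.stop();
    }
}

The TestNG tests then hit http://localhost:8080/... exactly as before, but because the API now runs in the same JVM as the tests, the coverage report finally includes the server-side code.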

Web framework with user-friendly desktop deployment?

I'm building a web app with Backbone.js (I'm not tied to Backbone yet, though). I need a back-end framework only for persistence to a database via a RESTful API. However, I also need to be able to deploy it as a 'desktop' app for offline use, i.e. running a local server and launching a browser window, but I don't want users to have to start a server from the command line to run the application.
I can use SQLite as the database since it's only a single-user application; it's just the framework that I'm stuck on. I have looked at the following:
Rails and Django: the default web servers are too flimsy, they require Ruby/Python, and they run from the command line. I'm aware of the Bitnami stacks, but at 99 MB they are too big a dependency and not exactly hidden from the user.
SproutCore: runs from the command line and is also too bulky.
Pyjamas Desktop: depends on MSHTML, which I suspect limits my ability to use HTML5 features.
I'm leaning towards creating a Java app that starts a Scala/Lift server instance and opens a web browser, then sits in the system tray (kind of like WAMP). Is anyone familiar with a tool or framework built for user-friendly deployment as a standalone desktop app?
I don't know whether PHP is an option for you; if so, I would recommend phpdock.
web2py has a standalone deploy-to-desktop feature with no dependency on Python: http://web2py.com/books/default/chapter/29/14#How-to-distribute-your-applications-as-binaries
As Eydun said, phpdock is an option, but it is commercially licensed.
I settled on using Java/Spring/H2/Hibernate/Jetty. I find that Jetty serves requests very quickly, so the application feels real-time when launched in a browser. There is a tutorial on embedding the Jetty server here. I imagine it's quite trivial to build a GUI that launches the server and a browser.
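As a rough sketch of that setup (the WAR path and port are assumptions), an embedded Jetty launcher combined with java.awt.Desktop is enough to give users a double-clickable app that opens in their default browser:

import java.awt.Desktop;
import java.net.URI;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.webapp.WebAppContext;

public class DesktopLauncher {

    public static void main(String[] args) throws Exception {
        // Start the web app on a local port; the user never sees a command line.
        Server server = new Server(8080);
        server.setHandler(new WebAppContext("webapp.war", "/")); // illustrative WAR path
        server.start();

        // Open the user's default browser pointed at the local server.
        if (Desktop.isDesktopSupported()) {
            Desktop.getDesktop().browse(new URI("http://localhost:8080/"));
        }

        server.join(); // keep the JVM alive while the server is running
    }
}

A java.awt.SystemTray icon with a "Quit" menu item can be layered on top of this to get the sits-in-the-system-tray behaviour described in the question.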
Another Java option is to use the Play Framework, which may be more at home to those coming from a Django/Rails background. However, the documentation for "creating a standalone version of your application" for Play 2.0+ indicates that they have ditched using Java EE containers (Tomcat/Jetty) and WAR files in favor of running the JARs with the bundled copy of JBoss Netty, so it may take a bit of work to get it running the way you want it.
I would recommend the Play Framework approach if you're OK with using/learning Scala.

Eclipse metadata refresh without opening Eclipse

We are working with various cloud platforms (Salesforce, etc.) and we need to sync with the server every day. I would like to know whether there is a way, on our development boxes, to synchronize all Eclipse projects through a script without opening the IDE, so that the IDE can then be opened without much freezing.
This would enable a clean sync (with the cloud server) and a refresh with the local files.
It would also enable a refresh (for non-cloud projects).
Would running a small Ant script, or some other kind of script, give us a stable, uniform development environment across all developers?
Any help would be appreciated.
It's going to greatly depend on which cloud platforms you are using. However, I work with the Salesforce platform, and they offer (per their developer docs) an Ant API jar that lets you write Ant scripts that can essentially check out everything in your org.
Essentially you can use it to check out and check back in pieces and parts of the site, though this of course only works for SFDC. For other platforms you will need to refer to their APIs or write your own tools.

Salesforce.com deployment

We are currently working on a custom Salesforce.com Apex project that involves a lot of Apex classes, triggers and Visualforce pages. We also have numerous applications from AppExchange that are part of the system.
We develop all the Apex classes, Visualforce pages, etc. in a test environment and then deploy them to the live environment using the Eclipse IDE. The problem is that every time we deploy changes to the live environment, all the test methods of all the classes (including those from the AppExchange apps) seem to be executed, so deploying even a simple change can end up taking a couple of minutes.
Is there a way in Apex to "package" classes by namespace or something similar, so that when we deploy a change, only the test methods relevant to that package are executed? If something like that exists, our deployments could happen much faster.
Unfortunately no, there is no partial test execution on deployment of Apex code; every change, no matter how minute or self-contained, triggers a full test run. Among other things, this enforces code metrics (a minimum total code coverage, for instance).
IMHO, this is proving to be a double-edged sword when it comes to enforcing code reliability. When we started using Apex, all of our tests were very comprehensive, actually exercising the code with lots of asserts and checks. Then we started having very, very long deploy times, so now our tests serve one and only one function: satisfying the minimum code coverage. Even with that simplification it takes almost 3 minutes to deploy anything, and we only use 20% of our Apex code allowance.
IMHO2, Apex is far too slow a platform to be enforcing this kind of testing. I can't even imagine how long the tests would run if we reached 50% of our allowance, not to mention any more.
This is possible, but you'll need to learn about Apache Ant and have a look at the Force.com Migration Tool. You can then use a build file to determine which files are deployed as well as which tests are run.
I'm busy writing a whitepaper that'll touch on this and other related development strategies... I'll post to my blog when it's done.
If you use the Apache Ant migration tool, you have several options for deployment. For example, deployCodeFailingTest will skip the test classes, and if you want to run only specific test classes, use something similar to this in your build.xml:
<target name="deployCode">
    <sf:deploy
        username="${sf.username}"
        password="${sf.password}"
        serverurl="${sf.serverurl}"
        deployroot="codepkg">
        <runTest>SampleDeployClass</runTest>
    </sf:deploy>
</target>
For a detailed reference, see the Force.com Migration Guide: http://www.salesforce.com/us/developer/docs/daas/salesforce_migration_guide.pdf
I would recommend the following approach:
Git as the repository for all your Salesforce code
Jenkins to deploy your code (CI/CD)
PMD as the static code analyser
sfdx as the deployment method in Jenkins
See this Trailhead trailmix: https://trailhead.salesforce.com/users/strailhead/trailmixes/architect-dev-lifecycle-and-deployment