Running UI tests in a Jenkins Docker slave - Eclipse

With the Jenkins Docker plugin we can provision slaves dynamically.
My need is to run UI tests on the automatically created slaves. Is that feasible? If yes, how can we achieve it?
The UI tests are WindowTester test cases for an Eclipse-based tool.
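Note that a Linux Docker slave normally has no display, so I assume the tests would have to run under a virtual framebuffer such as Xvfb, roughly along these lines (runTests.sh stands in for whatever actually launches the WindowTester suite):

xvfb-run --auto-servernum ./runTests.sh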

I am doing the same kind of thing: on each successful build we run all automated test cases on a Windows machine.
In your Jenkins, you need to add the Windows machine as a slave.
Try the tutorial below:
https://wiki.jenkins.io/display/JENKINS/Step+by+step+guide+to+set+up+master+and+slave+machines+on+Windows
Once the node is up and running, make sure your job is configured to run on the Windows slave node.
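For reference, the guide ultimately has you launch the agent on the Windows machine with a command along these lines (host, node name, and secret are placeholders, and older Jenkins versions ship the jar as slave.jar rather than agent.jar):

java -jar agent.jar -jnlpUrl http://<jenkins-host>:8080/computer/<node-name>/slave-agent.jnlp -secret <secret>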

Related

How to run JUnit system integration tests in Kubernetes/service mesh?

We have a service mesh/Kubernetes setup that we work with via the terminal, showing all the different pods with their different namespaces. Inside each pod, you can console in and see the app.jar.
Recently, the boss/client asked how we can run the various SYSTEM INTEGRATION tests for any particular JAR from the service mesh/Kubernetes command line. Google says to use 'mvn clean install', 'javac' or 'java -jar junit-platform-console-standalone-1.7.2.jar --class-path target --select-class '. These all fail for various reasons (mvn is not present, javac is not present, and the jar says that the port is in use; of course the port is in use, the same aforementioned jar is using it).
When I look at a pod in GitLab (or IntelliJ) I see all the tests it has. But how can I run these SYSTEM INTEGRATION tests from the pod console? Ideally with a single command to run all the tests, which would make things a lot easier.
edit:
lol at the heat in the comments. I clarified with the boss; she said that we want to run system integration tests from the service mesh, not unit tests. These pods are not isolated; some of them depend on each other.
Generally, the comment from user jonrsharpe could be an answer to the question:
That makes no sense as a request - you run the unit tests on the source code, then build and deploy the container if they pass. They shouldn't even be included in what's in the deployed jar.
If you need to test an application, do so before deploying it. You should have a separate environment where you test your application, and only move to Kubernetes once the application is working properly. You can of course use some CI-type solution. Look at this page - Running JUnit tests with GitLab CI for Kubernetes-hosted apps.
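As a rough sketch of that flow (the image name and manifest path are purely illustrative), the pipeline runs the tests first and only builds and deploys the image if they pass:

mvn clean verify
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0
kubectl apply -f k8s/deployment.yaml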
EDIT
If you are looking for a way to do integration testing with Kubernetes, you can read a couple of docs. It all depends on what specifically you want to test. Here are several possibilities:
Overcome Kubernetes Application Integration Testing Challenges with Telepresence
How we approached integration testing in Kubernetes, and why we stopped using Helm tests
Testing Kubernetes deployments within CI Pipelines
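And if you still need to run the bundled tests from inside a pod, as the question attempts, a rough sketch is to exec into it and invoke the console launcher directly (the pod name and paths are placeholders, and the image must actually contain a JRE and the launcher jar):

kubectl exec -it <pod-name> -- java -jar junit-platform-console-standalone-1.7.2.jar --class-path app.jar --scan-class-path

Bear in mind that if those tests boot the application, they will still collide with the port the already-running jar holds, which is part of why testing before deployment is the cleaner route.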

Swap Azure Agent to Service

Is there a clean way to swap an Azure agent to run as a service? When I installed it, I chose to have it run manually. As time has gone on, the need to convert it to a service has grown bigger and bigger. Is there an easy way to convert it to a service without having to reinstall the agent?
I could always just tell Windows to run it as a service, which I imagine would work, but I'd welcome any other thoughts.
Thanks.
A self-hosted agent can be configured to run interactively or as a service. If you have configured it to run interactively, you can't change the way it works; to change it, you need to remove and re-configure the agent.
To remove the agent:
.\config remove
After you've removed the agent, you can configure it again.
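For example, from an elevated prompt in the agent folder, the whole round trip looks something like this (the organization, PAT, pool, and agent name are placeholders; check the flags against your agent version):

.\config.cmd remove --auth pat --token <pat>
.\config.cmd --unattended --url https://dev.azure.com/<organization> --auth pat --token <pat> --pool Default --agent <agent-name> --runAsService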

Visual Studio Team Services Build Queue Not Appearing in List

I'm setting up a build definition in Visual Studio Team Services using a Build Agent installed on my local machine for testing.
I'm following these instructions for creating a build agent, setting up a build definition, and queuing a build. I've created the agent on my local computer and it appears in the agent pool in VSTS. The agent is enabled and ready to go. I've also created a build definition that invokes my build script. Everything up to this point appears to work fine.
At this point I'm ready to queue a build and run it.
The dropdown labeled "Queue" in the queue-build dialog only shows the Hosted agent pool. There should be a second pool called Default, but it is not appearing. I can make it "appear" by right-clicking to inspect the HTML and then using the dev tools to change the value of the Hosted option. Hosted's ID is 2; I changed it to 1, since I assumed that to be the ID for Default. Once I do this I can click "OK" and the build runs as expected -- everything is checked out on my local machine by the build agent. Presumably my assumption about the ID value is correct.
So... everything is working correctly once I muck around with the plumbing a bit. But this is definitely not the way things should work. Why is the Default queue not showing up in the dropdown? Do I need to flip a switch somewhere to make it work? Does my account not have enough access?
Some other details:
My account is a "Pool Administrator"
The build agent is not installed as a Windows service. I start it manually from a command prompt. I've not been able to install it as a service.
The machine that has the build agent installed on it is running Windows 10 x64 Pro. It was upgraded from Windows 8 x64 Pro.
I cannot use a hosted agent, as I'm building a Unity project and Unity is not supported on the hosted agents.
I know I can use Unity Cloud Build but I do not want to.
UPDATE
I've removed my previous build agent and installed a new one, as a service, on a Windows Azure VM running Windows 10 Enterprise x64. With this change, the "Hosted" and "Default" queues appear as expected.
Your account also needs to have access to the agent queue. Agent pools and agent queues are different entities, and being a "pool administrator" does not necessarily mean you are a "queue administrator".
In my case it helped to execute the agent configuration in a console with elevated/administrator rights. If the agent configuration is done in a console with normal rights, the agent can still be configured properly, but its queue won't appear for selection when you queue a new build.
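In other words, launch the console with "Run as administrator" before configuring, roughly (C:\agent being wherever the agent package was unpacked):

# in a PowerShell window started via "Run as administrator"
cd C:\agent
.\config.cmd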

Jenkins - trigger builds automatically after server restart

Is it possible to trigger Jenkins to run jobs after the server is restarted (that is, when Jenkins is started, for example)?
I thought this would be pretty simple but haven't found an answer with brief googling.
Background is that our Jenkins automatically deploys two Play applications to the same server for test use after their tests pass. (For both applications, we have a test build that triggers a deployment build.) Now it would be nice if the applications were up and running after a server reboot.
There is an app... err plugin, for that :)
https://wiki.jenkins-ci.org/display/JENKINS/Startup+Trigger

Strategy for Automated UI testing on remote virtual machines

I'm using TeamCity for my CI builds, and I'd like to set up a second build for running automated UI tests on Windows XP and Windows 7 virtual machines.
I imagine the build working as follows:
Compile, run unit tests, etc.
Prepare MSI using WiX
Copy MSI to target test machines
Remotely execute the MSIs
Copy test harness code to remote machine
Run tests
Build finishes
The automated UI tests are written using NUnit and would need to be run directly on the test virtual machine (they can't run remotely). It's important that if the tests fail, it appears in the TeamCity build log and the build fails. I'd rather not install VS or the TeamCity build agents on either of the test virtual machines.
It seems that most of this should be possible using psexec.exe. Are there any alternative (preferably open source) tools that I should look at?
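For example, I imagine the install-and-run steps with psexec would look roughly like this (machine name, credentials, and paths are placeholders; -i runs the command in an interactive session so the UI tests can actually drive a desktop, and check the /xml option syntax against your NUnit version):

psexec \\testvm -u testvm\tester -p <password> msiexec /i C:\drop\MyApp.msi /qn
psexec \\testvm -u testvm\tester -p <password> -i 1 C:\tests\nunit-console.exe C:\tests\UITests.dll /xml=C:\tests\results.xml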
takes a deep breath
We were looking into something to help us out with our automated UI tests. We use Ranorex to test the UI and TeamCity/MSBuild to execute the tests.
We never found any tools to help us out (I'm constantly keeping an eye out for some, so I will monitor this thread), but here is what we did instead.
The CI server copies the setup files and test scripts to the Testing Host Server.
The CI server then launches a custom app on the Testing Host Server providing the name of the VM to launch.
The Test Host Server then launches the VM software, using "Virtual PC.exe -singlepc -pc vhdname.vhd -launch", and waits for it to shut down (after it has run its tests).
The VM grabs the setup files and scripts from the network location and executes them.
After the tests are run, it returns the results to a networked location and shuts itself down.
Control is returned to the custom app.
Control is returned to the CI server which determines from the results if it has passed or failed (and updates the UI so developers are made aware of the result).
Results are collected as artifacts in TeamCity and tagged in SVN.
I think that's everything. It's convoluted; however, it works. Hope some of that helps you.
Jeff Brown of the Gallio team has been talking about a tool called Archimedes that he's planning to write to support this kind of requirement. It sounds promising, but I don't think there has been much progress on it so far.
In the meantime, though, there is something in the Gallio project called VM Tool that may do what you want. It provides commands to stop, start, and snapshot virtual machines and, more importantly, to copy files back and forth and execute commands.
I presume you have also considered PowerShell Remoting?
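For completeness, a minimal remoting sketch (assuming WinRM is enabled on the test VM; the machine name and paths are placeholders, and Copy-Item -ToSession needs PowerShell 5+):

$session = New-PSSession -ComputerName testvm -Credential (Get-Credential)
Copy-Item -Path .\UITests\* -Destination C:\tests\ -ToSession $session
Invoke-Command -Session $session { & 'C:\tests\nunit-console.exe' 'C:\tests\UITests.dll' }
Remove-PSSession $session

One caveat: processes started over remoting do not get an interactive desktop, which can trip up UI automation, so something like psexec -i may still be needed for tests that genuinely drive windows.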