How do I run Urban Code Deploy FVT tests locally?

My project at work uses Urban Code Deploy (UCD) for its continuous deployment process. My code runs locally and passes all unit tests, but the build group says that my code is failing the FVT test being run by UCD. Is there any way to run this FVT test locally, or at least attempt to run it, so I can hopefully figure out what is failing?
Mike

UrbanCode Deploy isn't a testing tool, so the team that set it up has it running some other testing tool at the end of the deployment (which is pretty normal).
You'll need to ask them what testing tool they're using and go from there.

If you can see how the build group is deploying your code, you should be able to see what testing they are doing, and then replicate that in your own environment. Often code changes and changed requirements are not reflected in the FVT tests, so you need to deliver updated FVT test scripts in conjunction with your code changes.

Related

How to write Integration Tests for MEAN application

Currently, I'm creating a project that incorporates the MEAN stack, Docker, and Travis CI. I'm using Travis CI to automate builds for unit testing, integration testing, etc. I'm using Docker to help create a test environment. I've already successfully created unit tests thanks to resources via Medium. However, I haven't found many resources on writing integration tests for a MEAN application. I want to create tests to see if I get expected values in the Angular application when it connects to the REST API endpoints from Express, and the Express application is connected to a MongoDB server. Does anyone have any resources or advice on how to write these tests, and to execute them in a Dockerized test environment?
Having done something similar myself, just a piece of advice.
Test the services independently: e2e tests for the API server, the mail service, the frontend web app. If the Selenium tests run alright against the web page/app, and the API endpoint works on the local machine, then everything looks to be working. There is nothing magic in Docker; your local configs should reflect what you're trying to test. Avoid overcomplicating things and write the testing yourself.
Tools often take more time to learn than the actual thing you're trying to accomplish yourself. Document it adequately so the consumer of the container can replicate your setup with minimal effort.
It's actually pretty hard, good luck.
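For the API side, here is a minimal sketch of an integration test using Jest and supertest, run against an already-running server (in a Dockerized setup, the API container). The /api/items endpoint, the API_URL variable, and the response shape are assumptions for illustration, not details of your stack:

// integration.test.ts — exercises Express + MongoDB together over HTTP.
import request from 'supertest';

// Point this at the API service; in docker-compose this could be the
// service name, e.g. http://api:3000 (assumption).
const api = request(process.env.API_URL ?? 'http://localhost:3000');

describe('Express + MongoDB integration', () => {
  it('round-trips an item through the REST API', async () => {
    // Create a record via the API so Express and Mongo are exercised together.
    const created = await api
      .post('/api/items')
      .send({ name: 'widget' })
      .expect(201);

    // Read it back and check the values the Angular client would consume.
    const fetched = await api.get('/api/items/' + created.body._id).expect(200);
    expect(fetched.body.name).toBe('widget');
  });
});

In a Dockerized test environment, you would bring up the API and MongoDB containers first (docker-compose up, for instance), then run the test container with API_URL pointing at the API service on the compose network.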

How can I automate long running test cases with VSTS?

The software I work on has both unit tests and system tests. System tests can take minutes to run; they take input values, and we validate the results against expected output. There are hundreds of system tests. The software must be built (this is done) and tested on both Windows and Linux.
How can I automate testing with VSTS? I'd like to avoid doing this at the build stage, because it would slow the builds down, and I can't see how to automate it in the Test stage. Do I need additional extensions to do this? Everything seems so geared towards web development, e.g. Selenium tests. How do we run automated tests for good old binary programs?
I would suggest using Release Management to deploy your application to a test environment and then running your tests as part of your Release Definition. You can then choose to run tests in parallel so that your system tests don't take days to run.
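As one illustration of what that test step could run, here is a sketch of wrapping a command-line binary in a Jest suite (TypeScript); the binary path, the case files, and the plain-text output format are all assumptions, not details from your setup:

// system.test.ts — drives a binary with known inputs and diffs the output.
import { execFileSync } from 'child_process';
import { readFileSync } from 'fs';

// Each case pairs an input file with the output the binary should produce.
const cases = [
  { input: 'cases/basic.in', expected: 'cases/basic.out' },
  { input: 'cases/edge.in', expected: 'cases/edge.out' },
];

describe.each(cases)('system case $input', ({ input, expected }) => {
  it('matches the expected output', () => {
    // Run the binary under test; the same test code works on Windows and
    // Linux as long as BINARY_PATH points at the right build output.
    const actual = execFileSync(process.env.BINARY_PATH ?? './app', [input], {
      encoding: 'utf8',
    });
    expect(actual.trim()).toBe(readFileSync(expected, 'utf8').trim());
  });
});

Any framework that can emit JUnit-style XML (jest-junit here, NUnit or similar in .NET) lets the Publish Test Results task surface pass/fail in VSTS, and splitting the case list across agents gives you the parallelism mentioned above.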
On a side note, having so many system tests is a code smell. I would suggest looking into building as many fast running unit tests as possible and only using system tests when absolutely necessary.

How can I share deployment code between Lab Management and Release Management

Having just started using Microsoft Release Management, I am more and more convinced that it is not well suited to running integration tests. This might be a false impression, and I'd love to get more input on it. When we first considered the tool, my intention was to run the tests defined in our test plan through its pipeline, but now I'm seeing that we should be running those as frequently as possible. We would like to run integration testing every night, but our release candidates are only defined at the end of sprints, so using Release Management for that seems conflicting.
With the tool out of the equation, we are considering exploring the Lab Template again. We did some very minor tests with it a few months ago in a legacy project but never went too far. My main concern now is that both stages need deployment:
the Release Management pipeline needs to deploy our projects to the QA and production environment
the Lab Template also needs to deploy the project on a few virtual machines to run integration tests on
Release Management uses some very nice abstractions to achieve this. You can define machine scopes and define components based on the drop folder structure to describe each part of the whole application to be deployed. The Lab Management workflow, on the other hand, does not support this (or perhaps I'm just missing it). The standard way to make deployment work for lab testing is to write a custom PowerShell script that moves the files from the build drop folder to the correct places, creates the application pools for web projects, and so on, all by hand.
Ideally, I'd like to just share the entire deployment workflow between both tools and, since the Release Management way of doing it seems much simpler, I'd use that. This would make it easier to maintain both pipelines at the same time, which I assume is actually commonplace.
What is the correct approach to share the deployment code as much as possible between the two tools?
I would expect better integration between RM and MTM/LM to be a future feature. In the interim, you could investigate using Desired State Configuration (DSC), so that a single script describes the desired end state of an environment and can be applied to both your lab VMs and your release targets.
DSC support isn't really out-of-the-box in RM Update 2, but RM Update 3 will have built-in support for DSC against both Azure and on-premises VMs. Update 3 CTP 1 is out right now, but it's not production-ready.
You can still use DSC from RM in Update 2; it just requires a bit more work.

Salesforce.com deployment

We are currently working on a Salesforce.com custom Apex project that involves a lot of Apex classes, triggers, and Visualforce pages. We also have numerous applications from AppExchange that are part of the system.
We develop all the Apex classes, Visualforce pages, etc. in a test environment and then deploy them to the live environment using the Eclipse IDE. What happens is that every time we deploy changes to the live environment, all the test methods of all the classes (including those from AppExchange apps) seem to execute, so deploying even a simple change can take a couple of minutes.
Is there a way in Apex to "package" classes, by namespace or something like that, so that when we deploy a change, only the test methods relevant to that package are executed? If something like that exists, our deployments could happen much faster.
Unfortunately no, there is no partial testing for deployment of Apex code; every change, no matter how minute or self-contained, triggers a full test run. Among other things, this enforces code metrics (a minimum total code coverage, for instance).
IMHO, this is proving to be a two-sided coin when it comes to enforcing code reliability. When we started using Apex, all of our tests were very comprehensive, performing actual testing of the code with lots of asserts and checks. Then we started having very, very long deploy times, so now our tests serve one and only one function: satisfying minimum code coverage. Even with that simplification it takes almost 3 minutes to deploy anything, and we only use 20% of our Apex code allowance.
IMHO2, Apex is far too slow a platform to be enforcing this kind of testing. I can't even imagine how long the tests would run if we reached 50% of the allowance, let alone more.
This is possible, but you'll need to learn about Apache Ant and have a look at the Force.com Migration Tool. You can then use a build file to determine which files are deployed as well as which tests are run.
I'm busy writing a whitepaper that'll touch on this and other related development strategies... I'll post to my blog when it's done.
If you use the Apache Ant migration tool, you have many options for deployment, such as deployCodeFailingTest, which will skip the test classes. If you want to run only specific test classes, use something similar to this in your build.xml:
<target name="deployCode">
`<sf:deploy`
username="${sf.username}"
password="${sf.password}"
serverurl="${sf.serverurl}"
deployroot="codepkg">
<runTest>SampleDeployClass</runTest>
</sf:deploy>
</target>
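With a target like this in place, a deploy that runs only the named test class should just be a matter of invoking ant deployCode from the directory containing the build.xml (SampleDeployClass above is a placeholder for one of your own test classes).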
For a detailed reference, please see the Force.com Migration Guide:
http://www.salesforce.com/us/developer/docs/daas/salesforce_migration_guide.pdf
I would recommend the following approach:
Git as the repository for all your Salesforce code
Jenkins to deploy your code (CI/CD)
PMD as the static code analyser
SFDX as the deployment method in Jenkins
Refer to the Trailhead trailmix: https://trailhead.salesforce.com/users/strailhead/trailmixes/architect-dev-lifecycle-and-deployment

Strategy for Automated UI testing on remote virtual machines

I'm using TeamCity for my CI builds, and I'd like to set up a second build for running automated UI tests on Windows XP and Windows 7 virtual machines.
I imagine the build working as follows:
Compile, run unit tests, etc.
Prepare MSI using WiX
Copy MSI to target test machines
Remotely execute the MSIs
Copy test harness code to remote machine
Run tests
Build finishes
The automated UI tests are written using NUnit and would need to be run directly on the test virtual machine (they can't run remotely). It's important that if the tests fail, it appears in the TeamCity build log and the build fails. I'd rather not install VS or the TeamCity build agents on either of the test virtual machines.
It seems that most of this should be possible using psexec.exe; a rough sketch of the remote install and test-run steps is below. Are there any alternative (preferably open source) tools that I should look at?
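For illustration, here is a sketch of steps 4 and 6 driven from a Node/TypeScript script, assuming the MSI and test harness have already been copied to the VMs; the machine names, paths, credentials, and NUnit console location are all placeholders:

// deploy-and-test.ts — remote install and test run via psexec.
// execFileSync throws on a non-zero exit code, and psexec returns the
// remote process's exit code, so a failing test run fails the build step.
import { execFileSync } from 'child_process';

const machines = ['TEST-XP', 'TEST-WIN7']; // hypothetical VM names

for (const machine of machines) {
  // Quiet-install the MSI on the remote machine (msiexec /i /qn).
  execFileSync('psexec.exe', [
    `\\\\${machine}`, '-accepteula',
    '-u', process.env.TEST_USER!, '-p', process.env.TEST_PASS!,
    'msiexec', '/i', 'C:\\drop\\app.msi', '/qn',
  ], { stdio: 'inherit' });

  // Run the NUnit console directly on the VM, as the tests require.
  execFileSync('psexec.exe', [
    `\\\\${machine}`, '-accepteula',
    '-u', process.env.TEST_USER!, '-p', process.env.TEST_PASS!,
    'C:\\tests\\nunit3-console.exe', 'C:\\tests\\UiTests.dll',
  ], { stdio: 'inherit' });
}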
takes a deep breath
We were looking into something to help us out with our automated UI tests. We use Ranorex to test the UI and TeamCity/MSBuild to execute the tests.
We never found any tools to help us out (I'm constantly keeping an eye out for some, so I will monitor this thread), but here is what we did instead.
The CI server copies the setup files and test scripts to the Testing Host Server.
The CI server then launches a custom app on the Testing Host Server providing the name of the VM to launch.
The Testing Host Server then launches the VM software, using Virtual PC.exe -singlepc -pc vhdname.vhd -launch, and waits for it to shut down (after it has run its tests).
The VM grabs the setup files and scripts from the network location and executes them.
After the tests are run, it returns the results to a networked location and shuts itself down.
Control is returned to the custom app.
Control is returned to the CI server, which determines from the results whether the run passed or failed (and updates the UI so developers are made aware of the result).
Results are collected as artifacts in TeamCity and tagged in SVN.
I think that's everything. It's convoluted; however, it works. Hope some of that helps you.
Jeff Brown of the Gallio team has been talking about a tool called Archimedes that he's planning to write to support this kind of requirement. It sounds promising, but I don't think there has been much progress on it so far.
In the meantime, though, there is something in the Gallio project called VM Tool that may do what you want. It provides commands to stop, start, and snapshot virtual machines and, more importantly, to copy files back and forth and execute commands.
I presume you have also considered PowerShell Remoting?