Run Capybara tests against my staging server with Rake

I'm new to Capybara and really happy with its capabilities. I have a few feature tests that run on my build server in a test environment.
I'm thinking it would be good practice to have another set of tests that runs after a new version is published to my staging server (where QA does its testing).
These tests need to run against a remote server (which doesn't look like an issue), and they need to run against the staging environment.
How do I run one set of tests against the staging environment and another against the test environment?
Can I make a Rake task for staging?

There are multiple ways to do this. One is to create a separate directory for the staging tests and have RSpec run that directory when you want to test staging. Another is to tag the staging features (or scenarios) with metadata such as staging: true:
feature 'these tests are done in staging', staging: true do
...
end
and then run rspec with -t staging (which you could set up in your Rakefile).
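For instance, a minimal Rake task along these lines; this is a sketch assuming RSpec and Capybara, and the task name and staging URL are placeholders:

# Rakefile
require 'rspec/core/rake_task'

RSpec::Core::RakeTask.new(:staging_specs) do |t|
  t.rspec_opts = '--tag staging'  # run only examples tagged staging: true
end

# In a staging-only spec helper, point Capybara at the deployed server
# instead of booting the app locally, e.g.:
#   Capybara.app_host   = 'https://staging.example.com'
#   Capybara.run_server = false

rake staging_specs would then run only the tagged features against the remote staging server.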


Best practice to have different sets of test cases for different environments in Azure DevOps

I'll try to explain the setup and the procedure, and hopefully someone can tell me the best approach to the question at the end.
Setup:
We have the following environments for our application: QA, UAT, and Prod.
QA is our internal test environment, whereas UAT and Prod are on the customer side.
Procedure:
We prepare test cases in DevOps, and then run tests in the QA environment.
If all is OK and we are ready for UAT, the application is deployed to UAT for a test session (UAT session) with the customer. That session runs a subset of the test cases used during the QA phase.
Something similar is repeated for Prod, except that the customer is usually not involved.
Question:
What is the best practice for maintaining these different sets of test cases (QA, UAT, and Prod), so as to keep a record of the test runs in each environment?
I can think of:
Creating 3 test plans which reference the main set?
or creating 3 test suites?
or creating 3 configurations?
Your help is appreciated.
P.S. Mostly we do manual testing
We recommend you create three test plans, one per environment: for each environment, create a test plan and import the existing test cases into it. You can also, if you wish, divide the test cases into separate test suites within the plan to make these separate sets of test cases easier to manage and monitor.
Note: if you copy or clone the test cases, the copy creates a new baseline; changes to the new test cases don't affect your previous test plans.

How to do automated integration tests using XUnit (.Net Core 2.1) and AzureDevOps?

I'm using Team Foundation Version Control as a source control for my .NET Core 2.1 project.
Azure DevOps is configured for continuous integration: it checks out the code and builds it.
We have 3 environments (Staging, PreProd, Prod). Staging is not identical to Prod, so it is untrustworthy, and we have to execute our integration tests in each environment with environment-specific data.
My build is generated by an Azure DevOps agent on an on-premise server which can only reach the Prod environment.
I'd like to automate my xUnit integration tests in an Azure DevOps pipeline, but I don't know where and how to do it. Am I supposed to execute the integration test step after building, or after releasing?
It looks like I need to deploy my binaries to my environments first, then execute the integration tests, and, if they go wrong, roll back the release.
That seems weird.
How can I unblock this situation?
If you want to run integration tests, you first need to deploy your binaries to the environment. You can run the tests as a separate:
step,
stage, or
pipeline
after deploying the code. How you do it is up to you (to achieve the last option you need pipeline triggers).
If you follow the shift-left approach, meaning you detect issues as early as possible, you shouldn't worry about breaking things. If it happens on staging, I would encourage you to fix the issue rather than roll back the code, especially if it involves a data-model change.
On production you can run only smoke tests, which are a kind of integration test that doesn't affect state. They are like GET in REST: smoke tests should be idempotent, so you can run them without worrying about changing state.
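As a sketch of the stage option, a multi-stage YAML pipeline could deploy first and then run the xUnit tests against that environment. Stage names, the project pattern, and the URL are placeholders, and the actual deploy step is elided:

stages:
- stage: DeployStaging
  jobs:
  - deployment: Deploy
    environment: staging
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy your binaries to staging here"
- stage: IntegrationTests
  dependsOn: DeployStaging
  jobs:
  - job: Test
    steps:
    - task: DotNetCoreCLI@2
      inputs:
        command: test
        projects: '**/*IntegrationTests*.csproj'
      env:
        TEST_BASE_URL: 'https://staging.example.com'  # environment-specific data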
Since you use TFVC, you could define a build pipeline to build and test your code and publish artifacts, and then a release pipeline to consume those artifacts and deploy them to your deployment targets.
As you have to execute integration tests in each environment with environment-specific data, you can run your xUnit integration tests in the release pipeline via the VSTest task.
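For reference, this is roughly what the VSTest task looks like in YAML (a classic release pipeline configures the same options in the UI; the assembly pattern is a placeholder):

- task: VSTest@2
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: |
      **\*IntegrationTests*.dll
      !**\obj\**
    searchFolder: '$(System.DefaultWorkingDirectory)'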

How to test Jenkins Workflow

Is there an example of how to do testing against the Jenkins Workflow Groovy DSL?
Something similar to the example for the Jenkins Job DSL.
What I've done is create a complete dev-test environment, using a docker-compose file that includes Jenkins, GitLab, and Archiva. I push to a "jenkins-test" origin and run the workflow in the safe "test" environment.
Here's my docker-compose in case someone is interested as a starting point, or as a simple test env:
https://github.com/portenez/dry-dock
It's not fully automated, but it's a good start.
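For orientation, the shape of such a compose file is roughly this (image names and ports are illustrative assumptions; see the linked repo for the real configuration):

version: '2'
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"
  gitlab:
    image: gitlab/gitlab-ce
    ports:
      - "8081:80"
  archiva:
    image: apache/archiva  # illustrative; substitute an Archiva image of your choice
    ports:
      - "8082:8080"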
No, running a workflow script requires Jenkins to actually be running (since most of what it does is interact directly with Jenkins features like slaves and test results), so the only way to test it is to have a test Jenkins server and run it. By far the most convenient ways to do that in a fully automated way are:
Use JenkinsRule in the Jenkins test harness, like plugins would do in their test sources. Example
Use the acceptance-test-harness project as a dependency to create integration tests driven via Selenium. Example
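For the JenkinsRule route, a minimal test might look like this; it's a sketch assuming the workflow-job and workflow-cps plugins plus the Jenkins test harness are on the test classpath:

import org.jenkinsci.plugins.workflow.cps.CpsFlowDefinition;
import org.jenkinsci.plugins.workflow.job.WorkflowJob;
import org.junit.Rule;
import org.junit.Test;
import org.jvnet.hudson.test.JenkinsRule;

public class MyWorkflowTest {
    @Rule public JenkinsRule j = new JenkinsRule();

    @Test public void workflowRuns() throws Exception {
        // Create a workflow job on the embedded test Jenkins
        WorkflowJob job = j.jenkins.createProject(WorkflowJob.class, "demo");
        // The boolean enables the Groovy sandbox; older plugin versions take only the script
        job.setDefinition(new CpsFlowDefinition("node { echo 'hello from a test' }", true));
        // Run the job and fail the test unless the build succeeds
        j.assertBuildStatusSuccess(job.scheduleBuild2(0));
    }
}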

Sailsjs 0.10.x : How to run the production grunt tasks for a staging environment

Sails.js 0.10.x: I added a new file, staging.js, to the config/env folder.
Starting the server with >sails lift --staging shows that the staging file was used.
But it still uses the development Grunt tasks, e.g. no minification, dev blueprint settings, etc.
I was wondering if there is an easy way to run the prod Grunt tasks with a new environment like staging?
In your application's Sails node module, you can navigate to the Grunt hook (app_base/node_modules/sails/lib/hooks/grunt/index.js). In its initialize method there is a condition that checks whether the environment is the production environment and, if so, calls the production Grunt task. You could edit this condition to include your staging environment, but this file isn't meant to be altered: updating the module in the future would erase any changes you make.
The best thing to do is to go to app_base/tasks/register/prod.js, move your staging-environment tasks into the prod task, and just use the prod environment. Alternatively, you could copy the production tasks over to your staging-environment tasks.
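As a sketch of that alternative, tasks/register/staging.js could register a staging task mirroring prod.js. Note this only takes effect if your setup actually invokes a Grunt task named after the environment (for example via the edited hook condition above); the subtask names below are illustrative, so copy whatever your generated prod.js lists:

// tasks/register/staging.js
module.exports = function (grunt) {
  grunt.registerTask('staging', [
    'compileAssets',
    'concat',
    'uglify',
    'cssmin'
    // ...plus the sails-linker subtasks your prod.js registers
  ]);
};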

Jenkins - Promoting a build to different environments

I was hoping for some guidance on the best way to promote a build through its environments.
We have 3 environments, DEV, STAGING, PROD.
The DEV Jenkins build runs in a continuous-integration setup: as code is checked into Subversion, Jenkins runs a new build (clean, compile, test, deploy).
The tricky bit is when it comes to STAGING and PROD.
The idea was to be able to manually promote a successful DEV build to STAGING.
STAGING build would check out the DEV's SVN Revision number, build, test, deploy to staging and finally create a branch in SVN.
Lastly the release manager could manually promote the STAGING build to PROD.
PROD build would check out the branch from the previous STAGING build, deploy to PROD and tag the branch as a release.
I have tried to use a combination of the Promoted Builds Plugin and the Parameterized Trigger Plugin, but with no luck. The Subversion revision number doesn't seem to get passed from the DEV build to the STAGING build.
Does anyone have any guidance on their process to promote a build through multiple environments?
Another approach is to make use of the Artifact storage Jenkins provides coupled with the Copy Artifact Plugin.
1. When a build completes, instruct Jenkins to persist your application, either as a compressed zip/tar.gz or as an application bundle (jar/war).
2. Trigger a downstream job and use Copy Artifact to retrieve the recorded artifact from the upstream job (or use parameterized builds).
3. Deploy/unzip the artifact as necessary (build shell script/Maven deploy?).
4. Retest the application using the same sources/binaries as were created in step 1.
5. Repeat for PROD as necessary.
This approach would allow you to fingerprint the artifacts, and thus Jenkins would link builds together in the UI, as well as allow more formal sign off.
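If you drive promotion from a Pipeline job rather than freestyle jobs, recent versions of the Copy Artifact plugin also expose a pipeline step; a sketch, where the job name and build-number parameter are placeholders:

// Copy the artifacts recorded by a specific upstream DEV build
copyArtifacts projectName: 'dev-build',
              selector: specific("${params.DEV_BUILD_NUMBER}"),
              fingerprintArtifacts: true  // lets Jenkins link the builds in the UI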
In this scenario, why do you need to go back and label the branch in SVN? We don't use SVN, but with TFS, when Hudson/Jenkins gets the code, the changeset number it retrieved is in the build log. So we know what code the build came from and can get back to it at any time.
Then we promote the build from environment to environment using Hudson, the source control system doesn't need to know where the code is deployed.
If it's absolutely necessary to store the SVN Revision ID, then add a build step to your DEV job that copies it to a file. Something like this:
echo %SVN_REVISION%>revision.ini
or something like this:
echo MY_SVN_REVISION=%SVN_REVISION%>revision.ini
Then archive revision.ini as an artifact. When doing a STAGING build, use the Copy Artifact plugin (as mentioned in a previous answer) to retrieve the revision.ini file specific to that build and load it into a variable. Then use that variable in a command-line call to svn to build the tag.
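Putting the STAGING side together, something like this (Windows batch, matching the snippets above; the repository URLs are placeholders, and this assumes the second, key=value form of revision.ini):

REM Load MY_SVN_REVISION from the revision.ini retrieved by Copy Artifact
for /f "tokens=1,2 delims==" %%a in (revision.ini) do set %%a=%%b
REM Branch exactly that revision for the STAGING build
svn copy -r %MY_SVN_REVISION% http://svn.example.com/repo/trunk http://svn.example.com/repo/branches/staging-%MY_SVN_REVISION% -m "Branch DEV r%MY_SVN_REVISION% for STAGING"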