I have a multi-service project which is deployed into Kubernetes clusters. Currently we're trying to cover our frontend with Cypress tests, which we execute in our CI pipeline in Azure DevOps.
The main problem is keeping the test flow consistent, as every test is able to change the test data. The ideal solution, as I see it, would be to isolate the tests by resetting the database (loading data from a dump) between test suites or even between single tests. This works fine locally, but in the CI environment it doesn't seem feasible: the caches (Redis and Memcached) also need to be reset, and loading the dump itself takes several minutes each time.
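To make the idea concrete, the reset step boils down to something like the sketch below (the database is assumed to be PostgreSQL purely for illustration; hosts, database name and dump path are placeholders):

```python
# reset_test_data.py - illustrative sketch only; adjust hosts, DB name and dump path
import socket
import subprocess

def reset_database(dump_path: str = "seed.dump") -> None:
    # Drop and recreate the test database, then load the seed dump (assumes PostgreSQL)
    subprocess.run(["dropdb", "--if-exists", "app_test"], check=True)
    subprocess.run(["createdb", "app_test"], check=True)
    subprocess.run(["pg_restore", "--no-owner", "-d", "app_test", dump_path], check=True)

def flush_redis(host: str = "localhost", port: int = 6379) -> None:
    # Clear all Redis keys via redis-cli
    subprocess.run(["redis-cli", "-h", host, "-p", str(port), "FLUSHALL"], check=True)

def flush_memcached(host: str = "localhost", port: int = 11211) -> None:
    # Memcached has no CLI, but its text protocol accepts a raw flush_all command
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(b"flush_all\r\n")
        conn.recv(64)  # expect b"OK\r\n"

if __name__ == "__main__":
    reset_database()
    flush_redis()
    flush_memcached()
```

Running this before every suite is exactly what becomes too slow in CI, since loading the dump alone takes minutes.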
I would love it if you could share your experience and thoughts on this.
Thanks in advance!
I will try to explain the setup and the procedure, and hopefully someone can tell me the best approach to the question at the end.
Setup:
We have the following environments for our application: QA, UAT and Prod.
QA is our internal test environment, whereas UAT and Prod are on the customer side.
Procedure:
We prepare test cases in DevOps, and then run tests in the QA environment.
If all is OK and we are ready for UAT, the application is deployed to UAT in order to hold a test session (UAT session) with the customer. The test session runs test cases which are a subset of the original test cases used during the QA phase.
Something similar is repeated for Prod, except that usually the customer is not involved.
Question:
What is the best practice for maintaining these different test case sets (QA, UAT and Prod) in order to keep a record of the test runs in each environment?
I can think of:
Creating 3 test plans which reference the main set?
or creating 3 test suites?
or creating 3 configurations?
Your help is appreciated.
P.S. Mostly we do manual testing
We recommend you create 3 test plans, one for each environment. For each environment, you can create a test plan and import the existing test cases into that plan. You can also, if you wish, divide the test cases into separate test suites within the plan to make these separate sets of test cases easier to manage and monitor.
Note: if you copy or clone the test cases, the copy creates a new baseline. Changes to these new test cases don't affect your previous test plans.
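If you ever want to script the creation of those three plans instead of clicking through the UI, the Test Plans REST API can do it. A rough sketch in Python (organisation, project and PAT are placeholders, and the exact api-version may differ on your Azure DevOps instance):

```python
# create_test_plans.py - hedged sketch; check the api-version against your organisation
import requests

ORG = "your-organisation"
PROJECT = "your-project"
PAT = "your-personal-access-token"   # Azure DevOps accepts a PAT as the basic-auth password

url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/testplan/plans?api-version=7.1"

for env in ("QA", "UAT", "Prod"):
    resp = requests.post(url, json={"name": f"{env} test plan"}, auth=("", PAT))
    resp.raise_for_status()
    print(env, "plan created, id:", resp.json()["id"])
```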
I'm building a cluster visualization tool for Kubernetes that runs inside users' clusters.
My goal is to make this tool freely available. The most obvious way to distribute it is to tell people to kubectl apply -f www.ourgithub/our-configs.yaml, which pulls our images and voila.
That's all fine. Now the problem is how do we push updates?
I've considered these options but none seem very good:
Using something like https://github.com/chartmuseum/helm-push
Having the apps themselves check for updates and "restart" themselves (i.e. imagePullPolicy=Always, scale to 0 and back up; see the sketch after this list)
Having users download an executable on their machines that periodically checks for updates
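To make the second option concrete, here is a rough sketch of a restart trigger using the official Kubernetes Python client; the deployment and namespace names are made up, and it assumes the Deployment uses imagePullPolicy: Always on a floating tag:

```python
# trigger_update.py - illustrative sketch of the "restart to pull the new image" idea
from datetime import datetime, timezone

from kubernetes import client, config

def restart_deployment(name: str = "cluster-viz", namespace: str = "cluster-viz") -> None:
    config.load_kube_config()  # inside the cluster this would be config.load_incluster_config()
    apps = client.AppsV1Api()
    # Patching the pod template annotation forces a rolling restart, which re-pulls
    # the image because of imagePullPolicy: Always
    patch = {
        "spec": {
            "template": {
                "metadata": {
                    "annotations": {
                        "kubectl.kubernetes.io/restartedAt": datetime.now(timezone.utc).isoformat()
                    }
                }
            }
        }
    }
    apps.patch_namespaced_deployment(name=name, namespace=namespace, body=patch)

if __name__ == "__main__":
    restart_deployment()
```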
I want to be able to push updates reliably so I want to make sure I'm using the most robust method there is.
What is the best practice for this?
Use a separate CI/CD pipeline for building and testing Docker images, and a separate pipeline for deploying.
Your pipeline should deploy the application in the version that is already running in the environment, then deploy the new one, run e2e tests to verify everything is correct, and only then push the new version to the desired cluster.
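As a hedged sketch of that flow (the manifests, kubectl contexts and test command are placeholders, not a real pipeline definition):

```python
# deploy_and_verify.py - illustrative only
import subprocess
import sys

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main() -> None:
    # 1. Deploy the version that is currently released to the test environment
    run("kubectl", "--context", "test-cluster", "apply", "-f", "manifests/current/")
    # 2. Deploy the new candidate on top of it
    run("kubectl", "--context", "test-cluster", "apply", "-f", "manifests/candidate/")
    # 3. Run the e2e suite against the test environment
    run("npx", "cypress", "run", "--config", "baseUrl=https://test.example.com")
    # 4. Only if everything passed, push the new version to the desired cluster
    run("kubectl", "--context", "prod-cluster", "apply", "-f", "manifests/candidate/")

if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as err:
        sys.exit(err.returncode)
```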
This question concerns use of the Jenkins Workflow plugin and "synchronizing" a stage amongst independent jobs.
We have a generic workflow for multiple projects with steps:
build project
push project to test environment
run (long) end-to-end test suite
push project to production
Step 3 runs for a long time. If multiple projects are built and pushed to the test environment within the same window of time, we'd like to run the end-to-end test suite only once.
Can we have the jobs somehow synchronize on step 3?
The desired orchestration can be achieved by making Step 3 a build step, i.e.
build end-to-end-tests
Where end-to-end-tests is a job dedicated to running the slow end-to-end tests.
Adding a Quiet period to end-to-end-tests supports the goal of "collecting" the projects updated within a time window into a single end-to-end test run. That is, if projects A and B are pushed to the test environment within the Quiet period (in seconds) of each other, then end-to-end-tests runs only once.
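For completeness, the same shared job can also be triggered from outside a Pipeline over Jenkins' REST API, which is sometimes convenient for non-Workflow jobs; a minimal sketch (URL, job name and credentials are placeholders):

```python
# trigger_e2e.py - sketch; depending on security settings a CSRF crumb may also be required
import requests

JENKINS = "https://jenkins.example.com"
AUTH = ("ci-user", "api-token")

def trigger_end_to_end_tests() -> None:
    # Jenkins coalesces identical queued builds, so triggers arriving within the
    # job's Quiet period still result in a single end-to-end run
    resp = requests.post(f"{JENKINS}/job/end-to-end-tests/build", auth=AUTH)
    resp.raise_for_status()
    print("queued:", resp.headers.get("Location"))

if __name__ == "__main__":
    trigger_end_to_end_tests()
```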
JENKINS-30269 might be helpful, but your use case is indeed subtly different from the usual one that RFE would solve; you really seem to need a cross-job stage, which is not currently possible though in principle such a step could be written. In the meantime, a downstream deployment job is probably the most reasonable workaround.
We decided to use Amazon AWS cloud services to host our main application and other tools.
Basically, we have an architecture like this:
TESTSERVER: The EC2 instance our main application is deployed to. Testers have access to the application.
SVNSERVER: The EC2 instance hosting our Subversion repository.
CISERVER: The EC2 instance where JetBrains TeamCity is installed and configured.
Right now, I need CISERVER to check out the code from SVNSERVER, build it, run the unit tests if the build is successful, and, once all tests pass, deploy the artifacts of the successful build to TESTSERVER.
I have completed configuring CISERVER to pull the code, build, test and produce artifacts, but I haven't figured out how to deploy the artifacts to TESTSERVER.
Do you have any suggestion or procedure to accomplish this?
Thanks for the help.
P.S: I have read this Question and am not satisfied.
Update: There is a deployer plugin for TeamCity which allows you to publish artifacts in a number of ways.
Old answer:
Here is a workaround for the issue that TeamCity doesn't have built-in artifacts publishing via FTP:
http://youtrack.jetbrains.net/issue/TW-1558#comment=27-1967
You can
create a configuration which produces build artifacts
create a configuration which publishes the artifacts via FTP
set an artifact dependency in TeamCity from configuration 2 to configuration 1
Use either manual or automatic triggering to run configuration 2 with artifacts produced by configuration 1. This way, your artifacts will be downloaded from build 1 to configuration 2 and published to your FTP host.
Another way is to create an additional build step in TeamCity for configuration 1, which publishes your files via FTP.
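As a rough illustration of what the FTP-publishing piece could look like (whether it lives in configuration 2 or in an extra build step), something along these lines would do; host, credentials and paths are placeholders:

```python
# publish_artifacts.py - minimal sketch using the standard library ftplib
import os
from ftplib import FTP

def publish(artifacts_dir: str = "artifacts", remote_dir: str = "/releases") -> None:
    with FTP("ftp.example.com") as ftp:
        ftp.login(user="deploy", passwd="secret")
        ftp.cwd(remote_dir)
        for name in os.listdir(artifacts_dir):
            path = os.path.join(artifacts_dir, name)
            if os.path.isfile(path):
                with open(path, "rb") as fh:
                    ftp.storbinary(f"STOR {name}", fh)  # upload under the same name
                print("uploaded", name)

if __name__ == "__main__":
    publish()
```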
Hope this helps,
KIR
What we do for deployment is that the QA people log on to the system and run a script that deploys by pulling from the TeamCity repository whenever they want. They can see in TeamCity (and get an e-mail) if a new build happened, but regardless they just deploy when they want. In terms of how to construct such a script, the TeamCity part of it is just retrieving the artifact. That is why my answer references getting the artifacts by URL - that is something any reasonable script can do using wget (which has a Windows port as well) or similar tools.
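For instance, the artifact-by-URL part of such a script could look roughly like this in Python instead of wget (server, build configuration ID, artifact name and credentials are placeholders):

```python
# fetch_artifact.py - sketch; TeamCity serves the last successful build's artifacts at a stable URL
import requests

TEAMCITY = "https://teamcity.example.com"
BUILD_TYPE = "MyProject_Build"   # build configuration ID
ARTIFACT = "app.zip"

def fetch_latest_artifact(dest: str = "app.zip") -> None:
    url = f"{TEAMCITY}/httpAuth/repository/download/{BUILD_TYPE}/.lastSuccessful/{ARTIFACT}"
    resp = requests.get(url, auth=("deploy", "secret"), stream=True)
    resp.raise_for_status()
    with open(dest, "wb") as fh:
        for chunk in resp.iter_content(chunk_size=65536):
            fh.write(chunk)

if __name__ == "__main__":
    fetch_latest_artifact()
```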
If you want automated deployment, you can schedule a cron job (or the Windows scheduler) to run the script at regular intervals. If nothing has changed, it doesn't matter much. I question the wisdom of this, given that it may disrupt someone's testing by restarting the system involved.
The solution of having TeamCity push the changes as they happen is not something that TeamCity does out of the box (as far as I know), but you could roll your own, for example by having something triggered via one of TeamCity's notification methods, such as e-mail. I just question the utility of that. Do you want your system changing at random intervals just because someone happened to check something in? I would think it preferable to actually request the new version.
I have an automated deployment process for a Java app where currently I'm building the app on a build machine, checking the build into SCM, and having the production machine pull the build artifact (a zip) and, via Ant, move the class and config files to where they're supposed to be.
I've seen other strategies where the production machine pulls the source from scm and builds it itself.
The thing I don't like about the former approach is that if I'm building for production instead of staging or dev or whatever, I have to manually specify the env in the build. If the target server were in charge of this, though, there would be less thought and friction involved in the build. However, I also like using the exact same build as was being tested on staging.
So, I guess my question is: is it preferable to copy the already built/already tested app to production, or to have production build the app again once it's been tested?
If you already have an automated build system that creates a testing build, how hard would it be to extend it so that it builds both a testing build and a production build at the same time? This way you get the security of knowing they were built from exactly the same checked-out source, and you have less manual labor. I really cringe at the idea of checking built artifacts into SCM!
I always prefer keeping as little as humanly possible on a production server - less to update, less to go wrong.