Is deployment to a test environment a part of Continuous Integration? [closed]

I have checked quite a lot of sources, and it still remains unclear to me: is deployment to a test environment part of CI, or is CI just about committing often and keeping the mainline bug-free and integrated? Some sources say the latter; others say deployment to a target environment is part of CI.
Otherwise I do not see the difference between CI and Continuous Delivery.

Continuous integration might or might not require something you'd consider a deployment to a test environment. The main point of CI is that automated tests are run on a version of the software to ensure that that version is ready to be deployed to the next step (QA, staging, production, or whatever the next step in one's process is). So the software is deployed only if deployment is needed for it to be tested.
There's always a test environment of some kind, because the automated tests have to run on some computer(s), but the code might or might not get there through what you'd consider a deployment. For example, if an application is in an interpreted language, running automated tests might require nothing more than copying the source to the test environment and running a script, not actually deploying.
Whether deployment is needed for automated testing depends on what kind of automated tests the application has. If it only has unit tests, no deployment is needed. If it has full-stack integration tests, deployment might or might not be needed, depending on the integration test framework. For example, the integration testing framework that is part of Rails runs a test-specific version of a Rails server for tests to talk to, so those tests don't require deployment. On the other hand, other frameworks might not provide that support, so the application would have to be deployed to the test environment to give full-stack integration tests something to run against. Or a CI build might include automated performance tests; those would certainly need to run against the application deployed to a test environment.
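The decision above can be sketched as a minimal CI driver script. Every step here is a hypothetical placeholder for a project's real build, deploy, and test commands; the point is only the ordering and the conditional deploy:

```shell
#!/bin/sh
set -e  # stop the pipeline at the first failing step

# Placeholder step implementations; a real pipeline would call the
# project's actual build, deploy, and test commands instead of echoing.
build()              { echo "compiling sources"; }
unit_tests()         { echo "running unit tests"; }
deploy_to_test_env() { echo "deploying build to test environment"; }
integration_tests()  { echo "running full-stack integration tests"; }

build
unit_tests           # unit tests never need a deployment

# Deploy only if the integration tests need a running instance; a framework
# that spins up its own test server (like Rails) would set NEEDS_DEPLOY=0.
NEEDS_DEPLOY="${NEEDS_DEPLOY:-1}"
if [ "$NEEDS_DEPLOY" = "1" ]; then
    deploy_to_test_env
fi
integration_tests
```

Either way, the CI build's verdict is the same: this version passed its tests and is ready for the next step.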


Is there a need to run containers on local/dev machine? [closed]

I want to explore options for a development process (web API + worker services) with deployment to Azure Container Apps in mind.
In particular, I am wondering: is there any reason to run containers on a developer's machine, or should apps be run and unit tested locally without containers, with containers used only in the CI/CD pipeline?
In that case, integration tests would also be performed only in the CI/CD pipeline.
What's also important is that different devs on the team may have different machines (Windows, macOS, Linux), and we want a unified dev process for all.
What is a typical development flow?
This is mostly opinion-based and depends on how well debugging works for your specific stack. For example, I work with Blazor WebAssembly and most of the time I debug in containers, because my application is hosted in Podman; however, if I am investigating a client-side issue, containers are not convenient because debugging does not work properly.
With containers, you are as close as you can get to shipping your dev machine to the cloud.
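One way to keep the workflow uniform across Windows, macOS, and Linux developer machines is a small wrapper that resolves whichever container engine is installed. This is a sketch only; the image name and ports are invented placeholders, and the commands are echoed rather than executed:

```shell
#!/bin/sh
# dev.sh - one entry point for every developer machine.
# Resolve whichever container engine is installed: Podman or Docker.
ENGINE="$(command -v podman || command -v docker || true)"

if [ -z "$ENGINE" ]; then
    echo "no container engine found; install Podman or Docker Desktop" >&2
else
    # Identical commands on every host OS; a real script would run
    # these instead of printing them. "myapi:dev" is a made-up image name.
    echo "build: $ENGINE build -t myapi:dev ."
    echo "run:   $ENGINE run --rm -p 8080:8080 myapi:dev"
fi
```

Because the image is the same one the CI/CD pipeline builds, a dev who hits a container-only bug can reproduce it locally regardless of host OS.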

How do I run Urban Code Deploy FVT tests locally?

My project at work used Urban Code Deploy (UCD) for its continuous deployment process. My code runs locally and passes all unit tests, but the build group says that my code is failing the FVT test being run by UCD. Is there any way to run this FVT test locally, or at least attempt to run it, so I can hopefully figure out what is failing?
UC Deploy isn't a testing tool. So the team that has set it up has it running some other testing tool at the end of the deployment (which is pretty normal).
You'll need to ask them what testing tool they're using and go from there.
If you can see how the build group is deploying your code, you should be able to see what testing they are doing, and then be able to replicate that in your own environment. Often the code changes and changes in requirements will not be reflected in the FVT tests, and you need to deliver updated FVT test scripts in conjunction with your code changes.

How can I share deployment code between Lab Management and Release Management

After having just started using Microsoft Release Management, I am more and more convinced that it is not well suited to running integration tests. This might be a false impression, and I'd love to get more input on this. When we first considered it, I had the intention to run the tests defined in our test plan through its pipeline, but now I'm seeing that we should be running those as frequently as possible. We would like to run integration testing every night, but our release candidates are only defined at the end of sprints, so using Release Management for that seems conflicting.
With the tool out of the equation, we are considering exploring the Lab Template again. We did some very minor tests with it a few months ago in a legacy project but never went too far. My main concern now is that both stages need deployment:
the Release Management pipeline needs to deploy our projects to the QA and production environment
the Lab Template also needs to deploy the project on a few virtual machines to run integration tests on
Release Management uses some very nice abstractions to achieve that. You can define machine scopes and define components based on the drop folder structure to describe each part of the whole application to be deployed. On the other hand, the Lab Management workflow does not support this (or perhaps I'm just missing it). The standard way to make deployment work for lab testing is to write a custom PowerShell script that moves the files from the build drop folder to the correct places, creates the application pools for web projects, and so on, all by hand.
Ideally, I'd like to just share the entire deployment workflow between both tools and, since the Release Management way of doing it seems much simpler, I'd use that. This would make it easier to maintain both pipelines at the same time, which I assume is actually commonplace.
What is the correct approach to share the deployment code as much as possible between the two tools?
I would expect better integration between RM and MTM/LM to be a future feature. In the interim, you could investigate using Desired State Configuration to maintain a single script that configures environments for you.
DSC support isn't really out-of-the-box in RM Update 2, but RM Update 3 will have built-in support for DSC targeting both Azure and on-prem VMs. Update 3 CTP 1 is out right now, but it's not production-ready.
You can still use DSC from RM in Update 2; it just requires a bit more work.

Recommendation for Automated Deployment tools for Windows Environment? [closed]

As a .NET programmer, I create Windows Services and Web Applications.
These are deployed on a staging environment and then, after regression tests are successful, they are deployed on a production environment.
Although it looks more like an IT-related question, I think that when a version changes, for instance, it's only the programmer who actually knows the impact of the new version and which service should be taken offline in order to change its DLLs before bringing it online again (just an example).
So I think that it's the programmer's job to create some kind of an automated script (or something) to be executed on each target machine.
Do you happen to know such a framework for Windows Server 2003/2008 machines?
My requirements for such framework are:
Human-readable script (or anything else textual, such as XML)
Can detect a service by its name and start/stop it
Can detect a web application installed on IIS and start/stop it
Copy/Create/Move/Delete/Compress/Decompress files and folders
Can send emails
Thanks in advance!
For the record, I found that a PowerShell script can be helpful to my needs.
I think it's good practice to use the native tooling on each platform.
On Windows that's Windows Installer, and the tools based on it: WiX and InstallShield.
We use WiX in our company for many projects, for deployment of both client and server components.
It's XML-based (as you asked) and quite simple to start using immediately.
Some concepts are tricky to understand, but there is plenty of information on the Internet, and once you've done something, you'll never forget it.

Solutions for automated deployment in developer environments?

I am setting up an automated deployment environment for a number of decoupled services that are in active development. While I am comfortable with the automated deployment/configuration management aspect, I am looking for strategies on how best to structure the deployment environment to make things a bit easier for developers. Some things to take into consideration:
Developers are generally building web applications, web services, and daemons -- all of which talk to one another over HTTP, sockets, etc.
The developers may not have everything running on their local machine, but still need to be able to quickly do end-to-end testing by pointing their machine at the environment
My biggest concern with continuous deployment is that we have a large team and I do not want to be constantly restarting services while developers are working locally against those remote servers. On the flip side, delaying deployments to this development environment makes integration testing much more difficult.
Can you recommend a strategy that you have used in this situation in the past that worked well?
Continuous integration doesn't have to mean continuous deployment. You can compile/unit test/etc. the code "continuously" throughout the day without deploying it, and only deploy at night. This is often a good idea anyway - to deploy at night or on demand - since people may be integration testing during the day and wouldn't want the codebase to change out from under them.
Consider how much of the software developers can test locally. If a lot, they shouldn't need the environment constantly. If not a lot, it would be good to set up mocks/stubs so that much more can be tested on a local server. Then the deployed environment is only needed for true integration testing and doesn't need to be updated constantly throughout the day.
I'd suggest setting up a CI server (Hudson?) and using it to control all deployments to both your QA and production servers. This forces you to automate all aspects of deployment and ensures that there are no ad-hoc restarts of the system by developers.
I'd further suggest that you consider publishing your build output to a repository manager like Nexus, Artifactory, or Archiva. That way, deployment scripts can retrieve any version of a previous build. The use of a repository manager would enable your QA team to certify a release prior to its deployment to production.
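Retrieving a certified build from such a repository manager usually comes down to constructing the artifact's URL from its coordinates. A minimal sketch for a Maven-style repository layout; the repository host and coordinates below are invented examples:

```shell
#!/bin/sh
# Build the download URL for an artifact in a Maven-style repository layout:
#   <repo>/<group path>/<artifact>/<version>/<artifact>-<version>.<ext>
artifact_url() {
    repo="$1"; group="$2"; artifact="$3"; version="$4"; ext="$5"
    # Group IDs use dots; the repository path uses slashes.
    group_path="$(echo "$group" | tr '.' '/')"
    echo "$repo/$group_path/$artifact/$version/$artifact-$version.$ext"
}

# A deployment script could then fetch any previously certified build, e.g.:
#   curl -fLO "$(artifact_url https://nexus.example.com/repository/releases \
#                             com.example myapp 1.4.2 war)"
artifact_url "https://nexus.example.com/repository/releases" com.example myapp 1.4.2 war
```

Because the version is a parameter, the same script serves both routine deployments and rollbacks to an earlier certified build.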
Finally, consider one of the emerging deployment automation tools. Tools like Chef, Puppet, and ControlTier can be used to further version-control the configuration of your infrastructure.
I agree with Mark's suggestion of using Hudson for build automation. We have seen successful continuous deployment projects that use Nolio ASAP (http://www.noliosoft.com) to automatically deploy the application once the build is ready. As stated, Chef, Puppet, and the like are good for middleware installations and configurations, but when you need to continuously release new application versions, a platform such as Nolio ASAP, which is application-centric, is better suited.
You should have your best IT operations folks create and approve the application release processes, and then provide an interface for the developers to run these processes on approved environments.