I have an automated deployment process for a Java app. Currently I build the app on a build machine, check the build into SCM, and have the production machine pull the build artifact (a zip) and, via Ant, move the class and config files to where they're supposed to be.
I've seen other strategies where the production machine pulls the source from scm and builds it itself.
The thing I don't like about the former approach is that if I'm building for production instead of staging or dev or whatever, I have to manually specify the env in the build. If the target server were in charge of this, though, there would be less thought and friction involved in the build. However, I also like using the exact same build as was being tested on staging.
So, I guess my question is: is it preferred to copy the already built/already tested app to production, or to have production build the app again once it's been tested?
If you already have an automated build system that creates a testing build, how hard would it be to extend it so that it builds both a testing build and a production build at the same time? That way you get the security of knowing they were built from the exact same checked-out source, and you have less manual labor. I really cringe at the idea of checking built artifacts into SCM!
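As a rough illustration (all target, path, and file names here are hypothetical), an Ant build can produce both packages from a single compile, so the staging and production zips come from the same checked-out source and differ only in the environment config they bundle:

```xml
<!-- hypothetical build.xml fragment: one compile step, two environment packages -->
<target name="package" depends="compile">
    <zip destfile="dist/myapp-staging.zip">
        <fileset dir="build/classes"/>
        <fileset dir="config/staging"/>
    </zip>
    <zip destfile="dist/myapp-production.zip">
        <fileset dir="build/classes"/>
        <fileset dir="config/production"/>
    </zip>
</target>
```

The build machine then publishes both zips, and each target server only ever unzips the one intended for it.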
I always prefer keeping as little as humanly possible on a production server - less to update, less to go wrong.
I am working on a project with multiple people: a web application whose assets have to be built with webpack (bundled, uglified, and concatenated into a few files, e.g. app.min.js, style.min.css). Because of this, in an effort to prevent merge conflicts, we recently added the build folder to .gitignore, on the assumption that we would be able to build during deployment.
When pushing to the Master branch, we automatically "deploy" through Semaphore CI (similar to Travis), which runs composer install, npm install, and finally npm run build, which triggers the webpack build. This is all built and tested on the CI side, and then Semaphore automatically deploys to Amazon's Elastic Beanstalk, where our application is hosted.
The problem is that Semaphore doesn't seem to upload the build it has just tested, but rather the Master branch itself, which has no built JS or CSS. I'm wondering if there's a way to push these built files to deployment as well, or if running the entire build process again on Elastic Beanstalk is the only route. It seems unnecessary to run that process essentially three times: locally, on CI, and then on deployment. Every extra step like this on EB makes the re-instantiation time longer, which I'd like to keep as short as possible.
Obviously, if building it a third time on EB is the only way to go about this, then I'll have to; I'm just wondering if there are better solutions for this whole workflow.
I haven't worked with Semaphore CI, but you might be able to use an .ebignore file.
If you create one, the EB CLI will use it instead of your .gitignore file.
I find that in some deployment situations you want the inverse of your .gitignore (all compiled, no src). It essentially lets you pick the files from your project directory that you want to deploy, in the same way the .gitignore file works.
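For example, a rough sketch of such an "inverse" file (the paths are hypothetical): ignore everything at the top level, then re-include only the built output.

```
# hypothetical .ebignore: deploy the compiled assets, not the source
/*
!build/
```

It uses the same pattern syntax as .gitignore, so a leading ! re-includes a path.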
Edit: I just noticed the AWS documentation is lacking. It only mentions file exclusion, but you can include files too.
Edit 2: I don't think Semaphore supports the use of .ebignore, so right now this solution isn't of any use. :(
I just had a great first experience with https://deploybot.com/. They can deploy directly to Elastic Beanstalk. It might be interesting for you.
I'm finalising my CI setup with TFS and I have one last part to overcome. Currently, every CI build is pushed to Octopus with the NuGet packages versioned as major.minor.patch.buildid, and these are used in releases in Octopus versioned as major.minor.patch-beta{buildid}. Therefore, every CI build will be tagged as a beta build for a given release, e.g. 1.3.0.194 / 1.3.0-beta194. There will no doubt be a number of iterative dev builds before one is chosen to go to the test team - let's assume the chosen build is 194 in this case. At this point I envisage the creation of a new build, 1.3.0-rc1, which uses the build 194 binaries. This is then pushed to the test environment and testing begins. There may be a number of testing cycles, so let's say the testers sign off on version 1.3.0-rc4. A new release could then be made, 1.3.0, based on the 1.3.0-rc4 binaries, which is the gold release for the product.
Firstly, is this a good idea? Some feedback would be much appreciated.
Separately, is it possible to restrict deployments to certain environments based on a tag in the version? In my example, I would never want a release marked as -beta to be deployed to a test environment - only -rc builds should be. Likewise, only tag-less builds should be deployed to production environments.
Here is the reason why you definitely shouldn't do that.
When you create version 1.3.0.194, Octopus will guarantee that the artefacts, process, and variables are given a snapshot, which means the deployment will be performed in the same way in every environment.
You can edit these things as much as you like, but 1.3.0.194 will not become a more risky deployment due to these changes, as it will be oblivious to them.
If you created version 1.3.0.194, then made changes to your deployment process or variables, these changes would leak into the version 1.3.0.194-beta and it would no longer be the same deployment that you have tested. The same would occur when you change to version 1.3.0.194-rc - more changes that you haven't actually tested would leak in.
Your deployment process, just like your code, should be versioned and tested - and that is what you get by using the same version throughout your deployment lifecycle.
Tagging deployment packages with beta/rc is a bad idea because it introduces an extra step in your delivery process. You will have many built versions; some will advance to upper environments, some won't. That's okay. You're trying to treat these like regular NuGet packages used for dependency versioning/management. It's different. Just increment your build numbers without the tags and promote the builds that get signed off.
You shouldn't restrict deployments to environment based on tags either.
We have 3 environments:
Development: Team City deploys here for Subversion commits on trunk.
Staging: User acceptance is done here, on builds that are release candidates.
Production: When UAT passed, the passing code set is deployed here.
We're using Team City and only have Continuous Integration setup with our development environment. I don't want to save artifacts for every development deployment that Team City does. I want an assigned person to be able to fire a build configuration that will deploy a certain successful development deployment to our staging server.
Then, I want each staging deployment to save artifacts. When a staging deployment passes UAT, I want to deploy that package to Production.
I'm not sure how to set this up in Team City. I'm using version 6.5.4, and I'm aware there's a "Promote..." action/trigger, but I think it depends on saved artifacts. I don't want to save development deployments each time as artifacts, but I do want the person running the staging deployment to be able to specify which successful development deployment to deploy to staging.
I'm aware there may be multiple ways to do this, is there a best practice? What is your setup and why do you recommend it?
Update:
I have one answer so far, and it's an idea we had considered internally. I'd really like to know if anyone has a somewhat automated way of deploying to a staging/production environment via Team City itself, where only people with a certain role/permission can run a deploy script to production rather than having to manually deal with any kind of artifact package. Anyone?
Update 2
I still have 1 day to award bounty, and I thought the answer below didn't answer my question, but after rereading it I see that my question wasn't what I thought it was.
Are there any ways to use Team City for some kind of automated deployment to Staging/Production environments?
I think you're actually asking two different questions here; one is about controlling access rights to TeamCity builds and another is about the logistics of artifact management.
Regarding permissions, I assume what you mean by "only people with certain role/permission can run a deploy script to production" and your response to Julien is that you probably don't want devs deploying directly to production, but you do want them to be able to see other builds in the project. This is possibly also similar to Julien's scenario where IT then takes the process "offline" from TeamCity (either that or it's just IT doing what IT do and insisting they must use a separate, entirely inefficient process because "that's just the way we do it" - don't get me started on that!)
The problem is simply that all permissions in TeamCity are applied against the project and never the build, so if you've got one project with all your builds, there's no way to apply permissions granularly to dev versus production builds. I've previously dealt with this in two ways:
Handle it socially. Everyone knows what their responsibilities are and you don't run what you're not meant to run. If you do, it's audited and traceable back to YOU. Works fine when there's maturity, a clear idea of responsibilities, and no compliance requirement that prohibits it.
Create separate projects. I don't like having to do this, but it does fix the problem. You can still use artifacts from another project, and it means you simply end up with one project containing builds that deploy to environments you're happy for all the devs to access, and another project for the sensitive environments. The downside is that if the production build fails, the very people you probably want support from won't be able to access it!
Regarding artifact management, there's no problem with retaining these in the development build, just define a clean-up policy that only keeps artifacts from the last X builds if you're worried about capacity. A lot of people want certainty they're deploying the same compiled output to every environment which means once you build it, you want to keep it around for later use.
Once you have these artefacts from your dev deployment, you can re-deploy them to your other environments through separate builds. You'll have an issue with config transforms (assuming you're using them), but have a read of this 2 part series for some ideas on how to address that (I'm yet to absorb it in detail but I believe he's on the right track).
Does that answer your question? Is there anything still missing?
We also use TeamCity as our build server, so let me explain our setup.
We have 4 environments:
Development used by Dev to verify commits in a server environment
QA for testing purposes
Staging for deployment checks and some UAT
Production
We only use TeamCity to deploy to Development (Nightly builds) and to QA (on-demand).
The Dev build uses the trunk branch and the QA build uses a different branch used for the RC.
Deployment to the Staging and Production are managed by the IT team, and are therefore not automated.
What we do instead is that we use TeamCity to produce artifacts from the QA build. The artifacts are the deployment kits sent for Staging/Production deployments.
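As a purely illustrative sketch (the path and kit name are hypothetical, not from our actual setup), the artifact paths setting of the QA build configuration can package such a kit directly:

```
dist/** => deploykit-%build.number%.zip
```

That zip is then what gets handed over for the Staging/Production deployments.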
That said, I am not sure if TeamCity would give you complete control over which build can be promoted to which environment. We basically control this on the SVN side with branches, and have different builds for those branches. You could (should) be able to manage it the same way. You can therefore ensure what is getting deployed.
I understand that your needs may be slightly different from ours, but I hope this helps you find the best setup.
I think you might want to check out something like Octopus Deploy or BuildMaster. They provide a nice structure for the deployment practices you're trying to automate. Both tools integrate with TeamCity nicely.
Basically, you'd continue to use TeamCity for CI, and you could also continue to deploy to your development environment with TeamCity too, but you'd use one of the deployment tools to promote an (existing) build to staging and production.
Edit 2014-02-05 – Update
The makers of BuildMaster have a new deployment feature – ProGet Deploy – for their NuGet server tool, ProGet. It's very similar to Octopus Deploy, though I haven't played with it myself yet, so Octopus may have a better visualization of which versions have been deployed to which environments; I still use BuildMaster because of that important feature.
Also, I'm currently using TeamCity, BuildMaster, and ProGet, and I never want to go back to not having automated builds. Currently, all of my apps are built and deployed via BuildMaster. All of my library projects are built in TeamCity and deployed to ProGet. Being able to manage my internal dependencies via the NuGet infrastructure is nice.
I have finished developing the core of a web application I have been working on. Since I was the only developer I just developed locally (lamp stack) without using version control (probably stupid but anyway..). Now that it is getting close to production ready, I have a couple other developers working with me so I set up a repository for my code.
This is my question: I still want to be able to test any changes locally before pushing to production. How do I manage this with a repository without having to maintain two versions of my code that I have to sync up manually? For one, the production code has a few differences here and there (such as database constants, etc.). I'd like to be able to change my code in my local repository, test it on my local Apache server, then check the code directly into production (is this even possible using Eclipse)?
I am using eclipse and subversion (php code). I know I asked many questions but hopefully you get the idea of what I am trying to do...and I assume its rather common. Thanks.
In addition to the excellent answers you've gotten already, I'd like to emphasize that if there are differences between your dev and production code, you're adding risk. You should be using the same, well-tested code in both locations; any difference between the environments should be expressed in configuration files. Any configuration files in source control should be samples only; your deployment script should not push new configuration files to production.
This, in combination with tagged releases and a staging environment that mimics production, should help you promote your code smoothly to the production environment.
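To make the configuration-file point concrete, here is a minimal sketch for the database constants mentioned in the question (the file and constant names are hypothetical):

```php
<?php
// config.sample.php - committed to SVN as a template.
// Each environment copies it to config.php (kept out of version control,
// e.g. via svn:ignore) and fills in its own values;
// the application code always require()s config.php.
define('DB_HOST', 'localhost');
define('DB_NAME', 'myapp');
define('DB_USER', 'myapp_user');
define('DB_PASS', 'change-me');
```

The deployment script then never overwrites config.php on the server, so the same tested code can be promoted unchanged.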
I would suggest a few things:
Use tags/branches in SVN. When the code is production ready, tag it with a unique name (see the sketch after this list).
Set up a staging area for integration testing. After a release is tagged for staging, yank it from your vcs and copy it into the staging area. This can be as simple as a different directory tree or a second install of your server.
Put constants into separate files that can be copied/merged over into the staging and deployment directories.
Test the staged version against dev to ensure everything works as it did in your dev environment. I would point staging to the production databases only when I am sure it is working and ready to be promoted. Test that it also works against prod.
Once everything works in staging, update the production copy. I would suggest you create a clean deployment directory then copy that entire deployment over to the production server after copying/merging config settings.
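A minimal sketch of the tagging step from the first suggestion above (the repository URL and tag name are hypothetical):

```
svn copy http://svn.example.com/myapp/trunk \
         http://svn.example.com/myapp/tags/release-1.0 \
         -m "Tag release 1.0 as production ready"
```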
This was my approach to dealing with Perl/CGI many years ago, and it worked pretty well. SVN handles tags/branching much better, so it should be easier to deal with. We had very few production problems once we started staging the files before pushing to prod.
It sounds like you haven't created any branches or tags, and probably have a "trunk" that isn't labeled as such. Best practices would dictate that you have a trunk for the current stable code, branches that you develop against, and tags that are actually used on the production site. There is a short description and diagram on Wikipedia.
Of course, that's just best practice. Your project sounds small enough that you could get away with splitting your code into a development/ directory and a production/ directory in your code repository. Checkin code to the development directory, and once a change is fully tested, merge it into the production directory.
Whether you do it the right way or the easy way, it's important to do something to separate your development code from your production code. As you add more developers, it will be increasingly unlikely that the development code base is stable because people are checking in code that hasn't been fully tested, isn't complete, whatever. Spending a little extra time on managing two branches of code will save you a lot of headaches later on.
In a CI environment, what exactly is considered a broken build?
There are several answers I can imagine (any combination of: it compiles, tests pass, metrics are in range, documentation exists, etc.), but I am not sure which of these are canonical.
For example, just today it happened to me that I actually checked in all code changes but forgot to commit the Visual Studio project file, thus breaking the unit tests (even though I literally triple-checked my commit, as it's a public OSS project on Google Code).
I was easily able to fix this in under a minute after my first commit, but should I consider myself a buildbreaker now?
How do you configure your CI environment: is every revision built, or only the newest revision after each complete build, or do you use time-based checking for new revisions?
Ideally, you have:
An automated script that is scheduled to run each night to build the application from source code.
Scripts to copy the binaries to a directory (or set of directories) from which another script can either deploy the application if it is running in your environment, or create deliverables for a customer.
An automated suite of tests that runs and verifies that all components pass.
An automated script that verifies that the build has been built correctly.
An automated script/monitoring system that sends out an alert if the verification script fails.
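A minimal sketch of how the first and last points might look, assuming (purely for illustration) an SVN checkout, an Ant build, and a Unix build box; all paths, commands, and the address are hypothetical:

```sh
#!/bin/sh
# hypothetical nightly-build.sh, run from cron (e.g. 0 2 * * *)
set -e
cd /opt/build/myapp
svn update                           # pull the latest source
ant clean package                    # build the application from source
if ! ant test; then                  # run the automated test suite
    mail -s "BUILD BROKEN: myapp" team@example.com < build/test-report.txt
    exit 1
fi
cp dist/*.zip /opt/artifacts/myapp/  # stage binaries for the deploy scripts
```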
When an alert is generated by the above process, then that is considered "breaking the build".
But since procedures/processes can vary from company to company, there may be alternate definitions. In some places it may be breaking the unit tests. In others it may be checking in source code that results in the code being unable to compile.
Breaking the build is committing any change that makes it impossible (or possible, but unwise) to deploy the project.
Fixing a broken build does not repair the broken build, but makes it possible to create a new non-broken build.
I configure my CI server to create a minimal build on every commit, and to create a maximal build periodically. The period depends on the number of people working on the project (more people means more commits) and on the build duration (you can run your unit test suite every time, but the 30-minute acceptance test suite only once or twice a day).
Breaking a build is preventing any user who relies on the same standard tools and set of code as your CI environment from getting a compiling and running system.
If your coworkers cannot compile the system when they're up to date, because some config is missing, the build is broken, isn't it?
If your coworkers cannot be confident that the unit tests pass, because one of them is flaky, the build is broken, isn't it?
If you have automated performance tests, and your project has to be optimized, I would go as far as to say that if your code doesn't run fast enough, you've broken the build (but that is arguable).
I would not be so strongly minded about code coverage or other metrics, for example.
Breaking the build can happen. CI is just there to make sure it doesn't happen too much on the day you're supposed to ship ;)
For us, the term "breaking the build" applies any time the test suite fails after a new commit.
So, in your instance, yes, you would have broken the build (according to our company, at least).
As long as you broke the build because of a simple human error (like forgetting to commit something) and as long as this is an exception and not the rule, I would say it's OK. As long as you take care to fix this quickly :)
On the other hand, if somebody breaks builds on a regular basis by not executing the complete build locally on his/her machine before committing, then this is a sign of an ill-disciplined team member who doesn't really care for other team members and for the development process.
My experience is that one effective way to make people more careful is to set up your CI server to send an email when (and only when) the status of the build changes, with the "culprit" on the To: list and the rest of the team on Cc:. I guess you could call it a "shame factor" ;)
A broken build is anything that doesn't pass the automated test suite.
You're a build breaker. ;) That's not really a big deal as long as you fix it fast. The whole point of a CI environment is to catch bugs, not make people afraid to commit.
My company builds the tip of each branch that is either in production or next up for production. We do this on every commit and each day at 4am. We're using Mercurial, so commit here means pushing changes to the integration repo.