Background:
We have one Jenkins job (Production) that builds a deliverable every night. We have another job (ProductionPush) that pushes out the deliverable over a proprietary protocol to production machines the next day. This is because some production machines are only available during certain hours of the day (it also gives us a chance to fix any last-minute build breaks). ProductionPush needs access to the deliverable built by the Production job (so it needs access to the same workspace). We have multiple nodes and concurrent builds (and thus unpredictable workspaces), and we prefer not to tie the jobs to a fixed node/workspace since resources are somewhat limited.
Questions:
How do I make sure both jobs share the same workspace, and ensure that ProductionPush runs at a fixed time the next day only if Production succeeds -- without fixing both jobs to run on the same node/workspace? I know the Parameterized Trigger Plugin might help with some of this, but it does not seem to have a time-delay capability, and 12 hours seems too long for a quiet period.
Is sharing the workspace a bad idea?
Answer 2: Yes, sharing the workspace is a bad idea. There is the possibility of file locks. There is the issue of the workspace being wiped out. Just don't do it...
Answer 1: What you need is to archive the artifacts of the build. This way, the artifacts for a particular build (by build number) will always be available, regardless of whether another build is running or what state the workspaces are in.
To Archive the artifacts
In your build job, under Post-build Actions, select Archive the artifacts
Specify what artifacts to archive (you can use a combination of the options below):
a) You can archive all: *.*
b) You can archive a particular file with wildcards: /path/to/file_version*.zip
c) You can ignore the intermediate directories like: **/file_version*.zip
To avoid storage problems with many artifacts, at the top of the configuration you can select Discard Old Builds, click the Advanced button, and play around with Days to keep artifacts and Max # of builds to keep with artifacts. Note that these two settings do not control how long the actual builds are kept (other settings control that).
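If your build job is a Pipeline job rather than a freestyle job, the same post-build action is available as the archiveArtifacts step. A minimal sketch, assuming a hypothetical build script name and the file_version*.zip pattern from option (c) above:

```groovy
// Jenkinsfile sketch: archive the deliverable so other jobs can fetch it later.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './build.sh'   // hypothetical build command
            }
        }
    }
    post {
        success {
            // Same wildcard as option (c); fingerprinting makes it easier to
            // trace which build an artifact came from.
            archiveArtifacts artifacts: '**/file_version*.zip', fingerprint: true
        }
    }
}
```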
To access artifacts from Jenkins
In the build history, select any previous build you want.
In addition to SCM changes and revisions data, you will now have a Build Artifacts link, under which you will find all the artifacts for that particular build.
You can also access them with Jenkins' permalinks, for example
http://JENKINS_URL/job/JOB_NAME/lastSuccessfulBuild/artifact/ and then the name of the artifact.
To access artifacts from another job
I've extensively explained how to access previous artifacts from another deploy job (in your example, ProductionPush) over here:
How to promote a specific build number from another job in Jenkins?
If your requirement is to always deploy the latest build to Production, you can skip the configuration of promotion in the above link. Just follow the steps for configuring the deploy job. Once you have your deploy job, if it is always run at the same time, just configure its Build periodically parameters. Alternatively, you can have yet another job that will trigger the deploy job based on whatever conditions you want.
In either case above, if your Default Selector is set to Latest successful build (as explained in the link above), the latest build will be pushed to Production.
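If the deploy job is a Pipeline job, the Copy Artifact plugin used in the linked answer also exposes a copyArtifacts step. A rough sketch of what ProductionPush could do, assuming the build job is named Production and a hypothetical push script:

```groovy
// Sketch of the ProductionPush job: fetch the latest successful Production
// artifacts and push them, without sharing any workspace.
pipeline {
    agent any
    stages {
        stage('Fetch artifact') {
            steps {
                // Copies the archived artifacts of the latest successful
                // Production build into this job's own workspace.
                copyArtifacts projectName: 'Production', selector: lastSuccessful()
            }
        }
        stage('Push to production') {
            steps {
                sh './push_to_production.sh file_version*.zip'   // hypothetical
            }
        }
    }
}
```

Running it at a fixed time the next day is then just a matter of a Build periodically schedule on the deploy job, as described above.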
I'm not sure archiving artifacts is really a good idea. A staging repository might be better, as it enables cross-functional teams to share artifacts across different builds when required, by tweaking the Maven settings.xml file.
You really want a deployable (EAR/WAR) as the thing that gets built, tested, and then promoted to production once confidence in the build is high.
Use a build number on your deployable (major.minor.buildnumber). This is the thing you promote to production, provided your tests can be relied upon. Don't use a hyphen to separate the minor version from the build number, as that forces Maven to perform a lexical comparison; a decimal point will force a numeric comparison, which will give you far fewer headaches.
Also, you didn't mention your target platform, but using the Maven APT/RPM plugin to push an APT/RPM package to an APT/YUM repo that's available to a production box (AFTER successful testing!) would be a good fit, in line with industry standards.
I am new to using Azure DevOps builds/pipelines. As the source code for the solutions I need to build is in TFVC, I am limited to using the Classic (i.e. UI) builds rather than YAML.
When I want to test changes to a build definition I sometimes want to run a clean build, i.e. ensure that sources and artifacts from earlier builds are removed before the new build runs, yet leave normal builds (i.e. ones triggered by changes in TFVC) incremental so as to keep builds fast.
I am used to TeamCity, which has a plethora of options for managing source and artifact retention between builds, including a simple "clean" checkbox when triggering a manual build.
ADO Builds seems very limited in this regard; if I want a clean build, it seems the only option is to change the build definition, select clean, run the build, then change the build definition again to remove the clean option.
Are there better ways to manage "ad-hoc" clean sources and artifacts in ADO Builds?
Those settings are simply on/off; they don't accept conditional run-time variables.
That being said, you might try leveraging the "Save as draft" option. It seems to create a DRAFT pipeline definition you could execute for your changes.
You could probably just flip it back to no clean before publishing. I don't really use that feature, but I am going to guess that on the back-end it uses a different temporary definition id. That would probably mean a new folder gets created under _work on the build agent. If that is the case, you probably wouldn't even need to flip the clean sources setting, since the folder will not exist on the first run. It also probably means that if this is a self-hosted agent, you will have doubled the work folder size and you might have to manage that clean-up after you are done.
If it does create a second work folder, this is probably preferable, as it means you won't break the incremental build on the build directly following your test with clean, whether you did that ad hoc or by editing the build definition.
The Build.Clean variable is deprecated; currently you can only use the Clean option to clean the local repo on the agent.
I'd suggest you submit a user voice at the website below; the product team will evaluate it carefully:
https://developercommunity.visualstudio.com/content/idea/post.html?space=21
One workaround is to add a Post Build Cleanup task at the end of your pipeline; when you want to run builds incrementally, you can disable this task.
I'm having a hard time figuring out how to correctly deploy to different environments with TeamCity (in terms of cross-BuildConfiguration dependencies) and hope to get some input on how to configure my SubProjects/BuildConfigurations properly. Let's start with a concrete example: I made this test "TeamcityConfigurationTests" to better learn how TeamCity handles dependencies, and the current state shows the result I am looking for:
I have 3 subProjects, Dev, Test and Prod, with all associated tasks for those "environments" as separate build configurations within each subProject. This is to more clearly visualize what is going on and, if anything breaks, to be able to see immediately what is broken (separate Build, UnitTest and DeployToDev build configurations, rather than 3 different steps in one single build configuration).
Ideally, I only want to build my application once in the Dev.Build step, and let the Dev.UnitTest and Dev.DeployToDev steps grab that artifact to run tests and deploy. That part I have working, by using snapshot and artifact dependencies. But I am having trouble getting the correct artifact when I want to deploy from Dev -> Test or Test -> Prod.
My issue is correctly referencing the latest successfully DEV-deployed artifact when running Test.DeployToTest - and likewise the latest successfully TEST-deployed artifact when running Prod.DeployToProd. (Essentially I want to promote the artifact to the next environment.)
Now, my issue is: if Test.DeployToTest has a snapshot dependency on Dev.DeployToDev and an artifact dependency on Dev.Build, and the VCS source has changed since Deploy to Dev ran, it triggers running all the DEV steps again. That is not the worst part; the same happens when I run Prod.DeployToProd if the VCS source changed since the initial build on Dev (because of all the snapshot dependencies). Meaning that rather than promoting Test -> Prod, I build and deploy whatever is currently on VCS to Dev, Test AND Prod.
How am I supposed to set this up correctly?
The only other option I am aware of is letting Dev.DeployToDev also publish the same artifact, and only having a (Latest Successful) artifact dependency in Test.DeployToTest. I would also have to publish the artifact again in Test.DeployToTest, so that Prod.DeployToProd can have only a (Latest Successful) artifact dependency on Test.DeployToTest. (This would get rid of the snapshot dependencies causing previous environments to run build/deploy again in case of VCS changes.) But then I am publishing the artifact 3 times, rather than just once when the application is originally built in DEV, which I would like to avoid. Also, I have cases where no artifact is needed for deploying to Test and Prod, so there is no artifact to depend on (essentially I only need the build number from the "dependent" environment I want to promote from).
I hope for some input. Thank you
Regards
Frederik
For anyone wondering, I opened a JetBrains support ticket and got the following response:
Basically, there are two options to resolve your case:

Option 1: use the "Promote" action from the build's Actions top-right menu (or change the type of the Deploy* configurations to deployment and use the action from the corresponding block on the build results page). This is the preferred way: before deploying, you select the build to deploy and "promote" it to the next environment. There is also an experimental hidden feature to hide the "Run" button: add the "teamcity.ui.runButton.caption" configuration parameter to the build configurations with an empty value.

Option 2: do not use a snapshot dependency; use only an artifact dependency on the latest successful build. However, when you run the build you cannot be sure that the last successful build you see is the one that will be deployed: while the build is standing in the queue, another Dev.DeployToDev can finish and then be deployed as the last successful one.
We went with option 1
I am working on a project with multiple people: a web application whose assets need to be built with webpack, uglified, and concatenated into a few files, e.g. app.min.js, style.min.css, etc. As a result of this, in an effort to prevent merge conflicts, we recently added the build folder to .gitignore, under the assumption that we would be able to build during deployment.
When pushing to the Master branch, we automatically "deploy" through Semaphore CI (similar to Travis) which runs composer install, npm install, and finally "npm run build" which triggers the webpack build. This is all built and then tested on the CI side of things, and then Semaphore automatically deploys to Amazon's Elastic Beanstalk where our application is hosted.
The problem with this is that it seems Semaphore doesn't upload the build it has just tested, but rather the Master branch itself, which has no built JS or CSS. I'm wondering if there's a way to push these built files to deployment as well, or if running the entire build process AGAIN on Elastic Beanstalk is the only route. It seems unnecessary to have to do that process essentially 3 times: locally, on CI, and then on deployment. Every time a step like this is needed on EB, the actual re-instantiation time gets longer, which I'd like to keep as short as possible.
Obviously, if building it a 3rd time on EB is the only way to go about this, then I'll have to; I'm just wondering if there are better solutions for this whole workflow.
I haven't worked with Semaphore CI, but you might be able to use an .ebignore file.
If you create one, the CLI will use that instead of your .gitignore file.
I find in some deployment situations you want the inverse of your .gitignore (all compiled, no src). It essentially lets you pick the files from your project directory that you want to deploy, in the same way as the .gitignore file.
Edit: I just noticed the documentation on AWS is lacking. It only mentions file exclusion, but you can include files too.
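As a rough illustration of that (the folder name here is a placeholder, not from the question), an .ebignore that inverts the usual .gitignore approach could look like this:

```
# .ebignore sketch -- ignore everything by default...
*
# ...then re-include the built output (placeholder path; adjust to your project).
!build/
!build/**
```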
Edit 2: I don't think Semaphore supports the use of .ebignore, so right now this solution isn't of any use. :(
I just had a great first experience with https://deploybot.com/. They can deploy directly to Elastic Beanstalk. It might be interesting for you.
I'm trying to split up a few Jenkins jobs using the Build Flow plugin so that instead of three monolithic jobs, we have three "starting points" that then use the DSL to trigger downstream jobs. I chose Build Flow over the Build Pipeline plugin because with the latter it seemed a lot harder to share jobs between different pipelines (i.e., sharing the workspace of the multiple starting jobs with a single compile job).
Previously, I had three jobs set up: Project-PR, Project-DEV, and Project-PROD.
Project-PR would build whenever a pull request happened in GitHub, and would just run a smaller subset of our unit tests, so that we could get quick verification that the PR is okay to merge.
Project-DEV would build whenever a feature branch was merged in GitHub into the main development branch, as well as having the ability to be manually triggered and given a different branch to pull. It would run the full suite of unit tests -- basically a sanity check that everything is still good. Then it would compile and minify, push to a QA environment for testing, and then run the full suite of integration tests against that QA environment. This step was configured as a parameterized build, with the parameter being the name of the branch to pull, test, and push. It would push to and set up a QA environment specific to that branch, so that we could QA multiple features without having to merge to development (i.e., feature-one.qa.example.com, feature-two.qa.example.com).
Project-PROD would only ever be manually triggered, and would do the full unit and integration test suite, compile and minify the front-end code ( Less, JS, and CSS ), and push the built code into a special "release branch" in GitHub that can then be deployed -- we haven't quite reached the point of Jenkins being in charge of deployment.
Now, what I wanted to set up was to split the subtasks into their own jobs, so that it'd be easy to set up new jobs without having to copy and paste all the build steps (or copying the job and changing all the things that need to be unique). This would let us do things like create a copy of Project-DEV but switch out the last job for one that deploys to a staging environment set up in the cloud. Or easily create a job that could report test results to a third-party source, i.e., copy the results to a shared network folder or something. Or any number of things. The goal is basically to use these subtask jobs as building blocks to let us build more complicated jobs, while also making it easier to update how one portion of the build works (for example, maybe we switch to a different technology for compiling, which might change how Jenkins would compile the code).
For example, the Project-PR would be split into the following:
Project-PULL (BuildFlow) -> Project-SetupBuildEnv (Normal Job) -> Project-PartialUnitTests (Normal Job)
The SetupBuildEnv job would just pull down any NPM or Composer requirements and set up the directories required for testing and building. PartialUnitTests would then run and report its results back up to the Build Flow job.
The Project-DEV could be split up like so:
Project-DEV -> Project-SetupBuildEnv -> Project-FullUnitTests -> Project-Compile -> Project-Minify -> Project-DeployQA -> Project-FullIntegrationTests
This way, the parts of the build process that are common (in this case, Project-SetupBuildEnv) can easily be shared between jobs, reducing duplication and making it easier to update a step in the build process without having to remember EVERY job that uses that step.
Right now, I'm using the Shared Workspace plugin so that all the steps use the same workspace. However, I'm running into an issue with that: it's not actually using one workspace. What's happening is that the Build Flow job will get a directory (e.g. /sharedspace/shared_one) and download the code from GitHub into it. Then it will trigger the DSL, which starts up the SetupBuildEnv job. But instead of working inside the same directory, that job gets a directory with a name like /sharedspace/shared_one#2 and runs the build setup task in there. Then when it goes to do the third step (unit testing), it fails, because now it's got a third directory (/sharedspace/shared_one#3), but that directory didn't have the setup run, so the required Node and Composer modules are missing. What's weird is that it looks like the Shared Workspace plugin is copying the first shared workspace to another directory, incrementing a counter (the #N part of the directory name), and giving that to the other jobs to work in.
So, question time:
is there a way to fix the Shared Workspace plugin so that it's actually only using one directory for each job?
if not, is it possible to have the Clone Workspace plugin take an argument, so I can specify what archived workspace to use instead of using the dropdown?
another possibility: would using the Shared Workspace plugin, but with the "Local subdirectory for repo (optional)" option in the advanced git job options to specify the directory to use, work?
failing all that, is there some other way to set up a build pipeline that can share jobs with other pipelines that I've missed out on?
In my experience, even if you do get this working, this might not be a scalable way to go longer term. We've found the Shared Workspace plugin to be entirely a bad idea for long/complex builds (for reasons similar to yours, but also because scaling across dozens of slaves suddenly becomes hard). Arguably the idea is slightly against the spirit of modern scalable CI.
I'd instead delegate more to your build tools, be they Maven/Gradle, Ant, even Grunt, whatever. If you want to keep these builds truly modular but can't afford to rebuild at each step (we decided full independence was worth wasting a few minutes per build), then perhaps look at creating useful artefacts at key stages - in your case minified asset TARs, library JARs, maybe webjars, or whatever - and deploy them to a (Maven?) repo.
Later build steps in your pipeline can quickly, easily, and repeatably pull the latest (or named version) assets from this centralised repo, and continue with the build process.
An alternative (with similarities) is to build one or more assets, but only promote them after increasing numbers of tests are run, which can be done in separate builds coordinated by your build flow, using the Promoted Builds plugin etc.
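To make the "publish artefacts at key stages" idea concrete, here is a minimal Gradle sketch (Gradle build scripts are Groovy); the coordinates, repository URL, and file path are all placeholders, not anything from the question:

```groovy
// build.gradle sketch: publish a TAR of minified assets to an internal Maven
// repository so later pipeline steps can pull it instead of sharing a workspace.
plugins {
    id 'maven-publish'
}

publishing {
    publications {
        frontendAssets(MavenPublication) {
            groupId = 'com.example.project'        // placeholder coordinates
            artifactId = 'frontend-assets'
            // BUILD_NUMBER is set by Jenkins; fall back to 'local' elsewhere.
            version = "1.0.${System.getenv('BUILD_NUMBER') ?: 'local'}"
            // The archive produced by your minify step (placeholder path).
            artifact file('build/dist/frontend-assets.tar.gz')
        }
    }
    repositories {
        maven {
            url = 'https://repo.example.com/releases'   // placeholder repo URL
        }
    }
}
```

A later build step can then resolve com.example.project:frontend-assets:<version> as an ordinary dependency and carry on from there, which keeps each stage independent of any particular workspace.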
In a CI environment, what exactly is considered a broken build?
There are several answers I can imagine (any combination of: it compiles, tests pass, metrics are in range, documentation exists, etc.), but I am not sure which of these is canonical.
For example, just today it happened to me that I checked in all my code changes but forgot to commit the Visual Studio project file, thus breaking the unit tests (even though I literally triple-checked my commit, as it's a public OSS project on Google Code).
I was easily able to fix this in under a minute after my first commit, but should I consider myself a buildbreaker now?
How do you configure your CI environment: is every revision built, or only the newest revision after each complete build, or do you use time-based checking for new revisions?
Ideally, you have
An automated script that is scheduled to run each night to build the application from source code.
Scripts to copy the binaries to a directory/set of directories from which another script can be run to deploy the application if it is running in your environment, or used to create deliverables for a customer.
Automated suite of tests that run and verify all components pass all tests.
Automated script that verifies that the build has been built correctly.
Automated script/monitoring system that sends out an alert if the verification script fails.
When an alert is generated by the above process, that is considered "breaking the build".
But since procedures/processes can vary from company to company, there may be alternate definitions. In some places it may be breaking the unit tests. In others it may be checking in source code that leaves the code unable to compile.
Breaking the build is committing any changes that make it impossible (or possible but unwise) to deploy the project.
Fixing a broken build does not repair the broken build, but makes it possible to create a new non-broken build.
I configure my CI server to create a minimal build on every commit, and a maximal build periodically. The period depends on the number of people working on the project (more people means more commits) and the build duration (you can run your unit test suite every time, but the 30-minute acceptance test suite only once or twice a day).
Breaking a build is preventing any user who relies on the same standard tools and set of code as your CI environment from getting a compiling and running system.
If your coworkers cannot compile the system when they're up to date, because some config is missing, the build is broken, isn't it?
If your coworkers cannot be confident that the unit tests pass, because one of them is flaky, the build is broken, isn't it?
If you have automated performance tests and your project has to be optimized, I would go as far as to say that if your code doesn't run fast enough, you've broken the build (but that is arguable).
I would not be so strict about code coverage or other metrics, for example.
Breaking the build can happen. CI is just there to make sure it doesn't happen too much on the day you're supposed to ship ;)
For us, we use the term "breaking the build" for any time the test suite fails after a new commit.
So, in your instance, yes, you would have broken the build (according to our company, at least).
As long as you broke the build because of a simple human error (like forgetting to commit something) and as long as this is an exception and not the rule, I would say it's OK. As long as you take care to fix this quickly :)
On the other hand, if somebody breaks builds on a regular basis by not executing the complete build locally on his/her machine before committing, then this is a sign of an ill-disciplined team member who doesn't really care for other team members and for the development process.
My experience is that one effective way to make people more careful is to set up your CI server to send an email when (and only when) the status of the build changes, with the "culprit" on the To: list and the rest of the team on Cc:. I guess you could call it a "shame factor" ;)
A broken build is anything that doesn't pass the automated test suite.
You're a build breaker. ;) That's not really a big deal as long as you fix it fast. The whole point of a CI environment is to catch bugs, not make people afraid to commit.
My company builds the tip of each branch that is either in production or next up for production. We do this on every commit and each day at 4am. We're using Mercurial, so commit here means pushing changes to the integration repo.