Remove build from PR in Bitbucket using API - merge

I have a project with two submodules: one for the source code (A) and one containing end-to-end tests (B). The problem is that the source code build succeeds, but the e2e tests fail, and that blocks me from merging. Is there any way for Bitbucket to take the result only from submodule A and skip the result from submodule B? Is there any way to specify which result is used for validation?

I believe you can disable the e2e tests in your plan spec file, or disable the test task from the UI, under the Tasks tab of the job's stage. But it would be good to investigate why your tests are failing.
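For illustration, assuming the plan is defined with YAML Bamboo Specs, removing (or commenting out) the e2e task could look roughly like this; the plan keys, job names, and scripts are all placeholders:

    version: 2
    plan:
      project-key: PROJ        # placeholder project key
      key: BUILD
      name: Source build
    stages:
      - Default stage:
          - Build job
    Build job:
      tasks:
        - script:
            - ./build.sh       # builds the source code (submodule A)
        # e2e task removed so the plan result reported to Bitbucket
        # reflects only the source build:
        # - script:
        #     - ./run-e2e.sh   # end-to-end tests (submodule B)

With the task gone, the plan's build status (which Bitbucket checks on the PR) is determined by the source build alone.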

Related

Automatically Tagging a PR Build in Azure DevOps

I have branch validation in the form of a PR Build, which means I have duplicated my original build and removed some steps (such as pushing to my docker registry).
I would prefer to simply be able to automatically add a tag / some kind of identifier to a PR build and exclude the step on the original build using custom conditions.
Does anyone know if this is possible, and if so how to achieve it? I'd really rather not duplicate each and every build.
If I understand your question correctly, you would like to run a build step based on a custom condition. In this case, the custom condition is whether the build is a PR build or not.
You can check the pre-defined build variables available in Azure DevOps here, and you can see that there is a Build.Reason variable.
I am listing a few of its values here.
Manual: A user manually queued the build.
IndividualCI: Continuous integration (CI) triggered by a Git push or a TFVC check-in.
PullRequest: The build was triggered by a Git branch policy that requires a build.
You can specify the condition in the custom condition settings of your build step like this:
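For example, the following skips a step on PR builds. This is a YAML sketch (the Docker push task is an assumption based on the question above); the same condition string can be pasted into the classic editor's custom-condition field:

    steps:
    - task: Docker@2
      displayName: Push image to registry
      # Runs only when the build was NOT triggered by a pull request,
      # so PR builds skip the registry push without a duplicated pipeline.
      condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))

Conversely, eq(variables['Build.Reason'], 'PullRequest') can gate a tagging step so it runs only on PR builds.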
More examples are available in the docs.

Prevent a Jenkins build caused by git push in same job

I have written a Jenkinsfile for use in a GitHub Organization object in Jenkins. Part of the Jenkinsfile is error correcting, and often makes changes to the project. At the end of the pipeline, Jenkins commits the changes to our git repository.
Now that I have all of this working, I have realized that upon pushing the corrections to the repo, the same job that just ran and corrected the errors will run again, since it is triggered by webhooks. It wouldn't be an infinite loop because there would be no errors to correct after running the first time, but it would run one more time than it needs to.
The fact that the changes made by the job end up triggering the job is problematic because the pipeline includes a deploy step. Redeploying the same code twice in a row feels like the result of a poorly written pipeline. That said, short of searching the last commit message for keywords, I cannot figure out a good way to make commits from the job not trigger the job. Is there a less hacky way to solve this problem?
edit: I am aware of the ci-skip plugin, but I can't find documentation for making it work in a declarative pipeline. I would prefer an approach that lets me do something with the default scm checkout step at the start of the build.
edit 2: I ended up just running a grep on the last commit message every time the pipeline starts, to see if the message contains the text [ci skip]. It's dirty, but it works if it's placed in a setup stage of the pipeline.
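A minimal declarative-pipeline sketch of that grep-the-last-commit approach (the stage name and the [ci skip] marker are just conventions):

    pipeline {
        agent any
        stages {
            stage('Setup') {
                steps {
                    script {
                        // Read the message of the commit that triggered this build.
                        def msg = sh(script: 'git log -1 --pretty=%B', returnStdout: true).trim()
                        if (msg.contains('[ci skip]')) {
                            // Mark the build NOT_BUILT and stop before any real work runs.
                            currentBuild.result = 'NOT_BUILT'
                            error('Last commit is tagged [ci skip]; aborting.')
                        }
                    }
                }
            }
            // ... the real correction, commit, and deploy stages follow ...
        }
    }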
In the configuration to check out a git repo, you can specify that certain commit messages should be ignored (an advanced checkout behavior), rather than doing it in code. However, I'm not sure if a GitHub org job can do the same thing; a pipeline sketch of this behavior follows the options below.
Another option would be to ignore certain directories, if the changes are confined to directories that aren't normally changed otherwise. Again, I'm not sure if GitHub org jobs support this.
Don't use webhook triggers, possibly, but this isn't a preferred option.
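If your job type does let you configure the checkout yourself, the "ignore certain commit messages" behavior can also be expressed in a pipeline checkout step. A sketch (the repo URL and branch are placeholders; MessageExclusion is the Git plugin's "polling ignores commits with certain messages" behavior):

    checkout([
        $class: 'GitSCM',
        branches: [[name: '*/master']],
        extensions: [
            // Ignore commits whose message matches this regex when polling.
            [$class: 'MessageExclusion', excludedMessage: '.*\\[ci skip\\].*']
        ],
        userRemoteConfigs: [[url: 'https://github.com/example/repo.git']]
    ])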

How to trigger E2E tests when repositories are changed or pull requests added?

The only similar question I found is the following: Trigger travis ci builds if another git repository updates
Keep in mind that our E2E tests are full stack: we have a live server running, and our tests are run against the UI which hits the server. Nothing is stubbed or mocked.
Now, I can trigger a Travis rebuild that way; however, a problem arises when there is an interdependency between a branch in one repository and a branch in another.
So let's say I have 3 repositories: backend, frontend and e2etests. If I create a new branch frontend/foo which also requires backend/foo, there is no way the e2e tests will ever pass, because they will run once with frontend/foo and backend/master and another time with frontend/master and backend/foo.
Did anyone face this problem previously? How did you deal with it?
After a lot of digging it was clear that there is no off-the-shelf solution and everyone has built their own.
What worked for us is a git pre-push hook plus environment variables pointing to our repositories. When you push, the hook asks if you want to use a different branch for the UI project (if you are on the UI project, it asks about the server project instead), and by default it uses the current branch of the project you are on. So it's quite fast on push; usually you'll just press ENTER one additional time. The two branches are effectively the required dependencies, and a Travis build is triggered via an API endpoint.
Works fine so far.
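For illustration, a stripped-down version of that hook might look like the following; the repo slug, variable names, and token handling are placeholders, and the API call is the standard Travis v3 request-trigger endpoint:

    #!/bin/sh
    # .git/hooks/pre-push -- sketch of the flow described above.

    current_branch=$(git rev-parse --abbrev-ref HEAD)

    # Ask which branch of the other project to pair with; default to the
    # branch we are on, so a plain ENTER keeps both projects in sync.
    printf "Backend branch to test against [%s]: " "$current_branch"
    read -r backend_branch </dev/tty
    backend_branch=${backend_branch:-$current_branch}

    # Trigger a build of the e2e repo via the Travis v3 API, passing both
    # branch names through to the build as environment variables.
    curl -s -X POST \
      -H "Content-Type: application/json" \
      -H "Travis-API-Version: 3" \
      -H "Authorization: token $TRAVIS_TOKEN" \
      -d "{\"request\": {\"branch\": \"master\", \"config\": {\"env\": {\"FRONTEND_BRANCH\": \"$current_branch\", \"BACKEND_BRANCH\": \"$backend_branch\"}}}}" \
      "https://api.travis-ci.com/repo/example-org%2Fe2etests/requests"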

Build Flow vs Build Pipeline

I'm trying to split up a few Jenkins jobs using the Build Flow plugin so that instead of three monolithic jobs, we have three "starting points" that then use the DSL to trigger downstream jobs. I chose Build Flow over the Build Pipeline plugin because it seemed like it was a lot harder to share jobs between different pipelines ( ie, sharing the workspace of the multiple starting jobs with a single compile job ).
Previously, I had three jobs set up: Project-PR, Project-DEV, and Project-PROD.
Project-PR would build whenever a pull request happened in GitHub, and would just run a smaller subset of our unit tests, so that we could get quick verification that the PR is okay to merge.
Project-DEV would build whenever a feature branch was merged in GitHub into the main development branch, as well as having the ability to be manually triggered and given a different branch to pull. It would run the full suite of unit tests -- basically a sanity check that everything is still good. Then it would compile and minify, push to a QA environment for testing, and then run the full suite of integration tests against that QA environment. This step was configured as a parametrized build, with the parameter being the name of the branch to pull, test, and push. It would push to and set up a QA environment specific to that branch, so that we could QA multiple features without having to merge to development ( ie, feature-one.qa.example.com, feature-two.qa.example.com ).
Project-PROD would only ever be manually triggered, and would do the full unit and integration test suite, compile and minify the front-end code ( Less, JS, and CSS ), and push the built code into a special "release branch" in GitHub that can then be deployed -- we haven't quite reached the point of Jenkins being in charge of deployment.
Now, what I wanted to set up was to split the subtasks into their own jobs, so that it'd be easy to set up new jobs without having to copy and paste all the build steps ( or copying the job and changing all the things that need to be unique ). This would let us do things like create a copy of the Project-DEV, but switch out the last job for one that deploys to a staging environment set up in the cloud. Or easily create a job that could report test results to a third party source, ie copy the results to a shared network folder or something. Or any number of things. The goal is basically to use these subtask jobs as building blocks to let us build more complicated jobs, while also making it easier to update how one portion of the build works ( for example, maybe we switch to a different technology for compiling, which might change how Jenkins would compile the code ).
For example, the Project-PR would be split into the following:
Project-PULL (Build Flow) -> Project-SetupBuildEnv (Normal Job) -> Project-PartialUnitTests (Normal Job)
The SetupBuildEnv job would just pull down any NPM or Composer requirements, and set up the directories required for testing and building. PartialUnitTests then runs, and reports its results back up to the Build Flow job.
The Project-DEV could be split up like so:
Project-DEV -> Project-SetupBuildEnv -> Project-FullUnitTests -> Project-Compile -> Project-Minify -> Project-DeployQA -> Project-FullIntegrationTests
This way, the parts of the build process that are shared ( in this case, Project-SetupBuildEnv ) can be easily shared between jobs, reducing duplication, and making it easier to update a step in the build process without having to remember EVERY job that uses that step.
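For reference, the Build Flow DSL for a chain like Project-DEV is just a sequence of build() calls; the job names match the diagram above, and the parameter pass-through is an illustration:

    // Build Flow DSL for the Project-DEV flow
    build('Project-SetupBuildEnv')
    build('Project-FullUnitTests')
    build('Project-Compile')
    build('Project-Minify')
    // Hand the branch parameter down to the QA deploy job
    build('Project-DeployQA', BRANCH: params['BRANCH'])
    build('Project-FullIntegrationTests')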
Right now, I'm using the Shared Workspace plugin so that all the steps use the same workspace. However, I'm running into an issue with that: it's not actually using one workspace. What's happening is that the Build Flow job will get a directory ( eg: /sharedspace/shared_one ), and download the code from GitHub into there. Then it will trigger the DSL, which starts up the 'SetupBuildEnv' job. But instead of working inside the same directory, it will get a directory with a name like "/sharedspace/shared_one#2", and run the build setup task in there. Then when it goes to do the third step ( unit testing ), it fails, because now it's got a third directory ( /sharedspace/shared_one#3 ), but that directory didn't have the setup run, so the required node and composer modules are missing. What's weird is that it looks like the Shared Workspace plugin is copying the first shared workspace to another directory and incrementing a counter ( the #N part of the directory name ) and giving that to the other jobs to work in.
So, question time:
is there a way to fix the Shared Workspace plugin so that it's actually only using one directory for each job?
if not, is it possible to have the Clone Workspace plugin take an argument, so I can specify what archived workspace to use instead of using the dropdown?
another possibility: would using the Shared Workspace plugin, but with the "Local subdirectory for repo (optional)" option in the advanced git job options, let me specify the directory to use?
failing all that, is there some other way to set up a build pipeline that can share jobs with other pipelines that I've missed out on?
In my experience, even if you do get this working, it might not be a scalable way to go longer term. We've found the Shared Workspace plugin to be entirely a bad idea for long / complex builds (for similar reasons to yours -- but also: scaling across dozens of slaves suddenly becomes hard). Arguably the idea is slightly against the spirit of modern scalable CI.
I'd instead delegate more to your build tools, be they Maven / Gradle, Ant, even Grunt, whatever. If you want to keep these builds truly modular, but can't afford to rebuild at each step (we decided full independence was worth wasting a few minutes per build) then perhaps look at creating useful artefacts at key stages - in your case, minified assets TARs, library JARs, or maybe webjars, or whatever, and deploy them to a (Maven?) repo.
Later build steps in your pipeline can quickly, easily, and repeatably pull the latest (or named version) assets from this centralised repo, and continue with the build process.
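As a sketch of that idea (the coordinates, repo URL, and file names are placeholders): after the minify step, deploy the built assets to the repo, and let the downstream job fetch exactly that version instead of sharing a workspace:

    # Publish the minified-assets bundle after the Minify step
    mvn deploy:deploy-file \
      -Durl=https://repo.example.com/releases \
      -DrepositoryId=internal \
      -DgroupId=com.example.project \
      -DartifactId=minified-assets \
      -Dversion=$BUILD_NUMBER \
      -Dpackaging=tar.gz \
      -Dfile=build/minified-assets.tar.gz

    # A downstream job pulls that exact version back down
    mvn dependency:get \
      -DremoteRepositories=https://repo.example.com/releases \
      -Dartifact=com.example.project:minified-assets:$BUILD_NUMBER:tar.gz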
An alternative (with similarities) is to build one or more assets, but only promote them after increasing numbers of tests are run, which can be done in separate builds coordinated by your build flow, using the Promoted Builds plugin etc.

TeamCity constantly rebuilding a pull request even though the last build failed and there have been no further commits

I have TeamCity set up to build Github pull requests as per these instructions: http://blog.jetbrains.com/teamcity/2013/02/automatically-building-pull-requests-from-github-with-teamcity/
I have added a VCS build trigger so that TeamCity polls Github looking for changes. This has no special settings enabled.
My build involves a shell script to set up dependencies and an Ant script to run PHPUnit. Right now, the Ant script fails (tests don't pass) with exit code 1. The build shows as a fail, and that should be that. However, every time the VCS build trigger looks for changes, it seems to find some, even though there have been no more commits. It then runs yet another build of the same merge commit and keeps repeating the build endlessly.
Why is it constantly thinking that there are changes when there are not?