How do I manually trigger/run scripts from .gitlab-ci.yml to test them?
I want to test whether the scripts I added to .gitlab-ci.yml work correctly. Instead of waiting for a particular piece of code to be pushed and enter GitLab's CI/CD pipeline, I want to run them manually from GitLab rather than from a local machine.
In other words, how do I test .gitlab-ci.yml?
On the Pipelines screen there is a green button called Run pipeline, where you can pick the branch you want to run against.
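If you also want a job in the config that can only be started by hand, a minimal sketch (the job name and script line are placeholders, not from the question) could look like this:

# Hypothetical job in .gitlab-ci.yml
test-my-scripts:
  stage: test
  script:
    - ./run-my-scripts.sh
  when: manual   # the job waits until you press its play button in the pipeline view

A job marked when: manual shows up in the pipeline with a play button, so you can exercise it without pushing new code.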
I have a continuously triggered Azure DevOps release definition that deploys a compiled Angular app to a web server and also runs Cypress e2e tests. The Cypress tests must run against the source code, so that means I need an artifact that is able to reference the same commit that was used to create the compiled app.
I created a GitHub artifact that gets the source code, but I can't figure out how to automatically change the branch/commit to whatever was used for the compiled app (it could be any branch and the names are not known ahead of time). Azure forces me to enter a hard-coded branch name and it does not accept wildcards or variables.
If I could simply use the variable $(Release.Artifacts.{alias}.SourceBranchName) for the default branch, I think I'd achieve my goal. Since Azure doesn't allow this, is there an alternative approach that accomplishes the same thing?
Note 1: The "Default version" dropdown has an option "Specify at the time of release creation", but that is intended for manual releases and can't be used for triggered ones, so no luck there.
Note 2: I looked into publishing the source code as an artifact, but it currently has almost 70,000 files and it adds more than an hour to the build step, so that also is not an option.
When you use Release Pipeline artifacts, you are not able to set a pipeline variable in the Default branch field. This field only supports hard-coded values.
is there an alternative approach that accomplishes the same thing?
The variable $(Release.Artifacts.{alias}.SourceBranchName) can, however, be used in a Release Pipeline agent job.
Workaround:
You can remove the GitHub artifact and instead add a Command Line task, PowerShell task, or Bash task that runs a git command to clone the target repo.
For example:
git clone -b $(Release.Artifacts.{alias}.SourceBranchName) GithubRepoURL
PowerShell sample:
In this case, the script checks out the source code from the same branch that the build artifact was produced from.
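A minimal sketch of such an inline PowerShell script (the artifact alias and repo URL are placeholders, not the actual values):

# "_YourAlias" stands in for the artifact alias of the build
$branch = "$(Release.Artifacts._YourAlias.SourceBranchName)"
# Clone only that branch from the target repo (URL is a placeholder)
git clone --branch $branch https://github.com/your-org/your-repo.git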
I have four environments that I deploy to.
I also have four different code branches that we use to deploy code from.
We constantly switch the branches we use to deploy on these environments.
One time I want to build and deploy a daily branch on my test environment.
Later I want to build and deploy an enhancements branch on the same test environment.
Next I want to build and deploy the daily branch on my test2 environment.
I think you get the picture.
We are currently using a manual process to pull from the branch we want deployed, then zip it up and push it to AWS code deploy.
Using Azure DevOps pipelines and releases, what is the easiest method to let me switch between different branches on different environments?
I currently have a successful setup in Azure DevOps that performs a Gradle build, creates the artifact and then lets me push it over to AWS CodeDeploy on one of my environments. I just can't seem to figure out a way to easily switch the branch without creating tons of Azure pipelines and releases.
Thanks all!
When you manually trigger a build pipeline by clicking Queue or Run pipeline, a window is shown that allows you to pick the branch to build.
If you want to automatically deploy different branches to different environments, you can push the build artifacts over to AWS CodeDeploy in a release pipeline and set branch filters. Please refer to the steps below:
1. Set a branch filter in the build pipeline so that only the selected branches are built (a sketch of the equivalent YAML trigger follows these steps). Check the documentation on triggers for more information.
2. Create a release pipeline to push the build artifacts over to AWS CodeDeploy, and set artifact filters so that only artifacts built from the specified branch can be deployed to that stage.
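If the build pipeline is defined in YAML, the branch filter from step 1 would look roughly like this (the branch names are taken from the question and will differ in your setup):

# Hypothetical trigger block in azure-pipelines.yml
trigger:
  branches:
    include:
    - daily
    - enhancements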
You could use a queue-time variable to specify the branch name you would like to use in your build pipeline. You would need to:
Edit your build pipeline and create the variable on the "Variables" tab. Make sure to tick the "Settable at queue time" check box.
[screenshot: variable creation]
Update the source settings of your build pipeline to reference the new variable under the "Default branch" option. It would look something like this:
[screenshot: pipeline source]
Run your pipeline. Before finally clicking Run, you will be able to specify the desired branch:
[screenshot: set variable value]
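If you ever switch to a YAML pipeline, a hypothetical equivalent uses a runtime parameter instead of a queue-time variable (all names here are placeholders):

# azure-pipelines.yml sketch: choose the branch when queuing the run
parameters:
- name: branchName
  displayName: Branch to build
  type: string
  default: main

steps:
- checkout: self
- script: |
    git fetch origin ${{ parameters.branchName }}
    git checkout ${{ parameters.branchName }}
  displayName: Switch to the requested branch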
Hope this works.
I have a problem when building a C++ project in Azure Pipelines: access to some DLL files is denied.
So I need to run a batch script that stops the services using these DLLs.
I tried running my script as a pre-build event in Visual Studio, but it executes after the Initialize Job task, so that doesn't work.
Is there any way to run a script in Initialize Job?
Is there any way to run a script in Initialize Job?
I am afraid there is no way to run a script in Initialize Job at this moment. Prepare job/Initialize job are predefined steps built into the pipeline; we cannot add a custom script in or before those jobs.
So, to resolve this issue, we have to find the cause of the error and address it.
Generally, this error will most likely appear if your build and release agent goes offline or a build is interrupted due to an issue on the machine itself, leaving specific files that were created mid-flight within the Azure DevOps directory. When Azure DevOps/TFS then re-attempts a new build and tries to write to or recreate files that already exist, it fails, and the above error is displayed.
The best resolution is to log in to the agent machine manually, navigate to the affected directory/file (in this example, C:\VSTS\_work\xxx\xx\.tmp) and delete the file/folder in question. Removing the offending items will effectively "clean slate" the next Build definition execution, which should then complete without issue.
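For example, a hypothetical cleanup run from a PowerShell prompt on the agent (the path mirrors the placeholder above):

# Delete the leftover temp folder so the next build starts from a clean slate
Remove-Item -Recurse -Force 'C:\VSTS\_work\xxx\xx\.tmp'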
Hope this helps.
I had to solve this exact same problem. The solution is not ideal, but it works.
I created two pipelines. The first pipeline does any required pre-build steps, like stopping services. The second pipeline is the actual build pipeline, and it gets triggered when the first pipeline finishes. (See the build triggers section in pipeline #2.)
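If both pipelines are YAML-based, the trigger in pipeline #2 might be sketched like this (the pipeline names are assumptions, not the actual setup):

# Pipeline #2 (the real build), run automatically after pipeline #1 finishes
resources:
  pipelines:
  - pipeline: prebuild                # local alias for the upstream pipeline
    source: stop-services-pipeline    # name of pipeline #1 (placeholder)
    trigger: true                     # fire this pipeline when pipeline #1 completes

steps:
- script: echo "actual build steps go here"   # placeholder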
My team has multiple Concourse pipelines and as we refactor tasks, we've realized the need to test our actual pipelines.
We already test our tasks by using environment variables that let the task scripts run locally, but the pipeline YAML is another matter.
What is the best way to accomplish testing of the pipeline itself?
You can use the Concourse Pipeline Resource to monitor the git repository where you keep your pipeline config. Whenever the pipeline resource detects a change, it will automatically run a fly set-pipeline to update the config in your running Concourse installation. From there, it's easy to script tests against the updated pipeline that is now running in your Concourse installation.
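A rough sketch of that setup, assuming the concourse-pipeline-resource with placeholder URLs, team, and credentials throughout:

resource_types:
- name: concourse-pipeline
  type: docker-image
  source:
    repository: concourse/concourse-pipeline-resource

resources:
- name: pipeline-config          # git repo holding the pipeline YAML (placeholder)
  type: git
  source:
    uri: https://github.com/your-org/your-pipelines.git
- name: my-pipelines
  type: concourse-pipeline
  source:
    target: https://concourse.example.com
    teams:
    - name: main
      username: ((concourse-username))
      password: ((concourse-password))

jobs:
- name: set-pipelines
  plan:
  - get: pipeline-config
    trigger: true                # re-set the pipeline whenever the config repo changes
  - put: my-pipelines
    params:
      pipelines:
      - name: my-pipeline
        team: main
        config_file: pipeline-config/pipeline.yml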
fly validate-pipeline is pretty useful; running it against pipelines before merging has caught a few bugs in "obviously correct" changes for me.
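For example, you can validate the config locally before opening a merge request:

fly validate-pipeline -c pipeline.yml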
If you want to test the whole pipeline before merging you need to make sure that the data it's using is static and working (no sense in failing the pipeline if it's the repo that's broken), and that there are no side effects (like notifications) shared between the 'real pipeline' and the 'test pipeline'. I suspect that as long as you're careful with the restrictions, you could make it work, but it would have to be designed in the context of your existing pipelines and infrastructure.
I have a 'master' server (a Docker container, actually) where I want to install Jenkins in order to link it (with a webhook) to a GitHub repo, so every time a developer pushes code, Jenkins will auto-pull and build it.
The thing is that there are an arbitrary number of extra 'slave' servers that need to have the exact same code as the master.
I am thinking of writing an Ansible playbook, to be executed by Jenkins every time the webhook fires, that sends the code to the slaves.
Can Jenkins do something like this?
Do I need to make the same setup to all the slaves with Jenkins and webhooks?
EDIT:
I want to run a Locust (locust.io) master server on the same server that will have Jenkins. My load tests are going to be pulled from GitHub there, but the same code needs to reside on the slaves in order to run in distributed mode.
The short answer to your question is that Jenkins certainly has the ability to run Ansible playbooks. You can add a build step that runs the playbook to the project receiving the webhook.
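A sketch of what that "Execute shell" build step might contain (the inventory path and playbook name are placeholders):

# Push the freshly pulled code out to the slave servers listed in the inventory
ansible-playbook -i inventories/slaves.ini deploy-code.yml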
Jenkins can trigger another job, even on slaves. If I understand your issue correctly, you just need something like the Parameterized Remote Trigger Plugin: https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Remote+Trigger+Plugin
You can trigger your job by name. There is also another useful plugin called Artifactory, which manages and serves your packages. This means you can build your code once and share it, so the slaves can access the build and run the job.