Run GitLab CI for another project - GitHub

We have an application that is managed by a third party. They use GitHub to store the source code. My company uses GitLab for internal projects. We set up a GitLab mirror to pull the source code (including the dev, staging, and master branches) from GitHub, and it's working well.
Now my manager wants to set up a GitLab pipeline to automate the build, test, and deploy process. I did this by committing a .gitlab-ci.yml file to the branch, but that doesn't work: the next time GitLab pulls from GitHub, the mirror overwrites the branch and removes my .gitlab-ci.yml file. So I must find another solution.
Below is my current idea:
Create a separate project that only contains the .gitlab-ci.yml file
Detect changes on any branch in the mirror repo
Trigger the pipeline
If anyone has another idea for this case, please help me.
P/S: the third party won't agree to add my .gitlab-ci.yml file to their repo on GitHub.
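One feature worth checking before building the separate-project trigger machinery by hand: GitLab can load a project's CI configuration from a different project ("CI/CD configuration file" under Settings > CI/CD > General pipelines), so the mirror never needs a .gitlab-ci.yml committed to its branches. A minimal sketch, assuming a hypothetical config project at my-group/ci-config:

```yaml
# .gitlab-ci.yml stored in my-group/ci-config (hypothetical path).
# In the mirror project, point Settings > CI/CD > General pipelines >
# "CI/CD configuration file" at:  .gitlab-ci.yml@my-group/ci-config
stages:
  - build
  - test

build:
  stage: build
  script:
    - echo "building $CI_COMMIT_REF_NAME"   # replace with your real build command

test:
  stage: test
  script:
    - echo "running tests"                  # replace with your real test command
```

With this setup, pipelines run in the mirror project on every mirrored update, while the configuration file itself lives safely in the separate project.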

Related

How to copy files from one git repo to another git repo in Azure pipelines task?

There is a public source repo on GitHub where all the source code is present. There is another GitHub repo of mine which has some configuration files.
I want to run some tests from the source repo using the configuration files in my GitHub repo, via an Azure pipeline task.
How can I check out the GitHub source repo first and then do the initial setup, like building that repo? And after that, copy the configuration files from my other GitHub repo into the source repo directory and run the source repo's tests.
I want to do these steps in Azure YAML pipelines, since not all artifacts are accessible from Azure release pipelines.
Checking out multiple repos is possible, also with GitHub as a source, but don't forget to set up a GitHub service connection.
More info and options about this see: https://learn.microsoft.com/en-us/azure/devops/pipelines/repos/multi-repo-checkout?view=azure-devops#specify-multiple-repositories
Since you want the GitHub repo to trigger the Azure DevOps pipeline, please check out the feature that has been available since October 2022:
https://learn.microsoft.com/en-us/azure/devops/pipelines/ecosystems/github-actions?view=azure-devops
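The multi-repo checkout described above can be sketched roughly like this; the repo names, service connection name, and script paths are assumptions you would replace with your own:

```yaml
# azure-pipelines.yml - multi-repo checkout sketch (names are placeholders)
resources:
  repositories:
    - repository: source              # alias used in the checkout step below
      type: github
      endpoint: my-github-connection  # name of your GitHub service connection
      name: some-org/source-repo      # owner/repo of the public source repo

steps:
  - checkout: self     # your config repo
  - checkout: source   # the GitHub source repo
  # With multiple checkout steps, each repo lands in its own folder under
  # $(Build.SourcesDirectory); copy the config files over, then build and test.
  - script: |
      cp my-config-repo/config/* source/
      cd source
      ./build.sh && ./run-tests.sh
    workingDirectory: $(Build.SourcesDirectory)
```

The exact folder names under $(Build.SourcesDirectory) depend on your repo names, so verify them with a directory listing in a first run.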

Azure Data Factory Deployment changes not reflecting after integrate with git repository

I have two instances of Azure Data Factory. One is PROD and the other is DEV.
My DEV ADF is integrated with a git repository, and all development is done in this ADF instance.
Once the code is ready for production deployment, CI/CD steps deploy the DEV ADF into PROD.
This functionality is working fine.
Recently I made a few changes in my PROD ADF instance, upgrading ADLS Gen1 to Gen2 and altering a few pipelines as well. These changes were made directly in the PROD instance of ADF.
Now I have to deploy these changes to the DEV instance in order to bring both instances in sync before proceeding with further development.
In order to achieve this, I followed the steps below.
Remove the git integration of the DEV ADF instance.
Integrate PROD ADF into a new git repository and publish.
Run the build and release pipelines to deploy PROD into DEV.
I could see that the changes in PROD and DEV were in sync.
Now I want to re-integrate the DEV ADF in order to proceed with further development.
When I re-integrate the DEV ADF into the collaboration branch (master) of the existing dev instance repository as shown below, I see discrepancies in the pipeline count and linked service count.
The pipelines and linked services which were deleted from PROD are still there in the DEV ADF master branch.
When I remove the git integration of the DEV ADF, both DEV and PROD ADF are in sync again.
I tried integrating the DEV ADF into a new branch of the same dev repository as shown below,
but I could still see the pipelines and linked services that were deleted from production in the dev ADF.
It seems that pipelines and linked services which changed are getting updated, but items which were deleted are not removed from the dev master repository.
Is there any way to clean up the master branch and import only the existing resources at the time of git re-integration?
The only possible way I could find is to create a new repository instead of re-integrating the existing one, but it is difficult to keep changing repositories, and the branches and changes already created in the existing repository would be lost.
Is there any way to make ADF take only the existing resources into the master branch when I re-integrate the repository, instead of merging them with the existing code in master?
These things happen. ADF Git integrations are a bit different, so there's a learning curve to getting a hold of them. I've been there. Never fear. There is a solution.
There are two things to address here:
Fixing your process so this doesn't happen again.
Fixing the current problem.
The first place you went wrong was making changes directly in PRD. You should have made these changes in DEV and promoted according to standard process.
The next places you went wrong were removing DEV from Git and then adding PRD to Git. PRD should not be connected to Git at any point, and you shouldn't be juggling Git integrations. It's dangerous and can lead to lost work.
Ensure that you do not repeat these mistakes, and you will prevent complicating things like this going forward.
In order to fix the current issues it's worth pointing out that with ADF Git integrations, you don't have to use the ADF editor for everything. You are totally able to manipulate Git repos cloned to your local file system with standard Git tools, and this is going to be the key to digging yourself out. (It's also what you should have done in the first place to retrofit PRD changes back into DEV.)
Basically, if your PRD master contains the objects as you want them, then first clone that branch to your local file system. Elsewhere on your drive, clone a feature branch of your DEV repo to the file system. In order to bring these in sync, you just copy the PRD master contents and paste them into the DEV feature branch directory and push changes. Now, this DEV feature branch matches PRD master. A merge and pull request from this DEV feature branch to DEV master will then bring DEV master in sync with PRD master (assuming the merge is done correctly).
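The clone-copy-push sequence above can be sketched with plain Git tools; the repo URLs and branch names below are placeholders, not your actual ones:

```shell
# Clone PRD master and the DEV repo side by side
git clone -b master https://dev.azure.com/<org>/<project>/_git/adf-prd prd
git clone https://dev.azure.com/<org>/<project>/_git/adf-dev dev

cd dev
git checkout -b feature/sync-from-prd

# Overwrite DEV contents with PRD master, keeping DEV's own .git folder
rsync -a --delete --exclude='.git' ../prd/ ./

git add -A
git commit -m "Sync DEV with PRD master"
git push -u origin feature/sync-from-prd
# Finally, open a pull request from feature/sync-from-prd into DEV master
```

The --delete flag is what removes the objects that no longer exist in PRD, which is exactly the part ADF's own re-integration does not do.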
Even when not having to do things like this, it can be helpful to have your ADF Git repo cloned locally so you have specific control over things. There are times when ADF orphans objects, and you can clean them up via the file system and Git tools without having to wrestle the ADF editor as such.

Jenkins - MultiBranch Pipeline : Could not fetch branches from source

I am trying to create a Multibranch Pipeline project in Jenkins with GitHub.
On the status page of the project there is a message saying that no branch contains a Jenkinsfile, so the project is not built, as we can see in this image:
When I scan the repository, the log shows
I configured the project with a GitHub source, as we can see in this image:
The URI of the repository, which has the Jenkinsfile in its root, is:
https://github.com/AleGallagher/Prueba1
Could you help me please? I've spent many hours with this and I don't know what to do.
Thank you!
To use a Multibranch Pipeline it is mandatory to have a Jenkinsfile in the repository branch.
How does it work?
The Multibranch Pipeline job first scans all of your repository's branches looking for a Jenkinsfile. If a branch meets the criteria, it proceeds by executing the Jenkinsfile code and goes ahead with the build; if it can't find a Jenkinsfile, the console will report "criteria not met, Jenkinsfile not found in branch".
For more about the Jenkinsfile, see https://jenkins.io/doc/book/pipeline/jenkinsfile/
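For reference, a minimal declarative Jenkinsfile looks like this (the stage contents are placeholders); once it is committed to the root of a branch, the repository scan will pick that branch up:

```groovy
// Jenkinsfile in the repository root of each branch you want built
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building...'   // replace with your real build steps
            }
        }
    }
}
```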
Recommendations:
Choose git as the option for the branch source.
Set credentials; prefer SSH, with the private key on the Jenkins side.
Make sure you have the correct access to the repository; if not, grant access by adding the same user's SSH public key to the repository.
Let me know if the issue still persists.

Not authorized to execute any SonarQube analysis when building a pull request from a forked repo on Travis CI

I'm setting up a project with Travis CI and SonarQube.com. Everything goes smoothly when a pull request comes from a branch of the repository itself, but it fails when Travis runs a build off a pull request from a forked repository.
A build out of a PR from the repository: https://travis-ci.org/PistachoSoft/dummy-calculator/builds/162905730
A build out of a PR from a forked repository: https://travis-ci.org/PistachoSoft/dummy-calculator/builds/162892678
The repository: https://github.com/PistachoSoft/dummy-calculator
As can be seen in the build log, this is the error:
You're not authorized to execute any SonarQube analysis. Please contact your SonarQube administrator.
Things I've tried out but didn't work out:
Updating the sonar token.
Using an encrypted token granted by another person from the organization.
Granting 'sonar-users' and 'Anyone' the 'Execute Analysis' permission on the SonarQube project.
What can I do to fix this?
First, let me draw your attention to one important point: you should not run a "standard" SonarQube analysis on PRs - otherwise your project on SonarQube.com will be "polluted" by intermediate analyses that have nothing to do with each other. Standard analyses must be executed only on the main development branch - which is usually the "master" branch. Please read the runSonarQubeAnalysis.sh file of our sample projects to see how to achieve that.
Now, why doesn't your attempt work? Simply because the SONAR_TOKEN environment variable (which you've set as "secure" in your YML file) will not be decoded by Travis when the PR comes "from the outside world" (i.e. when it's not a PR of your own). This is a security constraint to prevent anybody from forking your repo, updating the YML file with an echo $SONAR_TOKEN, submitting a PR, and gently waiting for Travis to execute it and unveil the secured environment variable.
Analyzing "external" PRs is something we'll soon be working on so that it is easy, straightforward, and yet secure for OSS projects to benefit from this feature.
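In the meantime, one common workaround is to guard the analysis step in the build script: Travis sets TRAVIS_SECURE_ENV_VARS to "true" only when secure variables were actually decrypted, so external PRs can skip the analysis instead of failing. A sketch (the sonar-scanner invocation is commented out as a placeholder):

```shell
# Run SonarQube analysis only when secure variables are available,
# i.e. not on pull requests from forked repositories.
if [ "${TRAVIS_SECURE_ENV_VARS:-false}" = "true" ]; then
  run_analysis=yes
  # sonar-scanner -Dsonar.login="$SONAR_TOKEN"   # real analysis would go here
else
  run_analysis=no
  echo "Secure variables unavailable (external PR) - skipping SonarQube analysis"
fi
```

This keeps fork PR builds green while still analyzing pushes and internal PRs.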

Automated go app deployment

I'm wondering if there are any convenient ways to automate deployment of Go code to a live server, either with standard built-in methods or otherwise.
I want something Google App Engine-like: I just run the command and it uploads to the server and triggers a restart.
(Ultimately I want a git commit to trigger a rebuild and redeploy, but that's for down the track in the future.)
I recommend Travis CI + Heroku.
You can deploy to heroku directly with just a git push, but I like to use Travis to build and run the tests before that.
There are some guides online but I'll try to go directly to the point:
What you will need?
Github account
Travis account (linked with github, free if open source)
Empty Heroku app (Free dyno works great)
Setup
In your github repo, create the following files:
.travis.yml (more info on the Travis CI documentation)
Procfile
.go-dir
After that, go to your Travis account, add your repository, and enable the build for it.
Here is a sample minimal config file content (based on my app that I deploy to heroku):
.travis.yml
language: go
go:
  - tip
deploy:
  provider: heroku
  buildpack: https://github.com/kr/heroku-buildpack-go.git
  api_key:
    secure: <your heroku api key encrypted with travis encrypt>
  on: master
Procfile
worker: your-app-binary
.go-dir
your-app-binary
Procfile and .go-dir are Heroku configs, so they can vary if you are deploying a web app; you can read more in the Heroku documentation.
One important and easily missed point is the buildpack; without it the deploy will not work.
Read the Travis docs to see how to encrypt the Heroku key.
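For reference, the key is usually encrypted with the Travis CLI; the commands below assume you have Ruby and the Heroku CLI installed:

```shell
gem install travis    # install the Travis CLI
travis login          # authenticate against Travis CI
# Encrypt the Heroku API key and write it into .travis.yml under deploy.api_key
travis encrypt $(heroku auth:token) --add deploy.api_key
```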
How it works?
Basically, every push to your repository will trigger the Travis CI build; if it passes, the app is deployed to Heroku. You set this up once, and build + deploy is just a push away ;)
Also, Travis will build and update the status of all pull requests to your repository automagically.
To see my config and build, please take a look at my Travis build and my repository with my working configs