Auto deploy with GitLab continuous integration/deployment

I want to set up auto deployment from a GitLab branch to a DigitalOcean droplet. I created a runner and ran git clone on the droplet. But now I can't configure my gitlab-ci.yml to auto-deploy from the "dev" branch to the droplet.
My gitlab-ci.yml:
image: python:3.5

staging:
  type: deploy
  only:
    - dev
  script:
    # there must be some kind of connection to the droplet here, so that the
    # code below is executed on the server
    - git pull
    # - server restart
How do I connect to the server in gitlab-ci.yml so that the git pull command runs there?

OK, I solved the problem. First, we need to register a GitLab CI runner on the server; the documentation shows how to do it. Then all commands from gitlab-ci.yml will execute on that server, so the git pull command will also run on the server where the runner was registered.
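For reference, with a shell runner registered on the droplet, a minimal version of this job could look like the sketch below; the project path and restart command are placeholders, not taken from the original question:

```yaml
staging:
  stage: deploy          # "type:" is a deprecated alias for "stage:"
  only:
    - dev
  script:
    # these commands run directly on the droplet, because that is where
    # the runner is registered
    - cd /var/www/app              # placeholder: path of the cloned repo
    - git pull origin dev
    - sudo systemctl restart app   # placeholder: your restart command
```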


CircleCI cannot find Serverless Framework after serverless installation

I'm trying to use Serverless Compose to deploy multiple services to AWS via CircleCI. I have 3 test services for a POC, and so far deploying these to a personal AWS account from the terminal works just fine. However, when I configure it to go through CircleCI with a config.yml file, I get this error:
Could not find the Serverless Framework CLI installation. Ensure Serverless Framework is installed before continuing.
I'm puzzled because my config.yml file looks like this:
version: 2.1

orbs:
  aws-cli: circleci/aws-cli@3.1.1
  serverless-framework: circleci/serverless-framework@2.0.0
  node: circleci/node@5.0.2

jobs:
  deploy:
    parameters:
      stage:
        type: string
    executor: serverless-framework/default
    steps:
      - checkout
      - aws-cli/install
      - serverless-framework/setup
      - run:
          command: serverless config credentials --provider aws --key $AWS_ACCESS_KEY_ID --secret $AWS_SECRET_ACCESS_KEY
          name: Configure serverless
      - run:
          command: npm install @serverless/compose
          name: Install @serverless/compose
      - run:
          command: serverless deploy --stage << parameters.stage >>
          name: Deploy staging

workflows:
  deploy-staging:
    jobs:
      - node/test:
          version: 17.3.0
      - deploy:
          context: aws-*******-developers
          name: ******-sandbox-use1
          stage: staging
The serverless framework is set up, the orb is present, but it says that it could not be found. All steps are successful until I get to deploy staging. I've been digging through documentation but I can't seem to find where it's going wrong with CircleCI. Does anyone know what I may be missing?
Turns out this required a weird fix, but it's best to remove the following:
The orb serverless-framework: circleci/serverless-framework@2.0.0
The setup step in the job - serverless-framework/setup
The Configure Serverless step
Once these are removed, modify the Install @serverless/compose step to run npm install and install all the packages. Then run npx serverless deploy instead of serverless deploy. This fixed the problem for me.
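With those removals applied, the deploy job might end up looking roughly like this; the node/default executor here is an assumption standing in for the removed serverless-framework/default:

```yaml
jobs:
  deploy:
    parameters:
      stage:
        type: string
    executor: node/default   # assumption: any executor with Node.js available
    steps:
      - checkout
      - aws-cli/install
      - run:
          command: npm install
          name: Install packages, including @serverless/compose
      - run:
          command: npx serverless deploy --stage << parameters.stage >>
          name: Deploy staging
```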

Azure DevOps push to GitLab

Within Azure DevOps I am trying to create a Command Line Script task that pushes the current DevOps repo to GitLab.
git clone https://xxxx@dev.azure.com/xxxx/DBS/_git/xxx
git remote add --mirror=fetch secondary https://oauth2:%pat%@gitlab.com/username/gitlabrepo.git
git fetch origin
git push secondary --all
In the env parameter %pat% I am referencing the Personal Access Token from GitLab.
When running the pipeline with the Command Line Script I am getting the following error:
start to push repo to gitlab
Cloning into 'gitlabrepo'...
remote: HTTP Basic: Access denied
fatal: Authentication failed for 'https://gitlab.com/***/gitlabrepo.git/'
##[debug]Exit code: 128
##[debug]Leaving Invoke-VstsTool.
##[error]Cmd.exe exited with code '128'.
How can this be achieved?
Make sure the commands work locally in git bash.
Run the git config --global --unset credential.helper command.
If the issue persists, try running git remote set-url origin https://usernameHere:personalAccessTokenHere@gitlab.com/usernameHere/projectNameHere
If you use a self-hosted agent, go to the agent machine and remove the credential in Credential Manager.
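Put together, the credential reset might look like the sketch below, reusing the placeholder names from above (usernameHere, personalAccessTokenHere). It is shown in a scratch repo; in the real pipeline only the last three commands run, inside the already-cloned repo:

```shell
# scratch repo for illustration only
git init -q gitlabrepo-demo && cd gitlabrepo-demo

# the remote as the pipeline script would have added it
git remote add secondary https://gitlab.com/usernameHere/gitlabrepo.git

# drop any cached credential helper so git uses the credentials in the URL
git config --global --unset credential.helper || true

# re-point the remote with the PAT inline (placeholder token)
GITLAB_PAT="personalAccessTokenHere"
git remote set-url secondary "https://oauth2:${GITLAB_PAT}@gitlab.com/usernameHere/gitlabrepo.git"
git remote get-url secondary
```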
The Azure DevOps agent is running this on an ubuntu-18.04 machine. It seems the connection to GitLab with the PAT fails because this is a corporate GitLab account with firewall restrictions: from my local machine, connected via VPN to the corporate network, it works, and git bash locally works fine. The error occurs because the machine started by the pipeline agent is not inside the company's VPN.

Gitlab + GKE + Gitlab CI unable to clone Repository

I'm trying to use GitLab CI with a GKE cluster to execute pipelines. I have experience with the Docker runner, but GKE is still pretty new to me. Here's what I did:
Create GKE cluster via Project settings in GitLab.
Install Helm Tiller via GitLab Project settings.
Install GitLab Runner via GitLab Project settings.
Create gitlab-ci.yml with the following content
before_script:
  - php -v

standard:
  image: falnyr/php-ci-tools:php-cs-fixer-7.0
  script:
    - php-cs-fixer fix --diff --dry-run --stop-on-violation -v --using-cache=no

lint:7.1:
  image: falnyr/php-ci:7.1-no-xdebug
  script:
    - composer build
    - php vendor/bin/parallel-lint --exclude vendor .
  cache:
    paths:
      - vendor/
Push commit to the repository
The pipeline output is as follows:
Running with gitlab-runner 10.3.0 (5cf5e19a)
on runner-gitlab-runner-666dd5fd55-h5xzh (04180b2e)
Using Kubernetes namespace: gitlab-managed-apps
Using Kubernetes executor with image falnyr/php-ci:7.1-no-xdebug ...
Waiting for pod gitlab-managed-apps/runner-04180b2e-project-5-concurrent-0nmpp7 to be running, status is Pending
Running on runner-04180b2e-project-5-concurrent-0nmpp7 via runner-gitlab-runner-666dd5fd55-h5xzh...
Cloning repository...
Cloning into '/group/project'...
remote: You are not allowed to download code from this project.
fatal: unable to access 'https://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@git.domain.tld/group/project.git/': The requested URL returned error: 403
ERROR: Job failed: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1
I now think that I should add a gitlab-ci-token user with a password somewhere, but I'm not sure it is supposed to work like this.
Thanks!
After reading more about the topic, it seems that pipelines should clone via HTTPS only (not SSH).
I enabled HTTPS communication, and when I execute the pipeline as a user who is a member of the project (an admin who is not added to the project gets this error), it works without a problem.

AWS ECR authentication for compose service on drone 0.4

I'm using Drone 0.4 as my CI. While trying to migrate from a self-hosted private registry to AWS's ECS/ECR, I've come across an authentication issue when referencing these images in my .drone.yml as a composed service,
for example
build:
  image: python:3.5
  commands:
    - some stuff

compose:
  db:
    image: <account_id>.dkr.ecr.us-east-1.amazonaws.com/reponame:latest
When the drone build runs, it errors out, as it should, saying:
Authentication required to pull from ECR. As I understand it, when you authenticate to AWS ECR you use something like aws-cli's ecr get-login, which gives you a temporary password. I know that I could inject that into my drone secret file and use that value in auth_config, but that would mean I'd have to update my secrets file every twelve hours (or however long that token lasts). Is there a way for drone to perform the authentication process itself?
You can run the authentication command in the same shell before executing your build/compose command.
Here's how we do it in our setup with Docker: we have this shell script as part of our Jenkins pipeline (it will work with or without Jenkins; all you have to do is configure your AWS credentials):
`aws ecr get-login --region us-east-1`
${MAVEN_HOME}/bin/mvn clean package docker:build -DskipTests
docker tag -f ${DOCKER_REGISTRY}/c-server ${DOCKER_REGISTRY}/c-server:${RELEASE_VERSION}
docker push ${DOCKER_REGISTRY}/c-server:${RELEASE_VERSION}
So when the Maven command creates the image, and the subsequent commands push it to ECR, they use the authentication obtained from the first command (the backticks execute the docker login command that get-login prints).
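Applied to the original .drone.yml, that suggestion might look roughly like this, assuming the aws CLI is available in the build image and AWS credentials are injected as secrets:

```yaml
build:
  image: python:3.5
  commands:
    # executes the docker-login command printed by get-login, so later
    # pulls/pushes against ECR in this shell are authenticated
    - eval $(aws ecr get-login --region us-east-1)
    - some stuff
```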

How to enable minion to connect to git repository using saltstack and capistrano

I am trying to run my Rails application on EC2 using SaltStack and Capistrano.
Here's what I have successfully completed so far.
Using salt-cloud and the Salt master I am able to create a new minion instance and set up everything required for the application to run, i.e. Ruby, Rails, Unicorn, MySQL, etc.
I have done the proper configuration for Capistrano. When I try to deploy, I see the following error:
DEBUG [ed84c6ab] Command: ( GIT_ASKPASS=/bin/echo GIT_SSH=/pathto/git-ssh.sh /usr/bin/env git ls-remote -h git@github.com:somehost/somerepo.git )
DEBUG [ed84c6ab] Warning: Permanently added 'github.com,ip' (RSA) to the list of known hosts.
DEBUG [ed84c6ab] Permission denied (publickey).
DEBUG [ed84c6ab] fatal: Could not read from remote repository.
DEBUG [ed84c6ab]
DEBUG [ed84c6ab] Please make sure you have the correct access rights
DEBUG [ed84c6ab] and the repository exists.
DEBUG [ed84c6ab] Finished in 12.600 seconds with exit status 128 (failed).
So this means that Capistrano, run from my local machine, is able to connect to the minion, but when it tries to check out the git repo it fails.
I know this is happening because the minion's SSH public key is not added to GitHub.
So the goal is:
run salt-cloud to create the instance
run a Salt highstate to install everything required for the app
run capistrano deploy to start the application
I would like to automate the GitHub authorization process too: once the minion is created, it should be able to clone the git repo without any manual intervention.
I am confused as to whether this should be done through Capistrano or SaltStack.
I used GitHub SSH agent forwarding to achieve this.
Here are the changes I made.
Steps to enable ssh forwarding for github
Then, in the Capistrano deploy.rb file, configure SSH forwarding by adding forward_agent: true:
set :ssh_options, {
  user: 'user',
  auth_methods: %w(publickey),
  port: <some port>,
  forward_agent: true
}
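On the SSH side, agent forwarding can be enabled with an ~/.ssh/config entry on the machine running Capistrano, assuming the GitHub key is already loaded into the local ssh-agent via ssh-add:

```
Host github.com
  ForwardAgent yes
```

With this in place, the minion reuses your local agent's key when cloning from GitHub, so no key ever needs to be installed on the minion itself.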