GitHub Action: Connect to Redshift Database Server

Currently we use CircleCI for build and deployment, and we are moving from CircleCI to GitHub Actions; I'm stuck on one specific step.
In CircleCI we connect to our production Redshift database and execute a bunch of SQL queries. How do I do the same using GitHub Actions?
Currently in CircleCI, we use a middleman node:
&connect_to_middleman_aws_node
  run:
    name: Connects to middleman node to forward connection to Redshift
    command: | # Remember to use virtual-env
      source /tmp/python_venv/bin/activate
      ssh -nNT -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -L $REDSHIFT_PORT:$REDSHIFT_HOST:$REDSHIFT_PORT ubuntu@airflow.xxxxxxx.com
    background: true
add_ssh_keys:
  fingerprints:
    - "0a:6e:61:b9:19:43:93:5c:8c:4c:7c:fc:6e:aa:74:89"
What is the equivalent in GitHub Actions? If anyone has done this, could you please share some sample code?
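A rough sketch of a GitHub Actions equivalent (not a drop-in answer: the secret names SSH_PRIVATE_KEY, REDSHIFT_USER, REDSHIFT_DB, REDSHIFT_PASSWORD and the queries.sql file are assumed placeholders) would be to store the middleman node's private key as a repository secret, open the SSH tunnel in the background inside a step, and then run the queries against localhost through the forwarded port:
name: redshift-queries
on: workflow_dispatch

jobs:
  run-sql:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Tunnel to Redshift via the middleman node and run the queries
        env:
          SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}    # assumed: the key CircleCI's add_ssh_keys installed
          REDSHIFT_HOST: ${{ secrets.REDSHIFT_HOST }}
          REDSHIFT_PORT: ${{ secrets.REDSHIFT_PORT }}
          REDSHIFT_USER: ${{ secrets.REDSHIFT_USER }}         # assumed
          REDSHIFT_DB: ${{ secrets.REDSHIFT_DB }}             # assumed
          PGPASSWORD: ${{ secrets.REDSHIFT_PASSWORD }}        # assumed
        run: |
          # Recreate the SSH key on the runner
          mkdir -p ~/.ssh
          echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          # Open the port forward in the background, like CircleCI's background: true
          ssh -i ~/.ssh/id_rsa -nNT \
            -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
            -L "$REDSHIFT_PORT:$REDSHIFT_HOST:$REDSHIFT_PORT" \
            ubuntu@airflow.xxxxxxx.com > /dev/null 2>&1 &
          sleep 5   # give the tunnel a moment to come up
          # Redshift speaks the Postgres protocol; install postgresql-client first if psql is missing on the runner
          psql -h 127.0.0.1 -p "$REDSHIFT_PORT" -U "$REDSHIFT_USER" -d "$REDSHIFT_DB" -f queries.sql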

Related

Azure DevOps Pipeline local MariaDB

I want to migrate my GitHub Actions pipeline to Azure DevOps; unfortunately I wasn't able to find an alternative to the GitHub action "ankane/setup-mariadb@v1".
For my pipeline I need to create a local MariaDB instance with a database loaded from a .sql file.
I also need to create a user for that database.
This was my code in my GitHub pipeline:
- name: Installing MariaDB
  uses: ankane/setup-mariadb@v1
  with:
    mariadb-version: ${{ matrix.mariadb-version }}
    database: DatabaseName
- name: Creating MariaDB User
  run: |
    sudo mysql -D DatabaseName -e "CREATE USER 'Username'@localhost IDENTIFIED BY 'Password';"
    sudo mysql -D DatabaseName -e "GRANT ALL PRIVILEGES ON DatabaseName.* TO 'Username'@localhost;"
    sudo mysql -D DatabaseName -e "FLUSH PRIVILEGES;"
- name: Importing Database
  run: |
    sudo mysql -D DatabaseName < ./test/database.sql
Does anybody know if there is an alternative for Azure DevOps pipelines?
Cheers,
"Does anybody know if there is an alternative for Azure DevOps pipelines?"
If the alternative you mentioned means some task in an Azure DevOps pipeline that can do the same thing as 'ankane/setup-mariadb@v1' does in GitHub, then the answer is no.
Azure DevOps doesn't have a built-in task like this, and even the marketplace doesn't have an extension that does it.
So you have two ways:
1. If your pipeline runs on a Microsoft-hosted agent, everything has to be set up via commands (see the sketch after this list), e.g. following:
How to Install and Start Using MariaDB on Ubuntu 20.04
2. If your pipeline runs on a self-hosted agent, you can set up the environment (MariaDB) before starting the pipeline and then use it in your DevOps pipeline.
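For option 1, a minimal sketch of what the pipeline could look like (assumptions: an ubuntu-latest Microsoft-hosted image and the DatabaseName/Username/Password placeholders from the question; depending on the image you may need to stop or remove the preinstalled MySQL first):
pool:
  vmImage: 'ubuntu-latest'

steps:
  - script: |
      # Install and start MariaDB on the hosted agent
      sudo apt-get update
      sudo apt-get install -y mariadb-server
      sudo systemctl start mariadb
    displayName: 'Install MariaDB'
  - script: |
      # Create the database and user, then import the dump
      sudo mysql -e "CREATE DATABASE DatabaseName;"
      sudo mysql -e "CREATE USER 'Username'@'localhost' IDENTIFIED BY 'Password';"
      sudo mysql -e "GRANT ALL PRIVILEGES ON DatabaseName.* TO 'Username'@'localhost'; FLUSH PRIVILEGES;"
      sudo mysql -D DatabaseName < ./test/database.sql
    displayName: 'Create user and import database'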

Adding a Second Service with AWS Copilot

I'm very familiar with doing all of this (quite tedious) stuff manually with ECS.
I'm experimenting with Copilot - which is really working - I have one service up really easily, but my solution has multiple services/containers.
How do I now add a second service/container to my cluster?
Short answer: change to your second service's code directory and run copilot init again! If you need to specify a different dockerfile, you can use the --dockerfile flag. If you need to use an existing image, you can use --image with the name of an existing container registry.
Long answer:
Copilot stores metadata in SSM Parameter Store in the account which was used to run copilot app init or copilot init, so as long as you don't change the AWS credentials you're using when you run Copilot, everything should just work when you run copilot init in a new repository.
Some other use cases:
If it's an existing image like redis or postgres and you don't need to customize anything about the actual image or expose it, you can run
copilot init -t Backend\ Service --image redis --port 6379 --name redis
If your service lives in a separate code repository and needs to access the internet, you can cd into that directory and run
copilot init --app $YOUR_APP_NAME --type Load\ Balanced\ Web\ Service --dockerfile ./Dockerfile --port 1234 --name $YOUR_SERVICE_NAME --deploy
So all you need to do is run copilot init --app $YOUR_APP_NAME with the same AWS credentials in a new directory, and you'll be able to set up and deploy your second service.
Copilot also allows you to set up persistent storage associated with a given service by using the copilot storage init command. This specifies a new DynamoDB table or S3 bucket, which will be created when you run copilot svc deploy. It will create one storage addon per environment you deploy the service to, so as not to mix test and production data.
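For example (the bucket name here is just a placeholder, not from the original answer), an S3 bucket for a service could be declared with:
copilot storage init -n my-bucket -t S3
Copilot prompts for anything not supplied as a flag, such as which service the storage should be attached to.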

Gitlab + GKE + Gitlab CI unable to clone Repository

I'm trying to use GitLab CI with a GKE cluster to execute pipelines. I have experience using the Docker runner, but GKE is still pretty new to me. Here's what I did:
Create GKE cluster via Project settings in GitLab.
Install Helm Tiller via GitLab Project settings.
Install GitLab Runner via GitLab Project settings.
Create .gitlab-ci.yml with the following content:
before_script:
  - php -v
standard:
  image: falnyr/php-ci-tools:php-cs-fixer-7.0
  script:
    - php-cs-fixer fix --diff --dry-run --stop-on-violation -v --using-cache=no
lint:7.1:
  image: falnyr/php-ci:7.1-no-xdebug
  script:
    - composer build
    - php vendor/bin/parallel-lint --exclude vendor .
  cache:
    paths:
      - vendor/
Push a commit to the repository.
The pipeline output is the following:
Running with gitlab-runner 10.3.0 (5cf5e19a)
on runner-gitlab-runner-666dd5fd55-h5xzh (04180b2e)
Using Kubernetes namespace: gitlab-managed-apps
Using Kubernetes executor with image falnyr/php-ci:7.1-no-xdebug ...
Waiting for pod gitlab-managed-apps/runner-04180b2e-project-5-concurrent-0nmpp7 to be running, status is Pending
Running on runner-04180b2e-project-5-concurrent-0nmpp7 via runner-gitlab-runner-666dd5fd55-h5xzh...
Cloning repository...
Cloning into '/group/project'...
remote: You are not allowed to download code from this project.
fatal: unable to access 'https://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@git.domain.tld/group/project.git/': The requested URL returned error: 403
ERROR: Job failed: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1
Now I think I should add a gitlab-ci-token user with a password somewhere, but I'm not sure if it is supposed to work like this.
Thanks!
After reading more about the topic, it seems that pipelines should clone the repository via HTTPS only (not SSH).
I enabled HTTPS communication, and when I execute the pipeline as a user who is a member of the project (an admin who is not added to the project throws this error), it works without a problem.

AWS ECR authentication for compose service on drone 0.4

I'm using Drone 0.4 as my CI. While trying to migrate from a self-hosted private registry to AWS's ECS/ECR, I've come across an authentication issue when referencing these images in my .drone.yml as a composed service,
for example
build:
  image: python:3.5
  commands:
    - some stuff
compose:
  db:
    image: <account_id>.dkr.ecr.us-east-1.amazonaws.com/reponame:latest
When the Drone build runs it errors out, as it should, saying that authentication is required to pull from ECR.
As I understand it, to authenticate to AWS ECR you use something like aws-cli's ecr get-login, which gives you a temporary password. I know that I could inject that into my drone secrets file and use that value in auth_config, but that would mean I'd have to update my secrets file every twelve hours (or however long that token lasts). Is there a way for drone to perform the authentication process itself?
You can run the authentication command in the same shell before executing your build/compose command.
Here is how we do it in our setup with Docker: we have this shell script as part of our Jenkins pipeline (the shell script will work with or without Jenkins; all you have to do is configure your AWS credentials):
`aws ecr get-login --region us-east-1`
${MAVEN_HOME}/bin/mvn clean package docker:build -DskipTests
docker tag -f ${DOCKER_REGISTRY}/c-server ${DOCKER_REGISTRY}/c-server:${RELEASE_VERSION}
docker push ${DOCKER_REGISTRY}/c-server:${RELEASE_VERSION}
So when the Maven command that creates the image and the subsequent commands that push it to ECR run, they use the authentication obtained from the first command.
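Applied to the .drone.yml above, the same trick might look roughly like this (a sketch only: it assumes the aws CLI can be installed in the build image and that AWS credentials are exposed to the step, and it only authenticates commands that run inside that step):
build:
  image: python:3.5
  commands:
    - pip install awscli
    # get-login prints a ready-made `docker login` command; eval executes it
    - eval "$(aws ecr get-login --region us-east-1)"
    - some stuff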

Ansible playbook: pipeline local cmd output (e.g. git archive) to server?

So my project has a special infrastructure: the server only has an SSH connection, so I have to upload my project code to the server using SSH/SFTP every time, manually. The server cannot fetch.
Basically I need something like git archive master | ssh user@host 'tar -zxvf -', automatically done using a playbook.
I looked at the docs; local_action seems to work, but it requires a local SSH setup. Are there other ways around this?
How about something like this? You may have to tweak it to suit your needs.
tasks:
- shell: git archive master /tmp/master.tar.gz
- unarchive: src=/tmp/master.tar.gz dest={{dir_to_untar}}
I still do not understand what you mean by "it requires a local ssh setup" in your question.