How to connect to Cloud SQL from Cloud Build to run knex database migration? - google-cloud-sql

I investigated similar questions on Stack Overflow, but unfortunately nothing helped.
I have the following cloudbuild.yaml file:
steps:
- name: 'node:14.16.0'
  entrypoint: 'yarn'
  id: yarn-install
  args: ['install']
  waitFor: ["-"]
- name: gcr.io/cloud-builders/yarn
  id: proxy-install
  entrypoint: sh
  args:
    - "-c"
    - "wget https://storage.googleapis.com/cloudsql-proxy/v1.23.0/cloud_sql_proxy.linux.amd64 -O /workspace/cloud_sql_proxy && chmod +x /workspace/cloud_sql_proxy"
  waitFor: ["-"]
- id: migrate
  name: gcr.io/cloud-builders/yarn
  env:
    - NODE_ENV=$_NODE_ENV
    - DB_NAME=$_DB_NAME
    - DB_USER=$_DB_USER
    - DB_PASSWORD=MY_FAKE_PASSWORD
    - CLOUD_SQL_CONNECTION_NAME=$_CLOUD_SQL_CONNECTION_NAME
  entrypoint: sh
  args:
    - "-c"
    - "(./workspace/cloud_sql_proxy -dir=/workspace -instances=$_CLOUD_SQL_CONNECTION_NAME & sleep 2) && yarn run knex migrate:latest"
  timeout: "1200s"
  waitFor: ["yarn-install", "proxy-install"]
I want to connect to my Cloud SQL database and apply schema migrations by running yarn run knex migrate:latest, but it fails at the migrate step.
Logs from Cloud Build:
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
error Command failed with exit code 1.
at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1145:16)
Error: connect ENOENT /cloudsql/project:us-east1:project-posgresql1/.s.PGSQL.5432
Using environment: production
Working directory changed to /workspace/src/infrastructure/knex
Requiring external module ts-node/register
$ ./node_modules/knex/bin/cli.js --knexfile=./src/infrastructure/knex/knex.config.ts migrate:latest --env production migrate:latest
yarn run v1.22.5
sh: 1: ./workspace/cloud_sql_proxy: not found
Already have image (with digest): gcr.io/cloud-builders/yarn
I don't know how to debug this correctly... Could you help me find the root cause of the problem?
P.S.
...@cloudbuild.gserviceaccount.com has the following roles:
Cloud Build Service Account
Cloud SQL Client
Service Account User
Cloud Run Admin
Secret Manager Secret Accessor
P.P.S. knex is a JavaScript query builder for SQL-like databases.

By design, you need to run the Cloud SQL proxy in the same step as the one that uses it: each Cloud Build step runs in its own container, and a background process started in one step does not survive into the next. (I have never tested with waitFor: ["-"]; it might be a workaround.) Note also that your migrate step invokes ./workspace/cloud_sql_proxy, a relative path, rather than /workspace/cloud_sql_proxy, which is exactly what the sh: 1: ./workspace/cloud_sql_proxy: not found line in your log is complaining about. This solution should work for you:
- name: gcr.io/cloud-builders/yarn
  id: proxy-install
  entrypoint: sh
  env:
    - NODE_ENV=$_NODE_ENV
    - DB_NAME=$_DB_NAME
    - DB_USER=$_DB_USER
    - DB_PASSWORD=MY_FAKE_PASSWORD
    - CLOUD_SQL_CONNECTION_NAME=$_CLOUD_SQL_CONNECTION_NAME
  args:
    - "-c"
    - |
      wget https://storage.googleapis.com/cloudsql-proxy/v1.23.0/cloud_sql_proxy.linux.amd64 -O /workspace/cloud_sql_proxy && chmod +x /workspace/cloud_sql_proxy
      (/workspace/cloud_sql_proxy -dir=/workspace -instances=$_CLOUD_SQL_CONNECTION_NAME & sleep 2) && yarn run knex migrate:latest
  timeout: "1200s"
Give it a try and let me know.
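One more thing worth checking once the proxy runs in the right step: the ENOENT in your log shows knex looking for the socket under /cloudsql/..., while the proxy is started with -dir=/workspace, which creates its socket under /workspace/<CONNECTION_NAME>/.s.PGSQL.5432. The two paths have to agree: either start the proxy with -dir=/cloudsql (creating that directory first) or point knex at /workspace. As a minimal sketch, assuming the pg client and the environment variables from the build step (your real knex.config.ts may look different):

// Hypothetical knexfile sketch; adjust to your actual knex.config.ts
module.exports = {
  production: {
    client: 'pg',
    connection: {
      // cloud_sql_proxy -dir=/workspace creates the Unix socket here;
      // the pg driver treats a host starting with '/' as a socket directory
      host: `/workspace/${process.env.CLOUD_SQL_CONNECTION_NAME}`,
      user: process.env.DB_USER,
      password: process.env.DB_PASSWORD,
      database: process.env.DB_NAME,
    },
  },
}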

Related

Error running a Docker container or docker compose with postgres, golang and Debian 11, Agora appbuilder backend

I spun up a Debian 11 EC2 instance on AWS and installed Postgres 14.5, Docker, and Docker Compose on it. I added a superuser "admin" with a password to Postgres. I created my docker-compose.yml file and a .env file.
When I try to use the docker-compose.yml file, I get:
sudo docker compose up -d
services.database.environment must be a mapping
When I build my docker container with
sudo docker build . -t tvappbuilder:latest
and then try to run it with:
sudo docker run -p 8080:8080 tvappbuilder:latest --env-file .env -it
Config Path .
4:47PM INF server/utils/logging.go:105 > logging configured fileLogging=true fileName=app-builder-logs logDirectory=./logs maxAgeInDays=0 maxBackups=0 maxSizeMB=0
4:47PM FTL server/cmd/video_conferencing/server.go:71 > Error initializing database error="pq: Could not detect default username. Please provide one explicitly"
Here are the Docker images so far:
sudo docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 6e5f035abda5 18 hours ago 1.82GB
tvappbuilder latest 6166e24a47e0 21 hours ago 21.8MB
<none> <none> cedcaf2facd1 21 hours ago 1.82GB
hello-world latest feb5d9fea6a5 12 months ago 13.3kB
golang 1.15.1 9f495162f677 2 years ago 839MB
Here is the docker-compose.yml:
version: 3.7
services:
  server:
    container_name: server
    build: .
    depends_on:
      - database
    ports:
      - 8080:8080
    environment:
      - APP_ID: $APP_ID
      - APP_CERTIFICATE: $APP_CERTIFICATE
      - CUSTOMER_ID: $CUSTOMER_ID
      - CUSTOMER_CERTIFICATE: $CUSTOMER_CERTIFICATE
      - BUCKET_NAME: $BUCKET_NAME
      - BUCKET_ACCESS_KEY: $BUCKET_ACCESS_KEY
      - BUCKET_ACCESS_SECRET: $BUCKET_ACCESS_SECRET
      - CLIENT_ID: $CLIENT_ID
      - CLIENT_SECRET: $CLIENT_SECRET
      - PSTN_USERNAME: $PSTN_USERNAME
      - PSTN_PASSWORD: $PSTN_PASSWORD
      - SCHEME: $SCHEME
      - ALLOWED_ORIGIN: ""
      - ENABLE_NEWRELIC_MONITORING: false
      - RUN_MIGRATION: true
      - DATABASE_URL: postgresql://$POSTGRES_USER:$POSTGRES_PASSWORD@database:5432/$POSTGRES_DB?sslmode=disable
  database:
    container_name: server_database
    image: postgres-14.5
    restart: always
    hostname: database
    environment:
      - POSTGRES_USER: $POSTGRES_USER
      - POSTGRES_PASSWORD: $POSTGRES_PASSWORD
      - POSTGRES_DB: $POSTGRES_DB
Here is the Dockerfile:
## Using Dockerfile from the following post: https://medium.com/@petomalina/using-go-mod-download-to-speed-up-golang-docker-builds-707591336888
FROM golang:1.15.1 as build-env
# All these steps will be cached
RUN mkdir /server
WORKDIR /server
COPY go.mod .
COPY go.sum .
# Get dependencies - will also be cached if mod/sum don't change
RUN go mod download
# COPY the source code as the last step
COPY . .
# Build the binary
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -installsuffix cgo -o /go/bin/server /server/cmd/video_conferencing
# Second step to build minimal image
FROM scratch
COPY --from=build-env /go/bin/server /go/bin/server
COPY --from=build-env /server/config.json config.json
ENTRYPOINT ["/go/bin/server"]
and here is the .env file:
ENCRYPTION_ENABLED=0
POSTGRES_USER=admin
POSTGRES_PASSWORD=<correct pswd for admin>
POSTGRES_DB=tvappbuilder
APP_ID=<my real app ID>
APP_CERTIFICATE=<my real app cert>
CUSTOMER_ID=<my real ID>
CUSTOMER_CERTIFICATE=<my real cert>
RECORDING_REGION=0
BUCKET_NAME=<my bucket name>
BUCKET_ACCESS_KEY=<my real key>
BUCKET_ACCESS_SECRET=<my real secret>
CLIENT_ID=
CLIENT_SECRET=
PSTN_USERNAME=
PSTN_PASSWORD=
PSTN_ACCOUNT=
PSTN_EMAIL=
SCHEME=esports1_agora
ENABLE_SLACK_OAUTH=0
SLACK_CLIENT_ID=
SLACK_CLIENT_SECRET=
GOOGLE_CLIENT_ID=
ENABLE_GOOGLE_OAUTH=0
GOOGLE_CLIENT_SECRET=
ENABLE_MICROSOFT_OAUTH=0
MICROSOFT_CLIENT_ID=
MICROSOFT_CLIENT_SECRET=
APPLE_CLIENT_ID=
APPLE_PRIVATE_KEY=
APPLE_KEY_ID=
APPLE_TEAM_ID=
ENABLE_APPLE_OAUTH=0
PAPERTRAIL_API_TOKEN=<my real token>
According to this: https://pkg.go.dev/github.com/lib/pq, I probably should not need to use pq and could instead use Postgres directly, but it appears it was set up this way.
Many thanks for any pointers!
As per the comments, there are a number of issues with your setup.
The first is the error services.database.environment must be a mapping when running docker compose up -d. This is caused by lines like - APP_ID: $APP_ID in your docker-compose.yml: use either the mapping form APP_ID: $APP_ID or the list form - APP_ID=$APP_ID, as per the documentation and the sketch below.
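For illustration, the two accepted forms look like this, shown with a couple of your variables (use one form or the other, not a mix):

environment:
  APP_ID: $APP_ID
  APP_CERTIFICATE: $APP_CERTIFICATE

or:

environment:
  - APP_ID=$APP_ID
  - APP_CERTIFICATE=$APP_CERTIFICATE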
A further issue is that you installed Postgres on the bare OS and are then also running a postgres container. You only need one or the other, and if you use Docker you will want a volume or bind mount for the Postgres data, otherwise it will be lost when the container is rebuilt; see the sketch below.
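A minimal sketch of the database service with a named volume (the volume name pgdata is illustrative; note the image tag also needs a colon, postgres:14.5, not postgres-14.5):

  database:
    image: postgres:14.5
    restart: always
    environment:
      POSTGRES_USER: $POSTGRES_USER
      POSTGRES_PASSWORD: $POSTGRES_PASSWORD
      POSTGRES_DB: $POSTGRES_DB
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:

Another thing to check: in your docker run command the options come after the image name, so Docker passes them to the container's entrypoint as arguments instead of interpreting them itself; that is likely why the app starts without its environment and pq cannot detect a username. Options must precede the image:

sudo docker run --env-file .env -p 8080:8080 -it tvappbuilder:latest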
There are probably further issues but the above should get you started.

Waiting for the database service to be running before other services start in Docker [duplicate]

This question already has answers here:
Docker Compose wait for container X before starting Y
(20 answers)
Closed 3 years ago.
I am trying to run my app, which depends_on my PostgreSQL service in Docker.
Let's say my PostgreSQL database is not running yet,
and in my docker-compose.yml:
version: "3"
services:
myapp:
depends_on:
- db
container_name: myapp
build:
context: .
dockerfile: Dockerfile
restart: on-failure
ports:
- "8100:8100"
db:
container_name: postgres
restart: on-failure
image: postgres:10-alpine
ports:
- "5555:5432"
environment:
POSTGRES_USER: myuser
POSTGRES_PASSWORD: 12345678
POSTGRES_DB: dev
When I try docker-compose up -d, it creates the postgres container and then the myapp service, but it seems PostgreSQL is not ready yet: after myapp finishes installing and starts running, it says my database server is not running.
How can I make myapp wait until the db service reports that the database is actually running?
The documentation of depends_on says that:
depends_on does not wait for db to be “ready” before starting myapp - only until it has been started.
So you'll have to check that your database is ready yourself before running your app.
Docker has documentation that explains how to write a wrapper script to do that:
#!/bin/sh
# wait-for-postgres.sh
set -e
host="$1"
shift
cmd="$#"
until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "postgres" -c '\q'; do
>&2 echo "Postgres is unavailable - sleeping"
sleep 1
done
>&2 echo "Postgres is up - executing command"
exec $cmd
Then you can just call this script before running your app in your docker-compose file:
command: ["./wait-for-postgres.sh", "db", "python", "app.py"]
There are also tools such as wait-for-it, dockerize or wait-for.
However, these solutions have some limitations, and Docker says that:
The best solution is to perform this check in your application code, both at startup and whenever a connection is lost for any reason.
This method will be more resilient.
Here is how I use a retry strategy in JavaScript:
async ensureConnection () {
  let retries = 5
  const interval = 1000
  while (retries) {
    try {
      await this.utils.raw('SELECT \'ensure connection\';')
      break
    } catch (err) {
      console.error(err)
      retries--
      console.info(`retries left: ${retries}, interval: ${interval} ms`)
      if (retries === 0) {
        throw err
      }
      await new Promise(resolve => setTimeout(resolve, interval))
    }
  }
}
Please have a look at: https://docs.docker.com/compose/startup-order/.
Docker Compose won't wait for your database; you need a way to check it externally (via a script, or by retrying the connection as Mickael B. proposed). One of the solutions proposed in the above link is a wait-for.sh utility script; we used it in a project and it worked quite well.
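For reference, here is roughly how wait-for is wired into a compose service for this setup (the script path and the app start command are assumptions; check the wait-for README for exact usage):

  myapp:
    build: .
    depends_on:
      - db
    command: sh -c './wait-for db:5432 -- ./start-myapp'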

concourse git resource error: getting the final child's pid from pipe caused "EOF"

When trying to pull a git resource we are getting an error:
runc run: exit status 1: container_linux.go:345: starting container process caused "process_linux.go:303: getting the final child's pid from pipe caused \"EOF\""
We are using Oracle Linux release 7.6 and Docker version 18.03.1-ce.
We have followed the instructions on https://github.com/concourse/concourse-docker. We have tried with older versions of Concourse (4.2.0 & 4.2.3). We can see the workers are up using fly.
We found this: https://github.com/concourse/concourse/issues/4021 on GitHub, which describes a similar issue, but couldn't find the related story on Stack Overflow which the answerer had mentioned.
Our docker-compose file:
version: '3'
services:
  db:
    image: postgres
    environment:
      POSTGRES_DB: concourse
      POSTGRES_USER: concourse_user
      POSTGRES_PASSWORD: concourse_pass
  web:
    image: concourse/concourse
    command: web
    links: [db]
    depends_on: [db]
    ports: ["61111:8080"]
    volumes: ["<path to repo folder>/keys/web:/concourse-keys"]
    environment:
      CONCOURSE_EXTERNAL_URL: <our url>
      CONCOURSE_POSTGRES_HOST: db
      CONCOURSE_POSTGRES_USER: concourse_user
      CONCOURSE_POSTGRES_PASSWORD: concourse_pass
      CONCOURSE_POSTGRES_DATABASE: concourse
      CONCOURSE_ADD_LOCAL_USER: test:test
      CONCOURSE_MAIN_TEAM_LOCAL_USER: test
  worker:
    image: concourse/concourse
    command: worker
    privileged: true
    depends_on: [web]
    volumes: ["<path to repo folder>/keys/worker:/concourse-keys"]
    links: [web]
    stop_signal: SIGUSR2
    environment:
      CONCOURSE_TSA_HOST: web:2222
We expected the resource to pull, as connectivity to the repo is in place and verified.
Not sure about your second issue with volumes, but I solved the original problem by setting the user.max_user_namespaces parameter to 15000:
sysctl -w user.max_user_namespaces=15000
The solution was found here: https://github.com/docker/docker.github.io/issues/7962
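If that fixes it, you can make the setting survive reboots via the standard sysctl configuration mechanism (the file name below is just a convention):

echo 'user.max_user_namespaces=15000' | sudo tee /etc/sysctl.d/99-userns.conf
sudo sysctl --system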
This issue was fixed by updating the kernel from 3.1.x to 4.1.x. We have a new issue: failed to create volume on all our pipelines. I will update if I find a solution to this too.

ERROR: yaml.parser.ParserError: while parsing a block mapping

I'm building Iroha, for which I'm running an environment-setup script that internally calls the docker-compose.yml, where I'm getting the error:
ERROR: yaml.parser.ParserError: while parsing a block mapping
in "/home/cdac/iroha/docker/docker-compose.yml", line 3, column 5
expected <block end>, but found '<scalar>'
in "/home/cdac/iroha/docker/docker-compose.yml", line 13, column 6
The docker-compose.yml file is shown below.
services:
  node:
    image: hyperledger/iroha:develop-build
    ports:
      - "${IROHA_PORT}:50051"
      - "${DEBUGGER_PORT}:20000"
    environment:
      - IROHA_POSTGRES_HOST=${COMPOSE_PROJECT_NAME}_postgres_1
      - IROHA_POSTGRES_PORT=5432
      - IROHA_POSTGRES_USER=iroha
      - IROHA_POSTGRES_PASSWORD=helloworld
      - CCACHE_DIR=/tmp/ccache
    export G_ID=$(id -g $(whoami))
    export U_ID=$(id -g $(whoami))
    user: ${U_ID:-0}:${G_ID:-0}
    depends_on:
      - postgres
    tty: true
    volumes:
      - ../:/opt/iroha
      - ccache-data:/tmp/ccache
    working_dir: /opt/iroha
    cap_add:
      - SYS_PTRACE
    security_opt:
      - seccomp:unconfined
  postgres:
    image: postgres:9.5
    environment:
      - POSTGRES_USER=iroha
      - IROHA_POSTGRES_PASSWORD=helloworld
    command: -c 'max_prepared_transactions=100'
volumes:
  ccache-data:
Any help will be appreciated; thanks in advance.
These lines do not belong to the docker-compose syntax:
export G_ID=$(id -g $(whoami))
export U_ID=$(id -g $(whoami))
Also, this line won't work as expected:
user: ${U_ID:-0}:${G_ID:-0}
You should write your own shell script and use it as an entrypoint for the Docker container (this should be done in the Dockerfile), then run a container directly from the image you have created, without assigning a user or exporting anything within docker-compose, as the script will be executed once your container is running.
Check the following URL which contains more explanation about the allowed keywords in docker-compose: Compose File: Service Configuration Reference
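Alternatively, if the intent was simply to run the container as the host user, the exports can be done in the shell before invoking docker-compose instead of inside the YAML (a sketch; note the original snippet sets both variables with id -g, whereas the user id would normally come from id -u):

export U_ID=$(id -u "$(whoami)")
export G_ID=$(id -g "$(whoami)")
docker-compose up -d

Compose will then substitute them into user: ${U_ID:-0}:${G_ID:-0}.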
@MostafaHussein I removed the above 3 lines and then executed the run-iroha-dev.sh script, and it started to work. It attached me to /opt/iroha in the Docker container, downloaded the hyperledger/iroha:develop-build and iroha images, and launched two containers.
Is it the same as what you are suggesting?

Ansible postgresql_db task fails after a very long pause

The following Ansible task (in a Vagrant VM) fails:
- name: ensure database is created
  postgresql_db: name={{dbname}}
  sudo_user: postgres
The task pauses for a few minutes before failing.
The Vagrant VM is a CentOS 6.5.1.
The task's output is:
TASK: [postgresql | ensure database is created] *******************************
fatal: [192.168.78.6] => failed to parse:
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
[sudo via ansible, key=glxzviadepqkwddapvjheeuillbdakly] password:
FATAL: all hosts have already failed -- aborting
I have verified that Postgres is properly installed by doing vagrant ssh and connecting via psql.
I also validated that I can do a "sudo su postgres" within the VM ...
======== update
It looks like the problem is sudo_user: postgres, because removing the above Postgres task and replacing it with this one causes the same problem:
- name: say hello from postgres
  command: echo "hello"
  sudo_user: postgres
The output is exactly the same as above, so it's really a problem of Ansible doing a sudo_user on CentOS 6.5.
One interesting observation: although I can do "sudo su postgres" from inside the VM, when I call psql (as the postgres user) I get the message:
could not change directory to "/home/vagrant": Permission denied
but the psql shell still starts successfully.
======== conclusion
The problem was fixed by changing to a stock CentOS box.
Lesson learned: when using Ansible/Vagrant, only use stock OS images...
I am using wait_for on the host:
- local_action: wait_for port=22 host="{{PosgresHost}}" search_regex=OpenSSH delay=1 timeout=60
  ignore_errors: yes
PS:
I think you should use gather_facts: False and do setup after ssh is up.
Example main.yml:
---
- name: Setup
  hosts: all
  #connection: local
  user: root
  gather_facts: False
  roles:
    - main
Example roles/main/tasks/main.yml
- debug: msg="System {{ inventory_hostname }} "
- local_action: wait_for port=22 host="{{ inventory_hostname}}" search_regex=OpenSSH delay=1 timeout=60
  ignore_errors: yes
- action: setup
ansible-playbook -i 127.0.0.1, main.yml
PLAY [Setup] ******************************************************************
TASK: [main | debug msg="System {{ inventory_hostname }} "] *******************
ok: [127.0.0.1] => {
"msg": "System 127.0.0.1 "
}
TASK: [main | wait_for port=22 host="{{ inventory_hostname}}" search_regex=OpenSSH delay=1 timeout=60] ***
ok: [127.0.0.1 -> 127.0.0.1]
PLAY RECAP ********************************************************************
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0
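As an aside, on current Ansible versions sudo_user has been replaced by become_user, so the failing task would nowadays be written like this (a sketch using the module and variable from the question):

- name: ensure database is created
  postgresql_db: name={{dbname}}
  become: true
  become_user: postgres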