Run DB migrations on Cloud Build connecting to Cloud SQL using private IP

I am trying to set up DB migrations for a Node.js app on Cloud Build, connecting to Cloud SQL over a private IP via the Cloud SQL proxy.
The Cloud SQL connection always fails from Cloud Build.
Currently I am running the migrations manually from a Compute Engine instance.
I followed this SO answer to set up the build steps:
Run node.js database migrations on Google Cloud SQL during Google Cloud Build
cloudbuild.yaml
steps:
  - name: node:12-slim
    args: ["npm", "install"]
    env:
      - "NODE_ENV=${_NODE_ENV}"
  - name: alpine:3.10
    entrypoint: sh
    args:
      - -c
      - "wget -O /workspace/cloud_sql_proxy https://storage.googleapis.com/cloudsql-proxy/v1.16/cloud_sql_proxy.linux.386 && chmod +x /workspace/cloud_sql_proxy"
  - name: node:12
    timeout: 100s
    entrypoint: sh
    args:
      - -c
      - "(/workspace/cloud_sql_proxy -dir=/workspace -instances=my-project-id:asia-south1:postgres-master=tcp:5432 & sleep 3) && npm run migrate"
    env:
      - "NODE_ENV=${_NODE_ENV}"
      - "DB_NAME=${_DB_NAME}"
      - "DB_PASS=${_DB_PASS}"
      - "DB_USER=${_DB_USER}"
      - "DB_HOST=${_DB_HOST}"
      - "DB_PORT=${_DB_PORT}"
  - name: "gcr.io/cloud-builders/gcloud"
    entrypoint: "bash"
    args:
      [
        "-c",
        "gcloud secrets versions access latest --secret=backend-api-env > credentials.yaml",
      ]
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy", "--stop-previous-version", "-v", "$SHORT_SHA"]
    timeout: "600s"
Error:
KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
Step #2: at Client_PG.acquireConnection (/workspace/node_modules/knex/lib/client.js:349:26)
Cloud Build service account roles:
Cloud Build Service Account
Cloud SQL Admin
Compute Network User
Service Account User
Secret Manager Secret Accessor
Serverless VPC Access Admin
The Cloud SQL Admin API is enabled too.
Versions:
NPM libs:
  "pg": "8.0.3"
  "knex": "0.21.1"

The Cloud SQL private IP feature uses internal IP addresses hosted in a VPC network, which are only accessible from other resources within the same VPC network.
Since Cloud Build does not support VPC networks, it is not possible to connect from Cloud Build to the private IP of a Cloud SQL instance.
You might want to take a look at the official Cloud SQL documentation on this topic to choose an alternative that suits your use case.

Connecting to a public Cloud SQL instance:
I use docker-compose and the Cloud SQL proxy.
Set up docker-compose for Cloud Build (see here).
Create a service account (JSON key file).
docker-compose file:
version: '3.7'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    restart: "no"
    links:
      - database
    tty: true
    volumes:
      - app:/var/www/html
    env_file:
      - ./.env
    depends_on:
      - database
  database:
    image: gcr.io/cloudsql-docker/gce-proxy
    restart: on-failure
    command:
      - "/cloud_sql_proxy"
      - "-instances=<INSTANCE_CONNECTION_NAME>=tcp:0.0.0.0:3306"
      - "-credential_file=/config/sql_proxy.json"
    volumes:
      - ./sql_proxy.json:/config/sql_proxy.json:ro
volumes:
  app:
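(For completeness, the app container reaches the proxy via its compose service name. A sketch of the app service's connection settings, assuming MySQL on the port exposed above; the variable names are illustrative, not from the original post:)

  app:
    # ...
    environment:
      - DB_HOST=database   # the proxy's compose service name
      - DB_PORT=3306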
cloudbuild.yml
- name: 'gcr.io/$PROJECT_ID/docker-compose'
  id: Compose-build-cloudProxy
  args: ['build']
- name: 'gcr.io/$PROJECT_ID/docker-compose'
  id: Compose-up-cloudProxy
  args: ['up', '--timeout', '1', '--no-build', '-d']
- name: 'bash'
  id: Warm-up-cloudProxy
  args: ['sleep', '5s']
- name: 'gcr.io/cloud-builders/docker'
  id: Artisan-Migrate
  args: ['exec', '-i', 'workspace_app_1', 'php', 'artisan', 'migrate']
- name: 'gcr.io/$PROJECT_ID/docker-compose'
  id: Compose-down-cloudProxy
  args: ['down', '-v']
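(The Artisan-Migrate step above is Laravel-specific. For the asker's Node/knex setup, an equivalent step might look like the following sketch; the container name workspace_app_1 follows the compose project naming used above, and the npm script name is an assumption:)

- name: 'gcr.io/cloud-builders/docker'
  id: Node-Migrate
  args: ['exec', '-i', 'workspace_app_1', 'npm', 'run', 'migrate']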

I had the same issue (in my case with AlloyDB) and was able to resolve it by setting up a private worker pool in Cloud Build. I gave the worker pool access to a VPC, and that VPC has access to a serverless VPC connector that can reach AlloyDB, so my migrations there were successful.
https://cloud.google.com/build/docs/private-pools/private-pools-overview
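(For reference, once a private pool exists, pointing a build at it is a small change in cloudbuild.yaml; the identifiers below are placeholders:)

options:
  pool:
    name: 'projects/MY_PROJECT/locations/MY_REGION/workerPools/MY_POOL'

Builds submitted with this option run on workers inside your network perimeter, so they can reach private IPs that the attached VPC can reach.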

Related

ECS Fargate application container cannot establish connection with Postgres database container

I am trying to use ecs-cli to push a two-container docker-compose file up to ECS Fargate. This is for a preview environment only. The first container is postgres:12 and the second is hasura/graphql-engine:v1.3.3.
The docker-compose.yml looks like the following
version: '3'
services:
  postgres:
    image: postgres:12
    ports:
      - "5432:5432"
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: postgrespassword
    logging:
      driver: awslogs
      options:
        awslogs-group: tutorial
        awslogs-region: us-east-1
        awslogs-stream-prefix: postgres
  graphql-engine:
    image: hasura/graphql-engine:v1.3.3
    ports:
      - "80:80"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@127.0.0.1:5432/postgres
      ## enable the console served by server
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set to "false" to disable console
      ## enable debugging mode. It is recommended to disable this in production
      HASURA_GRAPHQL_DEV_MODE: "true"
      HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log
      ## uncomment next line to set an admin secret
      # HASURA_GRAPHQL_ADMIN_SECRET: myadminsecretkey
    logging:
      driver: awslogs
      options:
        awslogs-group: tutorial
        awslogs-region: us-east-1
        awslogs-stream-prefix: hasura
volumes:
  db_data:
The ecs-params.yml looks like the following
version: 1
task_definition:
  ecs_network_mode: awsvpc
  task_role_arn: "arn:aws:iam::***:role/ecsTaskExecutionRole"
  task_execution_role: "arn:aws:iam::***:role/ecsTaskExecutionRole"
  task_size:
    cpu_limit: "256"
    mem_limit: "512"
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - "subnet-***"
        - "subnet-***"
      security_groups:
        - "sg-***"
      assign_public_ip: "ENABLED"
I am using the following command line call to trigger the push
ecs-cli compose --file docker-compose.yml --ecs-params ecs-params.yml --debug service up --deployment-max-percent 100 --deployment-min-healthy-percent 0 --region us-east-1 --cluster "{ARN CLUSTER VALUE}" --create-log-groups --launch-type "FARGATE"
In ECS I can see the new service created and its single Fargate task spinning up. If I open the task, the containers move from PENDING -> RUNNING. After some time, the application container moves to STOPPED, and eventually the database container moves to STOPPED as well. Once this happens, the task stops and a new task goes through the same cycle.
Here is the log for the application container
Here is the log for the database container
In the docker-compose I have tried changing the environment variable for the PG database connection string to both postgres://postgres:postgrespassword@127.0.0.1:5432/postgres and postgres://postgres:postgrespassword@localhost:5432/postgres; both result in the same issue.
Any idea what might be going on here? This is inspired by this article: https://dev.to/raphaelmansuy/10-minutes-to-deploy-a-docker-compose-stack-on-aws-illustrated-with-hasura-and-postgres-3f6e
The only difference is that the article uses EC2, not Fargate.
Try adding
  links:
    - postgres
to your graphql-engine service instead of depends_on, which doesn't seem to work with AWS ECS.
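(A sketch of the graphql-engine service with that change applied, connection string as in the question with the @ restored; in Fargate's awsvpc mode the containers in a task share localhost anyway:)

  graphql-engine:
    image: hasura/graphql-engine:v1.3.3
    ports:
      - "80:80"
    links:
      - postgres
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@localhost:5432/postgres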

Start database container with mounted config file in Github actions workflow

I'd like to run some integration tests against a real database, but I fail to start an additional container (for the DB), because I need to mount a config file from my repo before it starts up.
This is how I use the database on my local computer (docker-compose):
gremlin-server:
  image: tinkerpop/gremlin-server:3.5
  container_name: 'gremlin-server'
  entrypoint: ./bin/gremlin-server.sh conf/gremlin-server-config.yaml
  networks:
    - graphdb_net
  ports:
    - 8182:8182
  volumes:
    - ./conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml
    - ./conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-empty.properties
I guess I cannot use a service container, as the code is not available at the time the service container is started, so it won't pick up my configuration.
That's why I tried to run a container within my job using --network host (see below). The container seems to be running fine, but I'm still not able to curl it.
- name: Start DB for tests
  run: |
    docker run -d \
      --network host \
      -v ${{ github.workspace }}/dev/conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml \
      -v ${{ github.workspace }}/dev/conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-empty.properties \
      tinkerpop/gremlin-server:3.5
- name: Test connection
  run: |
    curl "localhost:8182/gremlin?gremlin=g.V().valueMap()"
According to the documentation about the job context, the ID of the container network should be available as ${{ job.container.network }}, but it is empty if you don't use any job-level service or container.
Any ideas what I could try next?
This is what I ended up with: I'm now using docker-compose to run the integration tests (on my local computer as well as on GitHub Actions), and I'm just mounting the entire directory/repo into the test container. Pulling node:14-slim delays the build by a few seconds, but I guess it's still the best option:
version: "3.2"
services:
gremlin-server:
image: tinkerpop/gremlin-server:3.5
container_name: 'gremlin-server'
entrypoint: ./bin/gremlin-server.sh conf/gremlin-server-config.yaml
networks:
- graphdb_net
ports:
- 8182:8182
volumes:
- ./data/:/opt/gremlin-server/data/
- ./conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml
- ./conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-empty.properties
- ./conf/initData.groovy:/opt/gremlin-server/scripts/initData.groovy
test:
image: node:14-slim
working_dir: /app
depends_on:
- gremlin-server
networks:
- graphdb_net
volumes:
- ../:/app
environment:
- NEPTUNE_CONNECTION_STRING=ws://gremlin-server:8182
command:
yarn test
networks:
graphdb_net:
driver: bridge
and I'm running them like this in my workflow:
- name: Spin up test environment
  run: |
    docker compose -f dev/docker-compose.yaml pull
    docker compose -f dev/docker-compose.yaml build
- name: Run tests
  run: |
    docker compose -f dev/docker-compose.yaml run test
It's based on @DannyB's suggestion and his answer here, so all props go to him.

docker-compose sonarqube and PostgreSQL in Azure App Service

I am trying to make a docker-compose file that includes SonarQube and a PostgreSQL database, and deploy it to Azure App Service.
Below is the docker-compose file:
version: '3.3'
services:
  db:
    image: postgres
    volumes:
      - postgresql:/var/lib/postgresql
      - postgresql_data:/var/lib/postgresql/data
    restart: always
    environment:
      POSTGRES_USER: sonar
      POSTGRES_PASSWORD: sonar
  sonarqube:
    depends_on:
      - db
    image: sonarqube
    ports:
      - "9000:9000"
    volumes:
      - sonarqube_conf:/opt/sonarqube/conf
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
    restart: always
    environment:
      SONARQUBE_JDBC_URL: jdbc:postgresql://db:5432/sonar
      SONARQUBE_JDBC_USERNAME: sonar
      SONARQUBE_JDBC_PASSWORD: sonar
volumes:
  postgresql:
  postgresql_data:
  sonarqube_conf:
  sonarqube_data:
  sonarqube_extensions:
  sonarqube_bundled-plugins:
On my local machine everything works as expected and I can access SonarQube. However, once I try to apply the docker-compose file in Azure App Service, I get the following entries in the log:
I tried to check whether I can increase vm.max_map_count in App Service, but I didn't find a way to do so.
How can I resolve this issue? And is there at least a way to bypass this bootstrap check of vm.max_map_count?
It's not possible to increase vm.max_map_count in Azure App Service. You can bypass this bootstrap check by adding the following line to the environment section of the SonarQube service:
  SONAR_ES_BOOTSTRAP_CHECKS_DISABLE: 'true'
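(Applied to the compose file above, the sonarqube service's environment block would then look roughly like this:)

  sonarqube:
    # ... image, ports and volumes as above ...
    environment:
      SONARQUBE_JDBC_URL: jdbc:postgresql://db:5432/sonar
      SONARQUBE_JDBC_USERNAME: sonar
      SONARQUBE_JDBC_PASSWORD: sonar
      SONAR_ES_BOOTSTRAP_CHECKS_DISABLE: 'true'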

Docker Compose - Service name not working as environment variable

I'm trying to use docker-compose to set up a Spring Cloud project.
I'm using Spring Cloud Config, so I have a config server and some services.
For now, I have three services in my docker-compose.yml:
version: '3'
services:
  db:
    image: mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 'root' # TODO: Change this
      MYSQL_USER: 'user'
      MYSQL_PASS: 'password'
      MYSQL_ROOT_HOST: '%'
    volumes:
      - "db:/opt/mysql/docker:rw"
    ports:
      - "3307:3306"
  config:
    image: config-server
    restart: always
    depends_on:
      - db
    environment:
      - SPRING_DATASOURCE_URL=jdbc:mysql://db:3306/ec_settings?createDatabaseIfNotExist=true&useUnicode=true&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC
    ports:
      - "8100:8100"
  gateway:
    image: gateway
    restart: always
    depends_on:
      - config
    environment:
      - CONFIG_URI=http://config:8100
    ports:
      - '8081:8080'
volumes:
  db: {}
In the gateway microservice's bootstrap.yml, I have this setting:
spring:
  cloud:
    config:
      uri: ${CONFIG_URI}
When I bring up docker-compose, I see that the gateway service is trying to fetch configuration from http://config:8100:
Fetching config from server at : http://config:8100
So the variable is passed to Spring Boot, but docker-compose does not seem to resolve the service name to its actual container.
The very strange thing is that the SPRING_DATASOURCE_URL environment variable is resolved correctly in the config service to connect to the db service.
I finally solved it thanks to this link: Docker - SpringConfig - Connection refused to ConfigServer
The problem was that the service was trying to fetch the config URL too early.
The solution is to put these settings in bootstrap.yml:
spring:
  cloud:
    config:
      fail-fast: true
      retry:
        max-attempts: 20
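(Note that, as far as I recall, fail-fast with retry also requires spring-retry and spring-boot-starter-aop on the classpath; without them the retry settings are ignored.)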

Get docker-compose up to only run certain containers

So I currently can use docker-compose up test, which only runs my database and my testing scripts. I want to be able to use, say, docker-compose up app, or something like that, that runs everything besides testing, so I'm not running unnecessary containers. I'm not sure if there's a way, but that's what I was wondering. If possible, I'd appreciate some links to setups that already do that, and I can figure out the rest. Basically: can I run only certain containers with a single command, without running the others?
YAML:
version: '3'
services:
  webapp:
    build: ./literate-app
    command: nodemon -e vue,js,css start.js
    depends_on:
      - postgres
    links:
      - postgres
    environment:
      - DB_HOST=postgres
    ports:
      - "3000:3000"
    networks:
      - literate-net
  server:
    build: ./readability-server
    command: nodemon -L --inspect=0.0.0.0:5555 server.js
    networks:
      - literate-net
  redis_db:
    image: redis:alpine
    networks:
      - literate-net
  postgres:
    restart: 'always'
    #image: 'bitnami/postgresql:latest'
    volumes:
      - /bitnami
    ports:
      - "5432:5432"
    networks:
      - literate-net
    environment:
      - "FILLA_DB_USER=my_user"
      - "FILLA_DB_PASSWORD=password123"
      - "FILLA_DB_DATABASE=my_database"
      - "POSTGRES_PASSWORD=password123"
    build: './database-creation'
  test:
    image: node:latest
    build: ./test
    working_dir: /literate-app/test
    volumes:
      - .:/literate-app
    command:
      npm run mocha
    networks:
      - literate-net
    depends_on:
      - postgres
    environment:
      - DB_HOST=postgres
networks:
  literate-net:
    driver: bridge
I can run docker-compose up test, which only runs postgres (and the tests). But I'd like to be able to run just my app without having to run my testing container.
Edit
Thanks to @ideam for the link, I was able to create an additional YAML file for just testing.
For those that don't want to look it up, simply create a new YAML file like so:
docker-compose.dev.yml
Replace dev with whatever you like, besides override, which docker-compose up picks up automatically unless otherwise specified.
To run the new file, simply call
docker-compose -f docker-compose.dev.yml up
The -f flag selects a certain file to run. You can run multiple files to have different environments set up.
Appreciate the help
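(A minimal sketch of what such a file might contain for this project, reusing the postgres and test definitions from the compose file above, trimmed to the essentials:)

# docker-compose.dev.yml — test-only stack
version: '3'
services:
  postgres:
    build: './database-creation'
    environment:
      - "POSTGRES_PASSWORD=password123"
  test:
    build: ./test
    working_dir: /literate-app/test
    volumes:
      - .:/literate-app
    command: npm run mocha
    depends_on:
      - postgres
    environment:
      - DB_HOST=postgres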
docker-compose up <service_name> will start only the service you have specified and its dependencies (those specified in the depends_on option).
You may also specify multiple services in the docker-compose up command:
docker-compose up <service_name> <service_name>
Note: what does "start the service and its dependencies" mean?
Usually your production services (containers) are attached to each other via the depends_on chain, so you can start only the last container of the chain. For example, take the following compose file:
version: '3.7'
services:
  frontend:
    image: efrat19/vuejs
    ports:
      - "80:8080"
    depends_on:
      - backend
  backend:
    image: nginx:alpine
    depends_on:
      - fpm
  fpm:
    image: php:7.2
  testing:
    image: hze∂ƒxhbd
    depends_on:
      - frontend
All the services are chained via the depends_on option, while the testing container sits below the frontend. So when you hit docker-compose up frontend, Docker will run fpm first, then backend, then frontend, and it will ignore the testing container, which is not required for running the frontend.
Starting with docker-compose 1.28.0, the new service profiles are made for exactly this! With profiles you can mark services to be started only in specific profiles:
services:
  webapp:
    # ...
  server:
    # ...
  redis_db:
    # ...
  postgres:
    # ...
  test:
    profiles: ["test"]
    # ...
docker-compose up # start only your app services
docker-compose --profile test up # start app and test services
docker-compose run test # run test service
Maybe you want to share your docker-compose.yml for a better answer than this.
For reusing docker-compose configurations, have a look at https://docs.docker.com/compose/extends/#example-use-case, which explains combining multiple configuration files to reuse configs for different use cases (test, production, etc.).
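(As a sketch of that pattern: keep only the app services in docker-compose.yml and put the test service in an overlay file; the file name docker-compose.test.yml is an assumption:)

# docker-compose.test.yml — layered on top of the base file
version: '3'
services:
  test:
    build: ./test
    depends_on:
      - postgres

Then docker-compose up starts only the base services, while docker-compose -f docker-compose.yml -f docker-compose.test.yml up merges both files and starts the tests as well.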