What Postgres connection string for an AWS Docker multicontainer app?

What Postgres connection string should I use if I run a Docker multicontainer app via Dockerrun.aws.json on AWS?
I have a Node.js/Postgres/Docker web application. Postgres runs in its own container and so does the app. Locally, the app runs fine. When I deploy it to AWS via ECR and Beanstalk, the application successfully deploys and runs, but the web app doesn't connect to Postgres.
In docker-compose.yaml, the host in the connection string is the name of the container (in my case, db). That doesn't work on AWS, and neither does localhost or 127.0.0.1.
Here is my Dockerrun.aws.json:
{
"AWSEBDockerrunVersion": 2,
"volumes": [
{
"name": "db-data",
"host": {
"sourcePath": "/data/db"
}
}
],
"containerDefinitions": [
{
"name": "db",
"image": "db_image_name",
"essential": true,
"memory": 128,
"environment": [
{
"name": "POSTGRES_USER",
"value": "postgres"
},
{
"name": "POSTGRES_PASSWORD",
"value": "password"
},
{
"name": "PGDATA",
"value": "/data/db/pgdata"
}
],
"portMappings": [
{
"hostPort": 5432,
"containerPort": 5432
}
],
"mountPoints": [
{
"sourceVolume": "db-data",
"containerPath": "/data/db"
}
]
},
{
"name": "app",
"image": "app_image_name",
"essential": true,
"memory": 128,
"environment": [
{
"name": "NODE_ENV",
"value": "production"
},
{
"name": "DB_HOST",
"value": "db"
},
{
"name": "DB_PORT",
"value": "5432"
},
{
"name": "DB_PASSWORD",
"value": "password"
}
],
"links": [
"db"
],
"portMappings": [
{
"hostPort": 80,
"containerPort": 3000
}
]
}
]
}
What should I put in the environment variable DB_HOST? (The app uses DB_HOST together with DB_PORT to construct the connection string.) Thanks a lot.

Beanstalk's Dockerrun.aws.json format lets you link containers defined in the same file with the following syntax:
"links": ["some-name"]
In your case, add a link to "db" and Postgres will be reachable from your app container under the hostname db. You don't even need to map container ports unless you want to expose your Postgres container to the world.
You can see an example in use in following docs: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html#create_deploy_docker_v2config_dockerrun
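As a minimal sketch of the app side, assuming the DB_HOST, DB_PORT, and DB_PASSWORD variables from the question (DB_USER and DB_NAME here are hypothetical additions, not from the original task definition), the connection string can be assembled like this:

```javascript
// Build a Postgres connection string from the container environment.
// With "links": ["db"], the hostname "db" resolves inside the app container.
function buildConnectionString(env) {
  const host = env.DB_HOST || 'db';        // container name from "links"
  const port = env.DB_PORT || '5432';
  const user = env.DB_USER || 'postgres';  // hypothetical variable
  const password = env.DB_PASSWORD || '';
  const database = env.DB_NAME || 'postgres'; // hypothetical variable
  return `postgres://${user}:${encodeURIComponent(password)}@${host}:${port}/${database}`;
}

console.log(buildConnectionString(process.env));
```

The string can then be passed to whatever Postgres client the app uses.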

Related

Database persistence in ECS Fargate

We are running a MongoDB container in ECS Fargate. The ECS service starts up fine and the database is accessible.
But how do I make it persistent, given that container storage is ephemeral? I have tried to mount an EFS file system following the AWS documentation. The service gets created and the task runs fine, but I am unable to access the database anymore; I cannot even log in.
As far as EFS goes, I have tried both variants: unencrypted as well as encrypted.
My task definition for MongoDB is as follows (I have removed the part where the username and password parameters are passed):
{
"taskDefinitionArn": "arn:aws:ecs:us-east-2:1234567890:task-definition/mongo_efs_test_1215:2",
"containerDefinitions": [
{
"name": "mongo_efs_container_1215",
"image": "public.ecr.aws/docker/library/mongo:latest",
"cpu": 0,
"portMappings": [
{
"containerPort": 27017,
"hostPort": 27017,
"protocol": "tcp"
}
],
"essential": true,
"entryPoint": [],
"command": [],
"environment": [],
"mountPoints": [
{
"sourceVolume": "efs-disk",
"containerPath": "/data/db"
}
],
"volumesFrom": [],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/mongo_efs_test_1215",
"awslogs-region": "us-east-2",
"awslogs-stream-prefix": "ecs"
}
}
}
],
"family": "mongo_efs_test_1215",
"taskRoleArn": "arn:aws:iam::1234567890:role/ecsTaskExecutionRole",
"executionRoleArn": "arn:aws:iam::1234567890:role/ecsTaskExecutionRole",
"networkMode": "awsvpc",
"revision": 2,
"volumes": [
{
"name": "efs-disk",
"efsVolumeConfiguration": {
"fileSystemId": "fs-dvveweo837af981fa",
"rootDirectory": "/",
"transitEncryption": "DISABLED",
"authorizationConfig": {
"iam": "DISABLED"
}
}
}
],
"status": "ACTIVE",
"requiresAttributes": [
{
"name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
},
{
"name": "ecs.capability.execution-role-awslogs"
},
{
"name": "ecs.capability.efsAuth"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
},
{
"name": "ecs.capability.efs"
},
{
"name": "com.amazonaws.ecs.capability.task-iam-role"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.25"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
},
{
"name": "ecs.capability.task-eni"
}
],
"placementConstraints": [],
"compatibilities": [
"EC2",
"FARGATE"
],
"requiresCompatibilities": [
"FARGATE"
],
"cpu": "256",
"memory": "512",
"runtimePlatform": {
"operatingSystemFamily": "LINUX"
},
"registeredAt": "2022-12-15T12:06:34.477Z",
"registeredBy": "arn:aws:iam::1234567890:root",
"tags": []
}
Can someone please guide me? I have been struggling with this for a really long time. Any help is appreciated.
Thank you.
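One hedged pointer, not from the original thread: for EFS on Fargate, the AWS docs generally recommend enabling transit encryption and authorizing access through an EFS access point (which can also map a POSIX owner for the mount). A sketch of how the volume block above might look with that configuration (the access point ID is a placeholder):

```json
"volumes": [
  {
    "name": "efs-disk",
    "efsVolumeConfiguration": {
      "fileSystemId": "fs-dvveweo837af981fa",
      "transitEncryption": "ENABLED",
      "authorizationConfig": {
        "accessPointId": "fsap-EXAMPLE",
        "iam": "ENABLED"
      }
    }
  }
]
```

Note that when an access point is used, the root directory is taken from the access point itself, and the task's security group must allow NFS (port 2049) traffic to the EFS mount targets.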

Can't write to bind mount on ECS Fargate when using non-root user

I'm using ECS with Fargate and trying to create a bind mount on ephemeral storage, but my user (UID 1000) is unable to write to the volume.
According to the documentation, this should be possible.
However, the documentation mentions:
By default, the volume permissions are set to 0755 and the owner as root. These permissions can be customized in the Dockerfile.
So in my Dockerfile I have:
ARG PHP_VERSION=8.1.2-fpm-alpine3.15
FROM php:$PHP_VERSION as php_base
ENV APP_USER=app
ENV APP_USER_HOME=/home/app
ENV APP_USER_UID=1000
ENV APP_USER_GID=1000
ENV APP_HOME=/srv/app
# create the app user
RUN set -eux; \
addgroup -g $APP_USER_GID -S $APP_USER; \
adduser -S -D -h "$APP_USER_HOME" -u $APP_USER_UID -s /sbin/nologin -G $APP_USER -g $APP_USER $APP_USER
RUN set -eux; \
mkdir -p /var/run/php; \
chown -R ${APP_USER}:${APP_USER} /var/run/php; \
# TODO THIS IS A TEST
chmod 777 /var/run/php
# ...
FROM php_base as php_prod
# ...
VOLUME ["/var/run/php"]
USER $APP_USER
WORKDIR "${APP_HOME}"
ENTRYPOINT ["/usr/local/bin/docker-php-entrypoint"]
CMD ["php-fpm"]
And in my task definition I have:
{
"taskDefinitionArn": "arn:aws:ecs:us-east-1:999999999999:task-definition/app:2",
"containerDefinitions": [
{
"name": "app-php",
"image": "999999999999.dkr.ecr.us-east-1.amazonaws.com/php:latest",
"cpu": 0,
"portMappings": [],
"essential": true,
"environment": [
{
"name": "DATABASE_PORT",
"value": "3306"
},
{
"name": "DATABASE_USERNAME",
"value": "app"
},
{
"name": "DATABASE_NAME",
"value": "app"
},
{
"name": "DATABASE_HOST",
"value": "db.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com"
}
],
"mountPoints": [
{
"sourceVolume": "php_socket",
"containerPath": "/var/run/php",
"readOnly": false
}
],
"volumesFrom": [],
"secrets": [
{
"name": "DATABASE_PASSWORD",
"valueFrom": "arn:aws:ssm:us-east-1:999999999999:parameter/db-password"
}
],
"readonlyRootFilesystem": false,
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "app",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "app"
}
},
"healthCheck": {
"command": [
"docker-healthcheck"
],
"interval": 10,
"timeout": 3,
"retries": 3,
"startPeriod": 15
}
},
{
"name": "app-proxy",
"image": "999999999999.dkr.ecr.us-east-1.amazonaws.com/proxy:latest",
"cpu": 0,
"portMappings": [
{
"containerPort": 80,
"hostPort": 80,
"protocol": "tcp"
}
],
"essential": true,
"environment": [],
"mountPoints": [
{
"sourceVolume": "php_socket",
"containerPath": "/var/run/php",
"readOnly": false
}
],
"volumesFrom": [],
"dependsOn": [
{
"containerName": "app-php",
"condition": "HEALTHY"
}
],
"readonlyRootFilesystem": false,
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "app",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "app"
}
},
"healthCheck": {
"command": [
"curl",
"-s",
"localhost/status-nginx"
],
"interval": 10,
"timeout": 3,
"retries": 3,
"startPeriod": 15
}
}
],
"family": "bnc-stage-remises-app",
"taskRoleArn": "arn:aws:iam::999999999999:role/app-task",
"executionRoleArn": "arn:aws:iam::999999999999:role/app-exec",
"networkMode": "awsvpc",
"revision": 2,
"volumes": [
{
"name": "php_socket",
"host": {}
}
],
"status": "ACTIVE",
"requiresAttributes": [
{
"name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
},
{
"name": "ecs.capability.execution-role-awslogs"
},
{
"name": "com.amazonaws.ecs.capability.ecr-auth"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
},
{
"name": "com.amazonaws.ecs.capability.task-iam-role"
},
{
"name": "ecs.capability.container-health-check"
},
{
"name": "ecs.capability.container-ordering"
},
{
"name": "ecs.capability.execution-role-ecr-pull"
},
{
"name": "ecs.capability.secrets.ssm.environment-variables"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
},
{
"name": "ecs.capability.task-eni"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.29"
}
],
"placementConstraints": [],
"compatibilities": [
"EC2",
"FARGATE"
],
"requiresCompatibilities": [
"FARGATE"
],
"cpu": "256",
"memory": "2048",
"registeredAt": "2022-02-15T15:54:47.452Z",
"registeredBy": "arn:aws:sts::999999999999:assumed-role/OrganizationAccountAccessRole/9999999999999999999",
"tags": [
{
"key": "Project",
"value": "project-name"
},
{
"key": "Environment",
"value": "stage"
},
{
"key": "ManagedBy",
"value": "Terraform"
},
{
"key": "Client",
"value": "ClientName"
},
{
"key": "Namespace",
"value": "client-name"
},
{
"key": "Name",
"value": "app"
}
]
}
However, in ECS I keep getting:
2022-02-15T20:36:14.679Z [15-Feb-2022 20:36:14] ERROR: unable to bind listening socket for address '/var/run/php/php-fpm.sock': Permission denied (13) app-php
2022-02-15T20:36:14.679Z [15-Feb-2022 20:36:14] ERROR: FPM initialization failed app-php
It turns out /var/run is a symlink to /run in my container, and ECS wasn't able to handle this. I changed my setup to use /run/php instead of /var/run/php and everything works perfectly.
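In task-definition terms, the fix described above amounts to pointing both containers' mount points at the real directory rather than the symlinked path (php-fpm's listen directive must also reference /run/php/php-fpm.sock):

```json
"mountPoints": [
  {
    "sourceVolume": "php_socket",
    "containerPath": "/run/php",
    "readOnly": false
  }
]
```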

Create user on Mongo container launch using Dockerrun.aws.json on Elastic Beanstalk

I am trying to deploy a multi-container application on Elastic Beanstalk.
The application is composed of three services:
FastAPI
Mongodb
Redis
My docker-compose file looks like this:
version: "3"
services:
web:
image: pushpvashisht/web:latest
environment:
- MONGO_HOST=${MONGO_HOST}
- REDIS_HOST=${REDIS_HOST}
depends_on:
- mongo
- redis
ports:
- "8000:8000"
volumes:
- /mnt2/web-logs/:/web-logs/
mongo:
image: mongo:latest
environment:
- MONGO_INITDB_ROOT_USERNAME=${MONGO_INITDB_ROOT_USERNAME}
- MONGO_INITDB_ROOT_PASSWORD=${MONGO_INITDB_ROOT_PASSWORD}
- MONGO_INITDB_DATABASE=example
ports:
- "27017:27017"
volumes:
- /mnt2/mongo_data:/data/db/
- ./src/helper_scripts/mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
redis:
image: redis:latest
ports:
- "6379:6379"
volumes:
- /mnt2/redis_data:/data
The volume (./src/helper_scripts/mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro) mounted into the mongo service is used to create a user for the 'example' database.
mongo-init.js:
db.createUser(
{
user: "admin",
pwd: "admin",
roles: [
{
role: "readWrite",
db: "example"
}
]
}
);
How do I translate this docker-compose file into a Dockerrun.aws.json file to deploy the application on Elastic Beanstalk?
Referring to the docs, I have written this much:
{
"AWSEBDockerrunVersion": 2,
"containerDefinitions": [
{
"name": "web",
"image": "pushpvashisht/web:latest",
"essential": true,
"environment": [
{
"name": "MONGO_HOST",
"value": "${MONGO_HOST}"
},
{
"name": "REDIS_HOST",
"value": "${REDIS_HOST}"
}
],
"links": [
"mongo",
"redis"
],
"portMappings": [
{
"containerPort": 8000,
"hostPort": 8000
}
],
"mountPoints": [
{
"containerPath": "/web-logs/",
"sourceVolume": "Mnt2Web-Logs"
}
],
"memory": 128
},
{
"name": "mongo",
"image": "mongo:latest",
"essential": true,
"environment": [
{
"name": "MONGO_INITDB_ROOT_USERNAME",
"value": "${MONGO_INITDB_ROOT_USERNAME}"
},
{
"name": "MONGO_INITDB_ROOT_PASSWORD",
"value": "${MONGO_INITDB_ROOT_PASSWORD}"
},
{
"name": "MONGO_INITDB_DATABASE",
"value": "example"
}
],
"portMappings": [
{
"containerPort": 27017,
"hostPort": 27017
}
],
"mountPoints": [
{
"containerPath": "/data/db/",
"sourceVolume": "Mnt2Mongo_Data"
},
{
"containerPath": "/docker-entrypoint-initdb.d/mongo-init.js",
"sourceVolume": "_SrcHelper_ScriptsMongo-Init_Js"
}
],
"memory": 128
},
{
"name": "redis",
"image": "redis:latest",
"essential": true,
"portMappings": [
{
"containerPort": 6379,
"hostPort": 6379
}
],
"mountPoints": [
{
"containerPath": "/data",
"sourceVolume": "Mnt2Redis_Data"
}
],
"memory": 256
}
],
"volumes": [
{
"host": {
"sourcePath": "/mnt2/web-logs/"
},
"name": "Mnt2Web-Logs"
},
{
"host": {
"sourcePath": "/mnt2/mongo_data"
},
"name": "Mnt2Mongo_Data"
},
{
"host": {
"sourcePath": "./src/helper_scripts/mongo-init.js"
},
"name": "_SrcHelper_ScriptsMongo-Init_Js"
},
{
"host": {
"sourcePath": "/mnt2/redis_data"
},
"name": "Mnt2Redis_Data"
}
]
}
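One detail worth flagging in the attempt above: host sourcePath values in Dockerrun.aws.json are paths on the EC2 host, so a relative path like ./src/helper_scripts/mongo-init.js will not resolve to a file in the application bundle. On Elastic Beanstalk the extracted bundle lives under /var/app/current, so (assuming the script is included in the bundle at that relative location) the init-script volume might be sketched as:

```json
{
  "host": {
    "sourcePath": "/var/app/current/src/helper_scripts/mongo-init.js"
  },
  "name": "_SrcHelper_ScriptsMongo-Init_Js"
}
```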

Push data from Bitbucket pipeline to ECS as volume

I'm trying to set up a Bitbucket pipeline to deploy a PHP application. The application itself runs in separate containers for nginx and php-fpm, so both need access to the application source directory, similar to this snippet:
{
"AWSEBDockerrunVersion": 2,
"volumes": [
{
"name": "php-app",
"host": {
"sourcePath": "/var/app/current/php-app"
}
},
{
"name": "nginx-proxy-conf",
"host": {
"sourcePath": "/var/app/current/proxy/conf.d"
}
}
],
"containerDefinitions": [
{
"name": "php-app",
"image": "php:fpm",
"essential": true,
"memory": 128,
"mountPoints": [
{
"sourceVolume": "php-app",
"containerPath": "/var/www/html",
"readOnly": true
}
]
},
{
"name": "nginx-proxy",
"image": "nginx",
"essential": true,
"memory": 128,
"portMappings": [
{
"hostPort": 80,
"containerPort": 80
}
],
"links": [
"php-app"
],
"mountPoints": [
{
"sourceVolume": "php-app",
"containerPath": "/var/www/html",
"readOnly": true
},
{
"sourceVolume": "nginx-proxy-conf",
"containerPath": "/etc/nginx/conf.d",
"readOnly": true
},
{
"sourceVolume": "awseb-logs-nginx-proxy",
"containerPath": "/var/log/nginx"
}
]
}
]
}
(source: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_ecstutorial.html )
For this reason I don't want to embed the application source within the images (I could COPY it into both the nginx and php-fpm images, but that wouldn't be nice); I would rather have the source stored in a volume.
The question is: how can I push the application after it's built (I have a custom Bitbucket agent with Composer and so on) to ECR so that I can use it in a task definition?
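For reference, the standard way to get an image into ECR from a CI agent looks like this (the account ID, region, and repository name are placeholders; this is a sketch, not the original poster's pipeline):

```shell
# Authenticate the Docker CLI against ECR (account/region are placeholders)
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Build an image with the built application source baked in, tag it, and push
docker build -t my-app .
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
```

ECR only stores images, not volumes, so sharing built source between containers this way means baking it into an image that both containers mount from or copy out of.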

Error mounting volumes on OpenShift (Next Gen)

I'm testing the new OpenShift platform based on Docker and Kubernetes.
I've created a new project from scratch; when I try to deploy a simple MongoDB service (as well as a Python app), I get the following errors in the Monitoring section of the web console:
Unable to mount volumes for pod "mongodb-1-sfg8t_rob1(e9e53040-ab59-11e6-a64c-0e3d364e19a5)": timeout expired waiting for volumes to attach/mount for pod "mongodb-1-sfg8t"/"rob1". list of unattached/unmounted volumes=[mongodb-data]
Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "mongodb-1-sfg8t"/"rob1". list of unattached/unmounted volumes=[mongodb-data]
It seems to be a problem mounting the PVC in the container; however, the PVC is correctly created and bound:
oc get pvc
Returns:
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
mongodb-data Bound pv-aws-9dged 1Gi RWO 29m
I've deployed it with the following commands:
oc process -f openshift/templates/mongodb.json | oc create -f -
oc deploy mongodb --latest
The content of the template that I used is:
{
"kind": "Template",
"apiVersion": "v1",
"metadata": {
"name": "mongo-example",
"annotations": {
"openshift.io/display-name": "Mongo example",
"tags": "quickstart,mongo"
}
},
"labels": {
"template": "mongo-example"
},
"message": "The following service(s) have been created in your project: ${NAME}.",
"objects": [
{
"kind": "PersistentVolumeClaim",
"apiVersion": "v1",
"metadata": {
"name": "${DATABASE_DATA_VOLUME}"
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "${DB_VOLUME_CAPACITY}"
}
}
}
},
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "${DATABASE_SERVICE_NAME}",
"annotations": {
"description": "Exposes the database server"
}
},
"spec": {
"ports": [
{
"name": "mongodb",
"port": 27017,
"targetPort": 27017
}
],
"selector": {
"name": "${DATABASE_SERVICE_NAME}"
}
}
},
{
"kind": "DeploymentConfig",
"apiVersion": "v1",
"metadata": {
"name": "${DATABASE_SERVICE_NAME}",
"annotations": {
"description": "Defines how to deploy the database"
}
},
"spec": {
"strategy": {
"type": "Recreate"
},
"triggers": [
{
"type": "ImageChange",
"imageChangeParams": {
"automatic": true,
"containerNames": [
"mymongodb"
],
"from": {
"kind": "ImageStreamTag",
"namespace": "",
"name": "mongo:latest"
}
}
},
{
"type": "ConfigChange"
}
],
"replicas": 1,
"selector": {
"name": "${DATABASE_SERVICE_NAME}"
},
"template": {
"metadata": {
"name": "${DATABASE_SERVICE_NAME}",
"labels": {
"name": "${DATABASE_SERVICE_NAME}"
}
},
"spec": {
"volumes": [
{
"name": "${DATABASE_DATA_VOLUME}",
"persistentVolumeClaim": {
"claimName": "${DATABASE_DATA_VOLUME}"
}
}
],
"containers": [
{
"name": "mymongodb",
"image": "mongo:latest",
"ports": [
{
"containerPort": 27017
}
],
"env": [
{
"name": "MONGODB_USER",
"value": "${DATABASE_USER}"
},
{
"name": "MONGODB_PASSWORD",
"value": "${DATABASE_PASSWORD}"
},
{
"name": "MONGODB_DATABASE",
"value": "${DATABASE_NAME}"
}
],
"volumeMounts": [
{
"name": "${DATABASE_DATA_VOLUME}",
"mountPath": "/data/db"
}
],
"readinessProbe": {
"timeoutSeconds": 1,
"initialDelaySeconds": 5,
"exec": {
"command": [ "/bin/bash", "-c", "mongo --eval 'db.getName()'"]
}
},
"livenessProbe": {
"timeoutSeconds": 1,
"initialDelaySeconds": 30,
"tcpSocket": {
"port": 27017
}
},
"resources": {
"limits": {
"memory": "${MEMORY_MONGODB_LIMIT}"
}
}
}
]
}
}
}
}
],
"parameters": [
{
"name": "NAME",
"displayName": "Name",
"description": "The name",
"required": true,
"value": "mongo-example"
},
{
"name": "MEMORY_MONGODB_LIMIT",
"displayName": "Memory Limit (MONGODB)",
"required": true,
"description": "Maximum amount of memory the MONGODB container can use.",
"value": "512Mi"
},
{
"name": "DB_VOLUME_CAPACITY",
"displayName": "Volume Capacity",
"description": "Volume space available for data, e.g. 512Mi, 2Gi",
"value": "512Mi",
"required": true
},
{
"name": "DATABASE_DATA_VOLUME",
"displayName": "Volume name for DB data",
"required": true,
"value": "mongodb-data"
},
{
"name": "DATABASE_SERVICE_NAME",
"displayName": "Database Service Name",
"required": true,
"value": "mongodb"
},
{
"name": "DATABASE_NAME",
"displayName": "Database Name",
"required": true,
"value": "test1"
},
{
"name": "DATABASE_USER",
"displayName": "Database Username",
"required": false
},
{
"name": "DATABASE_PASSWORD",
"displayName": "Database User Password",
"required": false
}
]
}
Is there any issue with my template? Is it an OpenShift issue? Where and how can I get further details about the mount problem in the OpenShift logs?
So, I think you're coming up against two different issues.
First, your template is set up to pull the Mongo image from Docker Hub (specified by the blank "namespace" value). When you try to pull the mongo:latest image from Docker Hub in the Web UI, you are greeted by a friendly message notifying you that the image is not usable because it runs as root.
Second, OpenShift Online Dev Preview has been having some issues related to PVCs recently (http://status.preview.openshift.com/), specifically this reported bug: https://bugzilla.redhat.com/show_bug.cgi?id=1392650. This may be the cause of some of your issues, as the "official" Mongo image on OpenShift is also failing to build.
I would direct you to an OpenShift MongoDB template; it is not the exact one used in the Developer Preview, but it should provide some good direction going forward: https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_examples/files/examples/v1.4/db-templates/mongodb-persistent-template.json
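To dig further into mount failures like the one above, the usual oc commands are (the pod name below is taken from the error message in the question):

```shell
# Show the pod's events, including volume attach/mount errors
oc describe pod mongodb-1-sfg8t

# List recent events in the current project, sorted by time
oc get events --sort-by='.lastTimestamp'
```

The Events section of `oc describe pod` typically names the exact volume and the reason the mount timed out.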