How to access a postgres docker container from another docker container without an IP address?
I want to store data in postgres from myweb. In the jar, the host is given as localhost:5432/db..
Here is my compose file:
version: "3"
services:
  myweb:
    build: ./myweb
    container_name: app
    ports:
      - "8080:8080"
      - "9090:9090"
    networks:
      - front-tier
      - back-tier
    depends_on:
      - "postgresdb"
  postgresdb:
    build: ./mydb
    image: ppk:postgres9.5
    volumes:
      - dbdata:/var/lib/postgresql
    ports:
      - "5432:5432"
    networks:
      - back-tier
volumes:
  dbdata: {}
networks:
  front-tier:
  back-tier:
Instead of localhost:5432/db.., use postgresdb:5432/db.. as the connection string.
By default, a container is reachable under the same hostname as its service name.
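For example, assuming the app reads its JDBC URL from a properties-style config file (the property name and the db database name are placeholders), only the host part changes:

```
# before: "localhost" points at the myweb container itself, not at postgres
url=jdbc:postgresql://localhost:5432/db
# after: "postgresdb" is resolved by Compose's built-in DNS on the shared back-tier network
url=jdbc:postgresql://postgresdb:5432/db
```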
Here is my minimal working example, which connects a Java client (boxfuse/flyway) to a postgres server. The most important part is the health check, which delays the start of the myweb container until postgres is ready to accept connections.
Note that this can be run directly with docker-compose up; it doesn't have any other dependencies. Both images are from Docker Hub.
version: '2.1'
services:
  myweb:
    image: boxfuse/flyway
    command: -url=jdbc:postgresql://postgresdb/postgres -user=postgres -password=123 info
    depends_on:
      postgresdb:
        condition: service_healthy
  postgresdb:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=123
    healthcheck:
      test: "pg_isready -q -U postgres"
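If postgres takes longer to come up, the health check can be tuned; interval, timeout and retries are standard Compose healthcheck keys (the values below are only illustrative):

```yaml
healthcheck:
  test: "pg_isready -q -U postgres"
  interval: 5s   # probe every 5 seconds
  timeout: 3s    # consider a single probe failed after 3 seconds
  retries: 10    # mark the container unhealthy after 10 failed probes
```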
This is a Docker networking problem. The solution is to use postgresdb:5432/db in place of localhost:5432/db, because the two services are on the same network named back-tier, and the Docker daemon resolves service names like DNS names to enable communication between the two containers. I hope this helps.
Related
I am new to Azure cloud services, so excuse me if this is a dumb question.
I have a docker-compose file with a .NET Core web API and a postgres database. I have it running on Azure as a web app and it's working (when I query the API I can see that there's data in the database). However, I would like to access the database remotely so that I can inspect the data via pgAdmin or something similar.
I did bind a port to my pgAdmin site in my docker-compose, but that port does not seem to be open. I've read somewhere that only ports 80 and 443 can be exposed from Azure web apps when using multi-image containers. (This docker-compose works 100% locally, and I can access the pgAdmin site and see the database with all its tables.)
So my question is: how do I run my web API with my postgres database on Azure and still have visibility into my database?
Docker-compose file:
version: '3.8'
services:
  web:
    container_name: 'bootcampapi'
    image: 'myimage'
    build:
      context: .
      dockerfile: backend.dockerfile
    restart: always
    ports:
      - 8080:80
    depends_on:
      postgres:
        condition: service_healthy
    networks:
      - bootcampbackend-network
  postgres:
    container_name: 'postgres'
    restart: always
    image: 'postgres:latest'
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
    environment:
      - POSTGRES_USER=myusername
      - POSTGRES_PASSWORD=mypassword
      - POSTGRES_DB=database-name
      - PGDATA=database-data
    networks:
      - bootcampbackend-network
    ports:
      - 5432:5432
    volumes:
      - database-data:/var/lib/postgresql/data/
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - 15433:80
    env_file:
      - .env
    depends_on:
      - postgres
    networks:
      - bootcampbackend-network
    volumes:
      - database-other:/var/lib/pgadmin/
networks:
  bootcampbackend-network:
    driver: bridge
As you have found, App Service only listens on one port. One way around that is to use a reverse proxy like Nginx to route the traffic to both of your containers.
By the way, build, depends_on and networks are unsupported in multi-container App Service compose files. See the documentation.
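A minimal sketch of such an Nginx config, assuming the API container is reachable as web:80 and pgAdmin as pgadmin:80 on the compose network (the service names match the compose file above; the /pgadmin/ path is illustrative):

```
# nginx.conf fragment: route / to the API and /pgadmin/ to pgAdmin,
# so both are served through the single port App Service exposes
server {
    listen 80;

    location / {
        proxy_pass http://web:80;
    }

    location /pgadmin/ {
        # pgAdmin honors X-Script-Name when served under a sub-path
        proxy_set_header X-Script-Name /pgadmin;
        proxy_pass http://pgadmin:80/;
    }
}
```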
I am developing a service using docker-compose, and I deploy the containers to a remote host using these commands:
eval $(docker-machine env digitaloceanserver)
docker-compose build && docker-compose stop && docker-compose rm -f && docker-compose up -d
My problem is that I'm changing laptops; I exported the docker-machines to the new laptop and I can activate them.
But when I try to deploy new changes, it raises these errors:
Creating postgres ... error
Creating redis ... error

ERROR: for postgres  Cannot create container for service postgres: b'Conflict. The container name "/postgres" is already in use by container "612f3887544224ae79f67e29552b4d97e246104b8a057b3a03d39f6546dbbd38". You have to remove (or rename) that container to be able to reuse that name.'

ERROR: for redis  Cannot create container for service redis: b'Conflict. The container name "/redis" is already in use by container "01875947f0ce7ba3978238525923e54e0c800fa0a4b419dd2a28cc07c285eb78". You have to remove (or rename) that container to be able to reuse that name.'

ERROR: Encountered errors while bringing up the project.
My docker-compose.yml is this:
services:
  nginx:
    build: './docks/nginx/.'
    ports:
      - '80:80'
      - "443:443"
    volumes:
      - letsencrypt_certs:/etc/nginx/certs
      - letsencrypt_www:/var/www/letsencrypt
    volumes_from:
      - web:ro
    depends_on:
      - web
  letsencrypt:
    build: './docks/certbot/.'
    command: /bin/true
    volumes:
      - letsencrypt_certs:/etc/letsencrypt
      - letsencrypt_www:/var/www/letsencrypt
  web:
    build: './sources/.'
    image: 'websource'
    ports:
      - '127.0.0.1:8000:8000'
    env_file: '.env'
    command: 'gunicorn cuidum.wsgi:application -w 2 -b :8000 --reload --capture-output --enable-stdio-inheritance --log-level=debug --access-logfile=- --log-file=-'
    volumes:
      - 'cachedata:/cache'
      - 'mediadata:/media'
    depends_on:
      - postgres
      - redis
  celery_worker:
    image: 'websource'
    env_file: '.env'
    command: 'python -m celery -A cuidum worker -l debug'
    volumes_from:
      - web
    depends_on:
      - web
  celery_beat:
    container_name: 'celery_beat'
    image: 'websource'
    env_file: '.env'
    command: 'python -m celery -A cuidum beat --pidfile= -l debug'
    volumes_from:
      - web
    depends_on:
      - web
  postgres:
    container_name: 'postgres'
    image: 'mdillon/postgis'
    ports:
      - '127.0.0.1:5432:5432'
    volumes:
      - 'pgdata:/var/lib/postgresql/data/'
  redis:
    container_name: 'redis'
    image: 'redis:3.2.0'
    ports:
      - '127.0.0.1:6379:6379'
    volumes:
      - 'redisdata:/data'
volumes:
  pgdata:
  redisdata:
  cachedata:
  mediadata:
  staticdata:
  letsencrypt_certs:
  letsencrypt_www:
You’re seeing those errors because you’re explicitly setting container_name:, and those same container names are used elsewhere. Remove those explicit settings. (You don’t need them even for inter-container DNS; Docker Compose automatically creates an alias for you using the name of the service block.)
There are still potential issues from port conflicts. If your other PostgreSQL container is listening on the same (default) host port 5432 then the one you declare in this docker-compose.yml file will conflict with it. You might be able to just not expose your database container ports, or you might need to change the port numbers in this file.
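For example, the postgres service from the file above would shrink to the following; other containers on the same Compose network can still reach it as postgres, because the service name doubles as a DNS alias:

```yaml
postgres:
  image: 'mdillon/postgis'
  ports:
    - '127.0.0.1:5432:5432'
  volumes:
    - 'pgdata:/var/lib/postgresql/data/'
```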
I recently started using the WhatsApp Business API. I was able to install the docker containers for WhatsApp Business, and I can access WhatsApp Web on port 9090.
Ex: https://172.29.208.1:9090
But I don't know how to access MySQL and the WhatsApp core app.
I tried http://172.29.208.1:33060 but nothing happened. Please let me know how to access MySQL and wacore.
Here is my docker-compose.yml file:
version: '3'
volumes:
  whatsappData:
    driver: local
  whatsappMedia:
    driver: local
services:
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: testpass
      MYSQL_USER: testuser
      MYSQL_PASSWORD: testpass
    expose:
      - "33060"
    ports:
      - "33060:3306"
    network_mode: bridge
  wacore:
    image: docker.whatsapp.biz/coreapp:v2.19.4
    command: ["/opt/whatsapp/bin/wait_on_mysql.sh", "/opt/whatsapp/bin/launch_within_docker.sh"]
    volumes:
      - whatsappData:/usr/local/waent/data
      - whatsappMedia:/usr/local/wamedia
    env_file:
      - db.env
    depends_on:
      - "db"
    network_mode: bridge
    links:
      - db
  waweb:
    image: docker.whatsapp.biz/web:v2.19.4
    command: ["/opt/whatsapp/bin/wait_on_mysql.sh", "/opt/whatsapp/bin/launch_within_docker.sh"]
    ports:
      - "9090:443"
    volumes:
      - whatsappData:/usr/local/waent/data
      - whatsappMedia:/usr/local/wamedia
    env_file:
      - db.env
    environment:
      WACORE_HOSTNAME: wacore
    depends_on:
      - "db"
      - "wacore"
    links:
      - db
      - wacore
    network_mode: bridge
MySQL is not an HTTP server, so it doesn't understand http://172.29.208.1:33060.
You could run docker ps | grep mysql to get the MySQL container id:
8dfa30ab0200 mysql:5.7.22 "docker-entrypoint.s…" 6 minutes ago Up 6 minutes 33060/tcp, 0.0.0.0:33060->3306/tcp xxxx_db_1
Then run docker exec -it 8dfa30ab0200 mysql -h localhost -P 3306 -u testuser --password=testpass to access MySQL.
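Alternatively, since the compose file publishes the container's port 3306 on host port 33060 ("33060:3306"), any MySQL client on the docker host can connect with settings like these (host and credentials taken from the compose file above):

```
# MySQL client connection settings (not a URL to open in a browser)
host: 172.29.208.1   # or 127.0.0.1 when running on the docker host itself
port: 33060          # published host port, mapped to 3306 inside the container
user: testuser
password: testpass
```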
But because you haven't registered, you won't see much in MySQL. Please follow the steps in https://developers.facebook.com/docs/whatsapp/api/account to perform registration.
You don't need to access coreapp directly; you perform all API requests through the webapp (https://172.29.208.1:9090).
I've done a docker-compose up and been able to run my web service attached to a postgresql image. The problem is, I can't view the data in Postico when I try to access the database. The service name of the database is db, and when I try to specify the hostname as "db" in Postico before connecting, I get an error saying the hostname is not found. I've entered my credentials, port and database name the same way I keyed them into my docker-compose file.
Does anybody know how I can find the correct settings to connect to the database in the container?
version: '3.6'
services:
  phoenix:
    # tell docker-compose which Dockerfile it needs to build
    build:
      context: .
      dockerfile: Dockerfile.phoenix.development
    # map the port of phoenix to the local dev port
    ports:
      - 4000:4000
    # mount the code folder inside the running container for easy development
    volumes:
      - ./my_app:/app
    # make sure we start the databases when we start this service
    # links:
    #   - db
    depends_on:
      - db
      - redis
    environment:
      GOOGLE_CLIENT_ID: ${GOOGLE_CLIENT_ID}
      GOOGLE_CLIENT_SECRET: ${GOOGLE_CLIENT_SECRET}
      FACEBOOK_CLIENT_ID: ${FACEBOOK_CLIENT_ID}
      FACEBOOK_CLIENT_SECRET: ${FACEBOOK_CLIENT_SECRET}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
  go:
    build:
      context: .
      dockerfile: Dockerfile.go.development
    ports:
      - 8080:8080
    volumes:
      - ./genesys-api:/go/src/github.com/sc4224/genesys-api
    depends_on:
      - db
      - redis
      - phoenix
  db:
    container_name: db
    image: postgres:latest
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
    volumes:
      - ./data/db:/data/db
    restart: always
  redis:
    container_name: redis
    image: redis:latest
    ports:
      - "6379:6379"
    volumes:
      - ./data/redis:/data/redis
    entrypoint: redis-server
    restart: always
Use localhost as the hostname.
You can't use the hostname db outside the internal docker network; that only works for applications running in the same network.
Since you published the db on port 5432, it is exposed via 0.0.0.0:5432->5432/tcp and is therefore accessible with localhost as the host and 5432 as the port.
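So, taking the credentials from the compose file's environment variables, the Postico connection settings would look roughly like this (a sketch, not exact UI labels):

```
Host:     localhost    # not "db" — that name only resolves inside the compose network
Port:     5432         # published to the host by the "5432:5432" mapping
User:     the ${POSTGRES_USER} value from your environment
Password: the ${POSTGRES_PASSWORD} value
Database: same as the user  # the postgres image defaults the database name
                            # to POSTGRES_USER when POSTGRES_DB is not set
```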
I want my app in one docker container to use two other docker containers, one for postgres and one for solr.
My docker-compose is:
version: '3'
services:
  core:
    build: ./core
    ports:
      - "8081:8081"
    environment:
      - "SPRING_PROFILES_ACTIVE=production"
    links:
      - postgresdb
      - solrdb
  postgresdb:
    image: postgres:9.4
    container_name: postgres
    ports:
      - "5432:5432"
    environment:
      - DB_DRIVER=org.postgresql.Driver
      - DB_URL=jdbc:postgresql://localhost:5432/db
      - DB_USERNAME=db
      - DB_PASSWORD=db
    networks:
      default:
  solrdb:
    image: solr:5.5
    container_name: solr
    ports:
      - "8983:8983"
    environment:
      - DB_URL=http://localhost:8984/solr
    networks:
      default:
networks:
  default:
    external:
      name: mynet
I already have containers created for solr and postgres; I just want to use them. How can I do that?
You have already exposed the ports for solrdb and postgresdb, so from your other container you can access these DBs by service name and exposed port.
For example, solrdb should be accessed via solrdb:8983, and postgresdb via postgresdb:5432.
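As a side note, the DB_URL values in the compose file above sit in the environment of the database containers themselves and point at localhost, which inside a container means that same container. If the core app is what consumes them, a sketch of the corrected placement would be as follows (variable names kept from the question; whether core actually reads them, and the SOLR_URL name, are assumptions):

```yaml
core:
  build: ./core
  environment:
    - "SPRING_PROFILES_ACTIVE=production"
    # service names resolve through Docker's embedded DNS on the shared network
    - DB_URL=jdbc:postgresql://postgresdb:5432/db
    - SOLR_URL=http://solrdb:8983/solr   # 8983 is the port solr listens on
```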
Edit:
Make sure that both containers are running on the same network. You need to add this networks field to all the containers:
postgresdb:
  image: postgres:9.4
  container_name: postgresdb
  ports:
    - "5432:5432"
  environment:
    - DB_DRIVER=org.postgresql.Driver
    - DB_URL=jdbc:postgresql://localhost:5432/db
    - DB_USERNAME=db
    - DB_PASSWORD=db
  networks:
    default:
and at the end of the whole compose file:
networks:
  default:
    external:
      name: <your-network-name>
And make sure that your predefined network exists before these containers start.
To create the network:
docker network create --driver overlay --scope global <your-network-name>
(An overlay network requires swarm mode; on a single host, a plain bridge network created with docker network create <your-network-name> also works.)
Note: your container (postgresdb) will then be accessible at postgresdb:5432.