Running dockerized pgadmin as a service - postgresql

It works fine when run this way:
docker run -it --rm -p 5050:5050 --name testing fenglc/pgadmin4
But when I add it to a docker-compose file, I'm unable to connect to localhost:5050. The same happens with the thajeztah/pgadmin4 image:
version: "3"
services:
  pgadmin:
    image: fenglc/pgadmin4
    ports:
      - "5050:5050"
Isn't it the same thing?

OK, I missed the -it flag in your docker run statement at first. You need the Compose file below; stdin_open and tty are the Compose equivalents of -i and -t:
version: "3"
services:
  pgadmin:
    image: fenglc/pgadmin4
    stdin_open: true
    tty: true
    ports:
      - "5050:5050"

Related

Cloud SQL proxy through Docker Compose doesn't work until a dependent service is specified

docker-compose.yaml
version: '3.9'
services:
  cloudsql-proxy: # doesn't work when run by itself
    container_name: cloudsql-proxy1
    image: gcr.io/cloudsql-docker/gce-proxy:1.31.0
    # tty: true
    command: >
      /cloud_sql_proxy --dir=/cloudsql
      -instances=xyz=tcp:0.0.0.0:3306
      -credential_file=/secrets/cloudsql/credentials.json
    ports:
      - 127.0.0.1:3306:3306
    volumes:
      - ../developer-sv-account-key.json:/secrets/cloudsql/credentials.json
    restart: always
  local-dev-db:
    image: library/postgres:13-alpine
    container_name: local-dev-db
    depends_on:
      cloudsql-proxy:
        condition: service_started
When I try to connect to the Cloud SQL proxy database from a local client, it works only when I run both services together with docker compose up.
If I try docker compose run cloudsql-proxy instead, I get a connection error.
The cloud_sql_proxy command works successfully when run directly in a terminal (outside Docker Compose).
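One plausible explanation (an assumption on my part; the thread records no accepted answer): docker compose run does not apply the service's ports: mappings by default, so the proxy starts but nothing is published on 127.0.0.1:3306. Passing --service-ports restores them:

```shell
# `docker compose run` ignores the service's ports: section unless
# --service-ports is given; this publishes 127.0.0.1:3306 as declared
# in the compose file:
docker compose run --service-ports cloudsql-proxy
```

Running both services with docker compose up works because up always applies the declared port mappings.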

pgAdmin does not start with docker-compose file

I started working with Postgres and discovered pgAdmin.
I started like this:
Postgres:
$ docker run --name admin -e POSTGRES_PASSWORD=admin -p 5432:5432 -d postgres:latest
Pgadmin:
docker run -p 5050:80 -e "PGADMIN_DEFAULT_EMAIL=admin@admin.com" -e "PGADMIN_DEFAULT_PASSWORD=root" -d dpage/pgadmin4
This worked perfectly fine: I started the postgres container and afterwards the pgadmin container, opened http://localhost:80/login, and could log in.
The problem now is the docker-compose.yml that I wrote. When I deploy my docker-compose file, both containers are running, but I can't access the pgAdmin login page.
Docker-compose.yml:
version: '3.8'
services:
  db:
    image: postgres:latest
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: admin
    ports:
      - "5432:5432"
  pgadmin:
    image: dpage/pgadmin4:latest
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: root
    ports:
      - "5050:80"
When I run it, it works as it should, so I think the issue is the URL you are using. You need to access pgAdmin on the mapped host port 5050, not on 80 as you've written, so the URL is http://localhost:5050/login.
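As a quick sanity check (assuming the stack is up), a request to the published host port should return the login page, while port 80 on the host will refuse the connection:

```shell
# "5050:80" publishes container port 80 on host port 5050,
# so requests go to the host side of the mapping:
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5050/login
```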

psycopg2.OperationalError: FATAL: the database system is starting up, Docker + Odoo

I have this docker-compose file:
version: '3.7'
services:
  db:
    image: "postgres:9.6"
    container_name: postgres-container
    ports: ["6543:5432"]
    environment:
      - POSTGRES_USER=odoo
      - POSTGRES_PASSWORD=admin
      - POSTGRES_DB=odoo
    restart: always
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
  odoo:
    #build: ./odoo-container
    image: odoo-image
    container_name: odoo-container
    ports: ["8069:8069"]
    tty: true
    command: opt/odoo/odoo-bin -c opt/odoo.conf -d teste
    depends_on:
      - db
The problem is that when I start Docker Compose, the db service runs, but when Docker starts the odoo service I get an error:
psycopg2.OperationalError: FATAL: the database system is starting up
When I restart the odoo container, it works.
I added the restart policy, and now it works:
odoo:
  #build: ./odoo-container
  image: odoo-image
  container_name: odoo-container
  ports: ["8069:8069"]
  command: opt/odoo/odoo-bin -c opt/odoo.conf -d teste
  depends_on:
    - db
  restart: always
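restart: always papers over the race: plain depends_on only waits for the db container to start, not for Postgres to finish its startup recovery. A cleaner alternative (a sketch, not from the original answer; condition: service_healthy requires a Compose version that supports the long depends_on syntax) is to gate odoo on a database healthcheck:

```yaml
services:
  db:
    image: "postgres:9.6"
    healthcheck:
      # pg_isready ships with the postgres image and reports whether
      # the server is accepting connections
      test: ["CMD-SHELL", "pg_isready -U odoo -d odoo"]
      interval: 5s
      timeout: 3s
      retries: 10
  odoo:
    image: odoo-image
    depends_on:
      db:
        condition: service_healthy
```

With this, Compose only starts the odoo service once the healthcheck passes, so the "database system is starting up" error never reaches Odoo.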

docker-compose down doesn't.. "down" the containers

I have a few systems where I use docker-compose, and there is no problem.
However, I have one here where 'down' doesn't do anything at all, although 'up' works perfectly. This is on macOS.
The project is nicknamed 'stormy', and here is the script:
version: '3.3'
services:
  rabbitmq:
    container_name: stormy_rabbitmq
    image: rabbitmq:management-alpine
    restart: unless-stopped
    ports:
      - 5672:5672
      - 15672:15672
    expose:
      - 5672
    volumes:
      #- /appdata/stormy/rabbitmq/etc/:/etc/rabbitmq/
      - /appdata/stormy/rabbitmq/data/:/var/lib/rabbitmq/
      - /appdata/stormy/rabbitmq/logs/:/var/log/rabbitmq/
    networks:
      - default
  settings:
    container_name: stormy_settings
    image: registry.gitlab.com/robinhoodcrypto/stormy/settings:latest
    restart: unless-stopped
    volumes:
      - /appdata/stormy/settings:/appdata/stormy/settings
    external_links:
      - stormy_rabbitmq:rabbitmq
    networks:
      - default
  capture:
    container_name: stormy_capture
    image: registry.gitlab.com/robinhoodcrypto/stormy/capture:latest
    restart: unless-stopped
    volumes:
      - /appdata/stormy/capture:/appdata/stormy/capture
    external_links:
      - stormy_rabbitmq:rabbitmq
    networks:
      - default
  livestream:
    container_name: stormy_livestream
    image: registry.gitlab.com/robinhoodcrypto/stormy/livestream:latest
    restart: unless-stopped
    volumes:
      - /appdata/stormy/capture:/appdata/stormy/livestream
    external_links:
      - stormy_rabbitmq:rabbitmq
    networks:
      - default
networks:
  default:
    external:
      name: stormy-network
The 'up' script is as follows:
[ ! "$(docker network ls | grep stormy-network)" ] && docker network create stormy-network
echo '*****' | docker login registry.gitlab.com -u 'gitlab+deploy-token-******' --password-stdin
docker-compose down
docker-compose build --pull
docker-compose -p 'stormy' up -d
and the 'down' is simply:
docker-compose down
version:
$ docker-compose -v
docker-compose version 1.24.1, build 4667896b
When I run 'down', here is the output:
$ docker-compose down
Network stormy-network is external, skipping
and I put a verbose log output at: https://pastebin.com/Qnw5J88V
Why isn't 'down' working?
The docker-compose -p option sets the project name which gets included in things like container names and labels; Compose uses it to know which containers belong to which Compose services. You need to specify it on all of the commands that interact with containers (docker-compose up, down, ps, ...); if you're doing this frequently, setting the COMPOSE_PROJECT_NAME environment variable might be easier.
#!/bin/sh
export COMPOSE_PROJECT_NAME=stormy
docker-compose build --pull
docker-compose down
docker-compose up -d
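Equivalently, since -p must accompany every command that touches the containers, the scripts could pass it explicitly instead of exporting the variable:

```shell
# the project name must match on every invocation, including down
docker-compose -p stormy build --pull
docker-compose -p stormy down
docker-compose -p stormy up -d
```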

Docker postgres: User "postgres" has no password assigned

I keep getting:
User "postgres" has no password assigned.
Updated .env:
POSTGRES_PORT=5432
POSTGRES_DB=demo_db2
POSTGRES_USER=postgres
POSTGRES_PASSWORD=password
Even though the postgres password is set.
I'm trying to use the same variables as in the following command:
docker run --name demo4 -e POSTGRES_PASSWORD=password -d postgres
Could this be an issue with volumes? I'm very confused.
I ran this command as well:
docker run -it --rm --name demo4 -p 5432:5432 -e POSTGRES_PASSWORD=password -e POSTGRES_USER=postgress postgres:9.4
docker-compose.yml
# docker-compose.yml
version: "3"
services:
  app:
    build: .
    depends_on:
      - database
    ports:
      - 8000:8000
    environment:
      - POSTGRES_HOST=database
  database:
    image: postgres:9.6.8-alpine
    volumes:
      - pgdata:/var/lib/postgresql/pgdata
    ports:
      - 8002:5432
  react_client:
    build:
      context: ./client
      dockerfile: Dockerfile
    image: react_client
    working_dir: /home/node/app/client
    volumes:
      - ./:/home/node/app
    ports:
      - 8001:8001
    env_file:
      - ./client/.env
volumes:
  pgdata:
You are missing the inclusion of the .env file.
Docker Compose:
database:
  environment:
    - ENV_VAR=VALUE
or
database:
  env_file:
    - .env
Plain Docker:
docker run [options] --env ENV_VAR=VALUE ...
or
docker run [options] --env-file .env ...
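One more thing worth checking, since the question asks about volumes (this is a general property of the official postgres image, not something stated in the answer above): POSTGRES_USER and POSTGRES_PASSWORD are only applied when the data directory is initialized for the first time. If the pgdata volume survives from an earlier run, the old credentials stay in effect regardless of the current environment; recreating the volume forces a fresh initdb:

```shell
# WARNING: -v deletes the named volumes, i.e. the database data.
docker-compose down -v
docker-compose up -d   # initdb runs again and picks up the current env vars
```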