Cannot add postgres container into pgadmin4 container - postgresql

I'm fairly new to docker and I want to set up a PostgreSQL database and manage it with pgadmin4. Both in docker.
Unfortunately, I cannot add the PostgreSQL database to pgadmin4.
I created both containers via Portainer and chose the latest Docker Hub images.
When creating both containers, I chose "bridge" as the network type.
As the screenshot of both containers in Portainer shows, they share the same network and are not isolated from each other.
Are there any additional steps to do?
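For reference, a setup like the one described is often expressed as a docker-compose file with a user-defined bridge network (the passwords, email, and ports below are placeholders):

```yaml
version: "3.8"
services:
  postgres:
    image: postgres:latest
    environment:
      POSTGRES_PASSWORD: example        # placeholder
    networks:
      - dbnet
  pgadmin:
    image: dpage/pgadmin4:latest
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@example.com   # placeholder
      PGADMIN_DEFAULT_PASSWORD: example          # placeholder
    ports:
      - "8080:80"
    networks:
      - dbnet
networks:
  dbnet:
    driver: bridge
```

On a user-defined bridge network, Docker's embedded DNS lets pgAdmin reach the database by service name, so in pgAdmin's "Add New Server" dialog the host would be `postgres` and the port 5432, not localhost. Containers attached to the default `bridge` network cannot resolve each other by name.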

Related

Connect PostgreSQL to RabbitMQ

I'm trying to get RabbitMQ to monitor a PostgreSQL database and publish a message to a queue when database rows are updated. The eventual plan is to feed this message queue into an AWS EKS (Elastic Kubernetes Service) cluster as a job.
I've read many approaches to this, but they are still confusing to a newcomer to RabbitMQ, and many seem to have been written more than five years ago, so I'm not sure they'll still work with current versions of Postgres and RabbitMQ.
I've followed this guide about installing the area51/notify-rabbit Docker container, which can connect the two via a Node app, but when I ran the container it immediately stopped and didn't seem to do anything.
There is also this guide, which uses a Go app to connect the two, but I'd rather not use Go outside of a Docker container.
Additionally, there is this method of installing the pg_amqp extension, from a repository which hasn't been updated in years, which allows a direct connection from PostgreSQL to RabbitMQ. However, when I followed it and attempted to install pg_amqp on my PostgreSQL 12 database, I could no longer connect to the database using psql, getting the classic error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
My current setup: I have a RabbitMQ server installed in a Docker container on an AWS EC2 instance, which I can access via the internet. I ran the following to install and run it:
docker pull rabbitmq:3-management
docker run --rm -p 15672:15672 -p 5672:5672 rabbitmq:3-management
The PostgreSQL database is running on a separate EC2 instance, and both instances have the required ports open for accessing data from each other.
I have also looked into using Amazon SQS for this, but it didn't seem to have any info on linking PostgreSQL up to it. I haven't really seen any guides or Stack Overflow questions on this since 2017/18, so I'm wondering if this is still the best way to create a message broker for a Kubernetes system. Any help/pointers on this are much appreciated.
In the end, I decided the best thing to do was to create some simple Python scripts to do the LISTEN/NOTIFY steps and route traffic from PostgreSQL to RabbitMQ, based on the following code: https://gist.github.com/kissgyorgy/beccba1291de962702ea9c237a900c79
I set it up inside Docker containers running in my Kubernetes cluster, so they benefit from automatic restarts if they fail.
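A condensed sketch of that relay might look like the following. The channel name, queue name, and connection strings are placeholders, and it assumes the psycopg2 and pika libraries (imported lazily inside the function so the module loads without them):

```python
import json


def build_message(channel: str, payload: str) -> bytes:
    """Wrap a Postgres NOTIFY payload into a JSON message body."""
    return json.dumps({"channel": channel, "payload": payload}).encode("utf-8")


def relay(pg_dsn: str, amqp_url: str, pg_channel: str, queue: str) -> None:
    """Forward Postgres NOTIFY events on pg_channel to a RabbitMQ queue.

    Blocks forever; run it as its own container so the orchestrator
    restarts it on failure.
    """
    import select
    import psycopg2
    import pika

    # LISTEN requires autocommit, otherwise notifications are held back.
    conn = psycopg2.connect(pg_dsn)
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cur = conn.cursor()
    cur.execute(f"LISTEN {pg_channel};")

    mq = pika.BlockingConnection(pika.URLParameters(amqp_url))
    ch = mq.channel()
    ch.queue_declare(queue=queue, durable=True)

    while True:
        # Wait until the Postgres socket becomes readable, then drain
        # any pending notifications.
        if select.select([conn], [], [], 60) == ([], [], []):
            continue  # timeout; loop and wait again
        conn.poll()
        while conn.notifies:
            note = conn.notifies.pop(0)
            ch.basic_publish(exchange="",
                             routing_key=queue,
                             body=build_message(note.channel, note.payload))


# Example invocation (requires running Postgres and RabbitMQ):
# relay("dbname=mydb user=postgres", "amqp://guest:guest@localhost:5672/",
#       "row_updates", "row_updates")
```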

Kong Enterprise on Postgres Master/Slave architecture

I'm installing Kong Enterprise API Management (1.5) and am using the Postgres database option. My setup is two Red Hat servers on premises.
Both have the Docker environment installed.
Both have Kong Enterprise, and initially, both instances of Kong talk to their respective local container.
Nominating one node as the master (and the other as the slave), I successfully set up Postgres replication so that changes made to the master database tables are replicated to the slave Postgres database (I've proven this works).
I now want to re-run the Kong container on my second node and this time set the KONG_PG_HOST environment variable (in my docker-compose file) to reference the first node. The intention is that, irrespective of which Kong node is processing the request, the live master database lives only on the nominated master.
Starting a shell inside the Kong container, I can ping the first node OK.
Still, if I go to :8002/overview, it seems that Kong has no route to any database content: the landing page of the admin portal says no workspaces exist and 'vitals are disabled'.
Pointing a browser at 8002/overview loads the page fine.
What else do I need to make sure exists for a Kong container on one node to use the Postgres database on node 2? Postgres port 5432 is open on node 1, as I can connect remotely to it, and Postgres replication between both nodes works.
What have I missed?
Thanks.
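For reference, the relevant fragment of the second node's docker-compose file for the setup described would look roughly like this (the image tag, hostname, and credentials are placeholders):

```yaml
services:
  kong:
    image: kong-ee:1.5                       # placeholder image tag
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: node1.example.internal   # the nominated master, not localhost
      KONG_PG_PORT: "5432"
      KONG_PG_USER: kong
      KONG_PG_PASSWORD: kong                 # placeholder
      KONG_PG_DATABASE: kong
```

Note that Postgres on the nominated master must also accept the remote connection: `listen_addresses` in postgresql.conf must cover the external interface, and pg_hba.conf needs an entry permitting the other node's address.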

How to add an already built docker container to docker-compose?

I have a container called "postgres", built with a plain docker command, that has a configured PostgreSQL instance inside it. I also have a docker-compose setup with two services, "api" and "nginx".
How can I add the "postgres" container to my existing docker-compose setup as a service, without rebuilding? The PostgreSQL database was configured manually and is filled with data, so rebuilding is a really, really bad option.
I went through the docker-compose documentation but, sadly, found no way to do this without a rebuild.
Unfortunately, this is not possible.
You don't reference containers in docker-compose; you use images.
You need to create a volume and/or bind mount to keep your database data.
This is because containers do not persist data on their own: if you filled one with data and did not attach a bind mount or a volume, you will lose everything when the container is removed.
Recommendation:
docker cp
docker cp copies contents from a container to the host: https://docs.docker.com/engine/reference/commandline/container_cp/
Create a folder to hold all your PostgreSQL data (e.g. /home/user/postgre_data/);
Save the contents of your PostgreSQL container's data directory to this folder (see the Docker Hub postgres page for further reference);
Run a new PostgreSQL container (same version) with a bind mount pointing to the new folder;
This will preserve all your data, and you will then be able to use a volume or bind mount with docker-compose.
Reference of docker-compose volumes: https://docs.docker.com/compose/compose-file/#volumes
Reference of postgres docker image: https://hub.docker.com/_/postgres/
Reference of volumes and bind mounts: https://docs.docker.com/storage/bind-mounts/#choosing-the--v-or---mount-flag
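Putting the steps above together, the resulting docker-compose service could look roughly like this (the host path is the example folder from step 1, and /var/lib/postgresql/data is the postgres image's default data directory; the password is a placeholder):

```yaml
services:
  postgres:
    image: postgres:12          # match the version of the original container
    volumes:
      - /home/user/postgre_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: example   # placeholder
```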
You can save this container as a new image using docker container commit and use that newly created image in your docker-compose:
docker container commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
However, I prefer creating images with Dockerfiles and scripts to seed the data.

Docker best practice to access host's services

What is best practice to access the host's services within a docker container?
I'd like to access PostgreSQL running on the host within my application which runs in a docker container.
The easiest approach I've found is to use docker container run --net="host" which, based on this answer, behaves as follows:
Such a container will share the network stack with the docker host and from the container point of view, localhost (or 127.0.0.1) will refer to the docker host.
Be aware that any port opened in your docker container would be opened on the docker host. And this without requiring the -p or -P docker run option.
Which does not seem to be best practice since the containers should be isolated from the host.
Other approaches I've found involve extracting the host's IP with awk. Is that the way to go?
The best option in this case is to treat the host as a remote machine. That way the container remains portable and has no strict dependency on network locations when connecting to the database.
In addition to the drawbacks of --network=host mentioned above, that option tightly couples the container to the host by assuming the database is found on localhost.
To treat the machine as a remote one, use standard network constructs such as IP and DNS. Define a DNS entry for the container that points to the host where the DB runs, using the --add-host option to docker run:
docker run --add-host db-static:<ip-address-of-host> ...
Then, inside the container, you connect to the database via the hostname db-static.
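The equivalent in a docker-compose file is the extra_hosts key (the service name and IP below are placeholders for your own service and the host's address):

```yaml
services:
  api:
    build: .
    extra_hosts:
      - "db-static:192.168.1.10"   # placeholder host IP
```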

The right way to move a data-only docker container from one machine to another

I have a database Docker container that writes its data to another, data-only container. The data-only container has a volume where it stores the database's data. Is there a "Docker" way of migrating this data-only container from one machine to another? I read about docker save and docker load, but these commands save and load images, not containers. I want to be able to package the container along with its volumes and move it to another machine.
Check out the Flocker project. It's a very interesting solution to this problem, using ZFS to snapshot and replicate the storage volume between hosts.
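Apart from Flocker, a plain-Docker alternative is the volume backup/restore pattern from Docker's documentation: archive the volume with a throwaway container, move the archive, and unpack it on the target machine. The container name dbdata and the data path are placeholders for your own setup:

```
# On the source machine: archive the data volume into ./backup.tar.gz
docker run --rm --volumes-from dbdata -v "$(pwd)":/backup busybox \
    tar czf /backup/backup.tar.gz /var/lib/postgresql/data

# Copy backup.tar.gz to the target machine (scp, rsync, ...), then
# recreate the data-only container and restore into its volume:
docker create --name dbdata -v /var/lib/postgresql/data busybox
docker run --rm --volumes-from dbdata -v "$(pwd)":/backup busybox \
    tar xzf /backup/backup.tar.gz -C /
```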