How to install multiple PgBouncer instances with different pool_mode on a single server - postgresql

How can we install multiple PgBouncer instances with different pool modes on a single server (Ubuntu 18.04)?
When I tried to install it a second time, it said it was already installed.
Is there another way to install it on a different port?

You could install a container runtime (e.g. Docker) and run multiple containers, each containing a PgBouncer installation, e.g. using this image: https://github.com/edoburu/docker-pgbouncer
First, install docker:
sudo apt install docker.io
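If you want to verify the installation before going further, the standard hello-world image is a quick check:
sudo docker run --rm hello-world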
Then you can start as many pgbouncers as you like.
pgbouncer-1:
sudo docker run --rm -d \
--name pgbouncer-session \
-e DATABASE_URL="postgres://user:pass@postgres-host/database" \
-e POOL_MODE=session \
-p 5432:5432 \
edoburu/pgbouncer
pgbouncer-2:
sudo docker run --rm -d \
--name pgbouncer-transaction \
-e DATABASE_URL="postgres://user:pass@postgres-host/database" \
-e POOL_MODE=transaction \
-p 5433:5432 \
edoburu/pgbouncer
Note that the containers use different ports on the host (the first one uses 5432, the second one uses 5433).
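To check that both poolers are reachable (assuming psql is installed on the host, and using the same placeholder credentials as above):
psql "postgres://user:pass@localhost:5432/database"   # goes through the session pooler
psql "postgres://user:pass@localhost:5433/database"   # goes through the transaction pooler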
If you have lots of configuration, you might want to use a bind-mount for the configuration files.
Also, for a more permanent setup, I would recommend using docker-compose instead of raw docker commands; a sketch follows below.
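For example, a minimal docker-compose.yml covering the two instances above might look like this (a sketch only; the service names are made up here, and user, pass and postgres-host are the same placeholders as in the commands above):
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  pgbouncer-session:
    image: edoburu/pgbouncer
    environment:
      DATABASE_URL: "postgres://user:pass@postgres-host/database"
      POOL_MODE: session
    ports:
      - "5432:5432"
  pgbouncer-transaction:
    image: edoburu/pgbouncer
    environment:
      DATABASE_URL: "postgres://user:pass@postgres-host/database"
      POOL_MODE: transaction
    ports:
      - "5433:5432"
EOF
sudo docker-compose up -d
Both poolers then start (and restart) together with a single command.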

Related

Cannot connect to volume in Docker (Windows)

I am trying to run PostgreSQL in Docker using this code in a terminal:
winpty docker run -it \
-e POSTGRES_USER="root" \
-e POSTGRES_PASSWORD="root" \
-e POSTGRES_DB="ny_taxi" \
-v C:\Users\SomeUser\OneDrive\Documents\ny_taxi_postgres_data:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:13
and I keep running into this error: Error response from daemon: The system cannot find the file specified.
I have looked up this error and the solutions I see online (such as restarting Docker Desktop, reinstalling Docker, updating Docker) did not work for me.
I think the issue is with the volume part (designated by -v) because when I remove it, it works just fine. However, I want to be able to store the contents in a volume permanently, so running it without the -v is not a long-term solution.
Has anyone run into a similar issue before?
First, check whether you can access this path on the host:
dir C:\Users\SomeUser\OneDrive\Documents\ny_taxi_postgres_data
Then check whether you can access the volume inside a container:
winpty docker run -v C:\Users\SomeUser\OneDrive\Documents\ny_taxi_postgres_data:/data alpine ls /data
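If the bind mount itself is the problem (OneDrive folders are often troublesome with Docker Desktop file sharing), a named volume is a common workaround; a sketch, reusing your command but with Docker-managed storage instead of the Windows path:
winpty docker run -it \
-e POSTGRES_USER="root" \
-e POSTGRES_PASSWORD="root" \
-e POSTGRES_DB="ny_taxi" \
-v ny_taxi_postgres_data:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:13
Docker then manages the volume's storage itself, so the data survives container removal without depending on the host path.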

How to install chrony on redhat 8 minimal

I'm using the Keycloak Docker image and need to synchronize time with chrony. However, I cannot install chrony, because it's not in the repository, I assume.
I use the image from https://hub.docker.com/r/jboss/keycloak
It's based on registry.access.redhat.com/ubi8-minimal
Steps to reproduce:
~$ docker run -d --rm -p 8080:8080 --name keycloak jboss/keycloak
~$ docker exec -it -u root keycloak bash
[root@707c136d9c8a /]# microdnf install chrony
error: No package matches 'chrony'
I'm not able to find a working repo which provides chrony for Red Hat 8 minimal.
Apparently I need to synchronize time on the host system; this has nothing to do with the container itself. Silly me, I need a break.
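For the record, synchronizing the host typically looks like this (a sketch assuming a RHEL/CentOS 8 host with systemd; on Debian/Ubuntu it would be apt install chrony):
sudo dnf install -y chrony
sudo systemctl enable --now chronyd
chronyc tracking   # verify the clock is being synchronized
Containers share the host kernel's clock, so fixing time on the host fixes it inside the container as well.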

What is the best way to install tensorflow and mongodb in docker?

I want to create a Docker container or image with both TensorFlow and MongoDB installed. I have seen that there are Docker images for each application, but I need them to work together: from a MongoDB database I must extract the data to feed a model created in TensorFlow.
So I want to know whether a configuration like that is possible. I have tried a Ubuntu container and installed the applications I need inside it, but I don't know if there is another way to do it.
Thanks.
Interesting that I found this post, as I had just worked out one solution for myself. It may not be the one for you, though.
What I did is docker pull mongo and run it as a daemon:
#!/bin/bash
export VOLUME='/home/user/code'
docker run -itd \
--name mongodb \
--publish 27017:27017 \
--volume ${VOLUME}:/code \
mongo
Here, the 'd' in '-itd' means running as a daemon (like a service, not interactive). The --volume may not be needed here.
Then docker pull tensorflow/tensorflow and run it with:
#!/bin/bash
export VOLUME='/home/user/code'
docker run \
-u 1000:1000 \
-it --rm \
--name tensorflow \
--volume ${VOLUME}:/code \
-w /code \
-e HOME=/code/tf_mongodb \
tensorflow/tensorflow bash
Here:
the -u makes the docker bash session run with the same ownership as the host machine;
the --volume maps the host folder /home/user/code to /code inside the container;
the -w makes the docker bash session start in /code, which is /home/user/code on the host;
the -e HOME= option sets bash's $HOME folder so that you can pip install later.
Now you have a bash prompt, so you can:
create a virtual env folder under /code (which maps to /home/user/code),
activate the venv,
pip install pymongo,
and then connect to the mongodb you ran in Docker (localhost may not work; use the host's IP address).
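Alternatively, a user-defined Docker network lets the TensorFlow container reach MongoDB by container name instead of the host IP; a sketch, reusing the container names above (the network name tf-mongo is made up for this example, and the other flags from above are omitted for brevity):
docker network create tf-mongo
docker network connect tf-mongo mongodb
docker run -it --rm \
--name tensorflow \
--network tf-mongo \
tensorflow/tensorflow bash
# inside this shell, pymongo can connect to "mongodb:27017"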

PostgreSQL docker container not writing data to disk

I am having some difficulty with docker and the postgres image from the Docker Hub. I am developing an app and using the postgres docker to store my development data. I am using the following command to start my container:
sudo docker run --name some-postgresql -e POSTGRES_DB=AppDB -e POSTGRES_PASSWORD=App123! -e POSTGRES_USER=appuser -e PGDATA="/pgdata" --mount source=mydata,target=/home/myuser/pgdata -p 5432:5432/tcp postgres
When I finish working on my app, I usually have to run "docker container prune" in order to free up the container name and be able to run it again later. This worked until recently, when I upgraded my postgres image to run version 11 of PostgreSQL. Now, when I start my container and create data in it, the data is gone the next time I use it. I've been reading about volumes in the Docker documentation but cannot find anything that tells me why this is not working. Can anyone please shed some light on this?
Specify a volume mount with -v $PGDATA_HOST:/var/lib/postgresql/data.
The default PGDATA inside the container is /var/lib/postgresql/data so there is no need to change that if you're not modifying the Docker image.
e.g. to mount the data directory on the host at /srv/pgdata/:
$ PGDATA_HOST=/srv/pgdata/
$ docker run -d -p 5432:5432 --name=some-postgres \
-e POSTGRES_PASSWORD=secret \
-v $PGDATA_HOST:/var/lib/postgresql/data \
postgres
The \ are only needed if you break the command over multiple lines, which I did here for the sake of clarity.
Since you specified -e PGDATA="/pgdata", the database data will be written to /pgdata within the container. If you want the files in /pgdata to survive container deletion, that location must be a Docker volume. To make it one, use --mount source=mydata,target=/pgdata.
In the end, it would be simpler to just run:
sudo docker run --name some-postgresql -e POSTGRES_DB=AppDB -e POSTGRES_PASSWORD=App123! -e POSTGRES_USER=appuser --mount source=mydata,target=/var/lib/postgresql/data -p 5432:5432/tcp postgres
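To convince yourself the data now persists, the named volume can be inspected and the container recreated (a quick sanity check; mydata is the volume name from the command above):
docker volume inspect mydata
docker rm -f some-postgresql
sudo docker run --name some-postgresql -e POSTGRES_DB=AppDB -e POSTGRES_PASSWORD=App123! -e POSTGRES_USER=appuser --mount source=mydata,target=/var/lib/postgresql/data -p 5432:5432/tcp postgres
Any databases created before the rm should still be there.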

I can't access mounted volume of docker-postgres from host

I create my container like this:
docker run --name postgresql -itd --restart always \
--publish 5432:5432 \
--volume /my/local/host:/var/lib/postgresql \
sameersbn/postgresql:9.4-11
but when I run ls on the root directory, I see something like this:
drwxrwxr-x 3 messagebus messagebus 4,0K Ιαν 10 00:44 host/
In other words, I cannot access the /my/local/host directory. I have no idea about the messagebus user. Is that normal? If this is the case, how could I move the database from one machine to another in the future?
Try using a data container to hold your DB data. The pattern is described in the docs and is designed to promote clean separation between run-time and data.
$ docker create -v /var/lib/postgresql --name dbdata sameersbn/postgresql:9.4-11
$ docker run --name postgresql1 -itd --restart always \
--publish 5432:5432 \
--volumes-from dbdata \
sameersbn/postgresql:9.4-11
A separate data container makes backup and recovery simpler and more obvious:
docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /var/lib/postgresql
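Restoring onto a new data container follows the same pattern in reverse (a sketch, assuming backup.tar from the command above is in the current directory and dbdata2 is a fresh data container):
docker create -v /var/lib/postgresql --name dbdata2 sameersbn/postgresql:9.4-11
docker run --rm --volumes-from dbdata2 -v $(pwd):/backup ubuntu tar xvf /backup/backup.tar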
I think the following post gives a good explanation of data containers:
https://medium.com/@ramangupta/why-docker-data-containers-are-good-589b3c6c749e#.zarkr5sxc