How do I connect to Postgres from my host OS using a GUI client like Postico?

I am running the Django Cookiecutter project on Docker and tried to connect to the Postgres database using Postico, a GUI client on my laptop.
The credentials I used were basically the same as those in .envs/.local/.postgres, yet I still cannot connect.
What is the issue that's blocking me from doing so?

The issue was that I wasn't aware you have to tell Docker which container ports get mapped to the host OS.
See https://docs.docker.com/compose/compose-file/#network_mode for details.
Specifically, I needed to go to local.yml and, under the postgres service definition, add:
ports:
  - "5432:5432"
Then restart the containers.
In case you want to map a different port number, note that the syntax follows the HOST:CONTAINER format.
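For context, here is a minimal sketch of what the postgres service in local.yml could look like with that change. The image name and env_file path are assumptions based on a typical cookiecutter-django layout; only the ports block is the actual addition:

postgres:
  image: my_project_local_postgres   # whatever image/build definition your local.yml already has
  env_file:
    - ./.envs/.local/.postgres
  ports:
    - "5432:5432"   # HOST:CONTAINER

After saving, recreate the stack with something like docker-compose -f local.yml up -d and point Postico at localhost:5432 using the credentials from .envs/.local/.postgres.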

Related

Using Keycloak with Postgres socket

I am trying to containerize my Keycloak application and I am trying to make the Keycloak instance connect to the Postgres socket instead of its hostname. But the Keycloak instance crashes almost instantly.
Is it not possible to make Keycloak connect to the Postgres socket? Or am I using the wrong connection params?
POSTGRES_ENV:
POSTGRES_DB=keycloak
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
KEYCLOAK_ENV:
DB_VENDOR=POSTGRES
DB_ADDR=/var/run/postgresql/.s.PGSQL.5432
DB_DATABASE=keycloak
DB_USER=postgres
DB_SCHEMA=public
DB_PASSWORD=postgres # change this in prod
I have tried changing my Docker mount from a bind mount to a volume, tried changing the file permissions, and even tried different Keycloak versions. It is always the same class of error.

Connect PostgreSQL to rabbitMQ

I'm trying to get RabbitMQ to monitor a postgresql database to create a message queue when database rows are updated. The eventual plan is to feed this message queue into an AWS EKS (Elastic Kubernetes Service) cluster as a job.
I've read many, many approaches to this, but they are still confusing to a newcomer to RabbitMQ, and many seem to have been written more than 5 years ago, so I'm not sure if they'll still work with current versions of Postgres and RabbitMQ.
I've followed this guide about installing the area51/notify-rabbit docker container, which can connect the two via a Node app, but when I ran the docker container it immediately stopped and didn't seem to do anything.
There is also this guide, which uses a Go app to connect the two, but I'd rather not use Go outside of a docker container.
Additionally, there is also this method: installing the pg_amqp extension from a repository which hasn't been updated in years, which allows a direct connection from PostgreSQL to RabbitMQ. However, when I followed this and attempted to install pg_amqp on my Postgres DB (PostgreSQL 12), I was unable to connect to the database using psql, getting the classic error:
psql: could not connect to server: No such file or directory
    Is the server running locally and accepting
    connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
My current set-up is: I have a RabbitMQ server installed in a docker container on an AWS EC2 instance which I can access via the internet. I ran the following to install and run it:
docker pull rabbitmq:3-management
docker run --rm -p 15672:15672 -p 5672:5672 rabbitmq:3-management
The postgresql database is running on a separate EC2 instance and both instances have the required ports open for accessing data from each server.
I have also looked into using Amazon SQS for this, but it didn't seem to have any info on linking PostgreSQL up to it. I haven't really seen any guides or Stack Overflow questions on this since 2017/18, so I'm wondering if this is still the best way to create a message broker for a Kubernetes system? Any help/pointers on this much appreciated.
In the end, I decided the best thing to do was create some simple Python scripts to do the LISTEN/NOTIFY steps and route traffic from PostgreSQL to RabbitMQ, based on the following code: https://gist.github.com/kissgyorgy/beccba1291de962702ea9c237a900c79
I set them up inside Docker containers and run them in my Kubernetes cluster so they benefit from automatic restarts if they fail.
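For anyone looking for the shape of such a relay, here is a minimal sketch in Python using psycopg2 and pika. The hostnames, credentials, channel and queue names are placeholders, and the actual scripts based on the linked gist differ in the details:

import select
import psycopg2
import psycopg2.extensions
import pika

# Hypothetical hosts/credentials -- replace with your own.
pg = psycopg2.connect("dbname=mydb user=postgres password=postgres host=10.0.0.1")
pg.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = pg.cursor()
cur.execute("LISTEN row_updates;")  # a trigger on the table should call pg_notify('row_updates', ...)

mq = pika.BlockingConnection(pika.ConnectionParameters(host="10.0.0.2"))
channel = mq.channel()
channel.queue_declare(queue="row_updates", durable=True)

while True:
    # Block until the Postgres connection has something to read (60 s timeout).
    if select.select([pg], [], [], 60) == ([], [], []):
        continue
    pg.poll()
    while pg.notifies:
        notify = pg.notifies.pop(0)
        # Forward the NOTIFY payload to RabbitMQ.
        channel.basic_publish(exchange="", routing_key="row_updates", body=notify.payload.encode("utf-8"))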

Connect a Mongodb Docker Container Running on Server to Robo 3t on my Computer

I'm trying to deploy my first web app and I decided that it would be good training to connect my empty MongoDB (inside its own docker container), running on a DigitalOcean server, to Robo 3T on my computer.
I could find several guides explaining how to do this for a DB running directly on the server, or for one inside a container locally, but not for a container on a remote server.
To be honest I'm a bit lost right now because I'm still completely new to these things, so I don't even know what strategy I need to use...
Your help will be greatly appreciated, many thanks in advance!
You have to publish ports when you run the container, with -p 80:80 for example. Replace 80 with your port. The next step is to open that port on the remote machine. Then you can connect Robo 3T to your DB.
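As a concrete sketch for the MongoDB case (the container name, port and firewall tool are assumptions; adapt them to your droplet):

# publish the container's 27017 port on the server
docker run -d --name mongo -p 27017:27017 mongo
# open the port on the droplet's firewall (ufw shown as an example)
sudo ufw allow 27017/tcp
# then point Robo 3T at <droplet-public-ip>:27017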

Setup a PostgreSQL connection to an already existing project in Docker

I had never used PostgreSQL nor Docker before. I set up an already developed project that uses these two technologies in order to modify it.
To get the project running on my Linux (Pop!_OS 20.04) machine I was given these instructions (sorry if this is irrelevant but I don't know what is important and what is not to state my problem):
Installed Docker CE and Docker Compose.
Cloned the project with git and ran the commands git submodule init and git submodule update.
Initialized the container with: docker-compose up -d
Generated the application configuration file: ./init.sh
After all of that the app was available at http://localhost:8080/app/, and the project's directory contained a number of subdirectories, among them a dbdata directory.
Now I need to modify the DB, and that's where the difficulty arose, since I don't know how to set up the connection with PostgreSQL inside Docker.
In a project without Docker which uses MySQL I would
Create the local project's database "dbname".
Import the project's DB: mysql -u username -ppassword dbname < /path/to/dbdata.sql
Connect a DB client (DBeaver in my case) to the local DB and perform the necessary modifications.
In an endeavour to do something like that with PostgreSQL, I have read that I need to:
Install and configure an Ubuntu 20.04 server.
Install PostgreSQL.
Configure Postgres “roles” to handle authentication and authorization.
Create a new Database.
And then what?
How can I set up the connection in order to be able to modify the DB from DBeaver and see the changes reflected on http://localhost:8080/app/ when Docker is involved?
Do I really need an Ubuntu server?
Do I need a program other than psql to connect to Postgres from the command line?
I have found many articles related to the local setup of PostgreSQL with Docker, but all of them address the topic from scratch; none of them talk about how to connect to the DB of an "old" project inside Docker. I hope someone here can give a newbie directions on what to do, or recommend an article explaining from scratch how to configure PostgreSQL and then connect to a DB in Docker. Thanks in advance.
Edit:
Here's the output of docker ps
You have 2 options to get into known waters pretty fast:
Publish the postgres port on the docker host machine, install any postgres client you like on the host, and connect to the database hosted in the container as you traditionally would. You will use localhost:5433 to reach the DB. << Update: 5433 is the port where the postgres container is published on your host, according to the screenshot.
Another option is to add another service in your docker-compose file to host the client itself in a container.
Here's a minimal example in which I am launching two containers: the postgres and an adminer that is exposed on the host machine on port 9999.
version: '3'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
  adminer:
    image: adminer
    restart: always
    ports:
      - 9999:8080
Then I can access adminer at localhost:9999 (the password is example).
Once I'm connected to my postgres through adminer, I can import and execute any SQL query I need.
A piece of friendly advice: do a thorough read on how data is persisted in a Docker context. Performance and security are also topics you will want to get under your belt as a novice in the field, sooner rather than later.
If you're running your PostgreSQL container inside your own machine you don't need anything else to connect using a database client. That's because to the host machine, all the containers are accessible using their own subnet.
That means that if you do this:
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 341164c5050f
it will output a list of IPs that you can configure in your DBeaver to access the container instance directly.
If you're not fond of doing that (or you prefer the CLI) you can always use the psql bundled inside the PostgreSQL container to achieve something like your MySQL point nº 2:
docker exec -i 341164c5050f bash -c 'psql -U $POSTGRES_USER' < /path/to/your/schema.sql
It's important to pass -i, otherwise it will not read the schema from stdin. If you're looking for psql in interactive mode, use -it instead.
Last but not least, you can always edit the docker-compose.yml file to publish the port and connect to the instance using the public IP/loopback device.
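For instance, a sketch of that last option, reusing the db service from the compose example above (the 5433 host port mirrors the first answer; any free port works):

services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
    ports:
      - "5433:5432"   # HOST:CONTAINER -> DBeaver then connects to localhost:5433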

"Server Selection Timeout Error" MongoDB Go Driver with Docker

I'm working on a very basic (I thought) starter program in Go using MongoDB and Docker. Trying to get a handle on these before we start using them at work.
I've got my MongoDB running in a docker container, just using my local host, using the official Docker image. This is running fine, I can connect to it through MongoDB Compass and modify the DB.
My next task was to build a separate Docker container that is able to read and write to the DB. I'm using MongoDB-Go-Driver (https://godoc.org/github.com/mongodb/mongo-go-driver/mongo) for this as mgo is no longer kept up.
This is my code, I'm just following the numerous tutorials online to make a simple connection and then ping the DB to ensure connectivity.
client, err := mongo.Connect("mongodb://localhost:27017")
if err != nil {
    log.Fatal("error ", err)
}
// Check the connection
err = client.Ping(context.TODO(), nil)
if err != nil {
    log.Fatal("error2 ", err)
}
fmt.Println("Connected to MongoDB!")
It always fails on doing any operation on the DB (Find, FindOne, Ping, etc.) with error2 server selection timeout
This is my docker-compose file I'm running.
version: "3"
services:
datastore:
image: mongo
ports:
- "27017:27017"
networks:
- maccaptionNet
volumes:
- .:/go/src/maccaption_microservice/dbdata
jobservice:
image: jobservicemaccaption:1.0
networks:
- maccaptionNet
depends_on:
- "datastore"
networks:
maccaptionNet:
driver: bridge
I'm brand new to MongoDB and after hours of research haven't made any progress on this.
I've read through https://docs.mongodb.com/manual/core/read-preference-mechanics/
https://docs.mongodb.com/manual/replication/
Can anyone point me in the right direction for this? I haven't been able to find a lot on this specific issue.
Thanks!
When you run the service and MongoDB in Docker you can't use localhost, since the service is in a different container than MongoDB, and from Docker's point of view it's under a different IP address.
You can connect using the service name you specified in docker-compose: datastore
mongo.Connect("mongodb://datastore:27017")
Edit:
from: https://docs.docker.com/compose/networking/
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
Meaning that if you run multiple containers via Compose, you can access one container from another by the container name.
Basically, when docker-compose starts, it sets up the network, and each container in the compose file joins the network under its container name. From a container's point of view, localhost is just the container itself, while it can look up another container's name and get back that container's IP address.
Assuming that Docker is running on your localhost, you can set the name in the /etc/hosts file like this:
127.0.0.1 datastore
(if not, just replace 127.0.0.1 with the Docker IP)
And in the app you will connect with mongodb://datastore:27017
This way you will be able to run the service both inside Docker and from outside, if you decide to run only the DB in Docker:
docker-compose start datastore
If you are connecting to one container from another (as written in your docker-compose file, using bridge network mode), you have to change localhost to the service hostname, like datastore:
client, err := mongo.Connect("mongodb://datastore:27017")
When your Go script uses localhost, it expects the database to be located in the same container.
I think my answer might be unrelated, but still: I was getting the same error and it was because my IP address was not listed in the IP whitelist tab in MongoDB Atlas, so make sure you have your IP address there before trying to connect.
I had the same problem but found another way to address this issue. You can just pass the network parameter when running the Docker image; this way the container uses the host's network stack and localhost resolves to the host.
docker run --network="host" ....
Source for this solution
Somehow I fixed this problem in a different way: by changing the ports from "27018:27017" to "27017:27017".
I don't know why this helps. Maybe if Mongo sees a non-default port it thinks there is a cluster of Mongo nodes.
I got this problem when I tried to connect to mongodb v4.0.10 with:
pymongo==4.0.2 - did not work
pymongo==3.12.3 - worked
Check your package versions: mongodb v5.0.2 works with pymongo==4.0.2.