Connect PostgreSQL to RabbitMQ

I'm trying to get RabbitMQ to monitor a PostgreSQL database and publish a message to a queue whenever database rows are updated. The eventual plan is to feed this message queue into an AWS EKS (Elastic Kubernetes Service) cluster as a job.
I've read many approaches to this, but they are still confusing to a newcomer to RabbitMQ, and many seem to have been written more than five years ago, so I'm not sure whether they'll still work with current versions of Postgres and RabbitMQ.
I've followed this guide to installing the area51/notify-rabbit Docker container, which can connect the two via a Node app, but when I ran the container it stopped immediately and didn't seem to do anything.
There is also this guide, which uses a Go app to connect the two, but I'd rather not use Go outside of a Docker container.
Additionally, there is this method of installing the pg_amqp extension, from a repository which hasn't been updated in years, which allows a direct connection from PostgreSQL to RabbitMQ. However, when I followed it and attempted to install pg_amqp on my Postgres database (PostgreSQL 12), I was no longer able to connect to the database using psql, getting the classic error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
My current set-up is a RabbitMQ server installed in a Docker container on an AWS EC2 instance, which I can access via the internet. I ran the following to install and run it:
docker pull rabbitmq:3-management
docker run --rm -p 15672:15672 -p 5672:5672 rabbitmq:3-management
The PostgreSQL database is running on a separate EC2 instance, and both instances have the required ports open for accessing data on each server.
I have also looked into using Amazon SQS for this, but I couldn't find any info on linking PostgreSQL up to it. I haven't really seen any guides or Stack Overflow questions on this since 2017/18, so I'm wondering whether this is still the best way to create a message broker for a Kubernetes system. Any help/pointers much appreciated.

In the end, I decided the best thing to do was to write some simple Python scripts that handle the LISTEN/NOTIFY steps and route traffic from PostgreSQL to RabbitMQ, based on the following code: https://gist.github.com/kissgyorgy/beccba1291de962702ea9c237a900c79
I set the scripts up inside Docker containers and run them in my Kubernetes cluster, so they benefit from automatic restarts if they fail.
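
For anyone after a starting point, here is a minimal sketch of such a relay, assuming psycopg2 and pika are installed and that a trigger on the watched table calls pg_notify('row_updates', ...); the connection strings, channel, and queue name below are placeholders:

import select

import pika       # RabbitMQ client
import psycopg2   # PostgreSQL client

PG_DSN = "host=10.0.0.1 dbname=mydb user=listener password=secret"  # placeholder
AMQP_URL = "amqp://guest:guest@10.0.0.2:5672/%2F"                   # placeholder

# Connect to PostgreSQL and subscribe to the notification channel.
pg_conn = psycopg2.connect(PG_DSN)
pg_conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
pg_conn.cursor().execute("LISTEN row_updates;")

# Connect to RabbitMQ and declare the queue the cluster jobs will consume.
mq_conn = pika.BlockingConnection(pika.URLParameters(AMQP_URL))
mq_channel = mq_conn.channel()
mq_channel.queue_declare(queue="row_updates", durable=True)

while True:
    # Block until the Postgres socket is readable (the 60 s timeout keeps
    # the loop responsive even when the table is quiet).
    if select.select([pg_conn], [], [], 60) == ([], [], []):
        continue
    pg_conn.poll()
    while pg_conn.notifies:
        notify = pg_conn.notifies.pop(0)
        mq_channel.basic_publish(
            exchange="",
            routing_key="row_updates",
            body=notify.payload,
            properties=pika.BasicProperties(delivery_mode=2),  # persistent
        )

If either connection drops, the simplest approach is to let the script exit and rely on the Kubernetes restart policy to bring it back up.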

Related

Can't connect to external DB inside a Docker container - timeouts

All of a sudden I get connection timeouts to our external Postgres server, and I am tearing my hair out trying to understand why:
hostname = db.ourcompany.com
port = 5432
I can connect via my DB management tool on my desktop, so the DB is working. None of my colleagues have the same issue.
I have upgraded Docker to the latest version. I have removed all containers and images and rebuilt them. I run an M1 Mac on macOS 12.6.1 and have not had any recent updates. I can install dependencies within my container via Composer or NPM, so outbound connections work fine.
What am I missing? It worked for months and nothing has changed; the docker-compose files are exactly the same as before.

Concourse CI: Quickstart + localhost

I've just started 'kicking the tires' on Concourse-CI, using the quickstart tutorial as my starting point. That much works fine.
I've created a super basic pipeline with a single task, just like the quickstart tutorial. But instead of pulling the busybox image and executing the echo command, I'm pulling another image and running a command that tries to update a local Postgres DB.
When I run the pipeline, my task (a Docker image writing to the local Postgres DB) fails because a connection can't be made to the local DB. I've searched far and wide and can't seem to figure out how to do this. In the docker-compose file from the quickstart tutorial, I've tried adding CONCOURSE_CONTAINERD_ALLOW_HOST_ACCESS: "true", to no avail.
Any suggestions on how I may be able to achieve this?
Turns out my issue had nothing to do with Concourse.
The local Postgres instance I was attempting to write to was only accepting connections from localhost, which blocks connections from Docker containers (they connect from the bridge network, not from localhost). I updated the Postgres settings to allow remote connections, and all is well.
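
For anyone hitting the same wall, the usual changes are in postgresql.conf and pg_hba.conf; the settings below assume a stock Linux install and Docker's default bridge network, so adjust them for your setup:

# postgresql.conf -- listen on all interfaces, not just localhost
listen_addresses = '*'

# pg_hba.conf -- allow password-authenticated connections from the
# Docker bridge network (tighten the CIDR to taste)
host    all    all    172.17.0.0/16    md5

Restart Postgres afterwards for the changes to take effect.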

How to see/install pg_activity for the Crunchy Data Postgres Operator?

I have set up a Rancher (RKE) Kubernetes cluster for my application.
The application uses Postgres, so I set up the Crunchy Data Postgres Operator and created a Postgres cluster with it.
Everything works fine, but now I want to see pg_activity for my PostgreSQL instance.
How can I see the activity of the whole Postgres cluster?
You can use the monitoring tools in Rancher to monitor Postgres.
Apart from that, you can open a shell inside the respective database pod and run CLI commands to check the output (see the example below).
In Rancher, you can also use a client tool to connect to the cluster and run CLI commands to check activity.
Client Docker image: https://hub.docker.com/r/jbergknoff/postgresql-client/
You can also deploy a GUI client as a Docker container on Rancher and use that.
GUI Postgres client: https://hub.docker.com/r/dpage/pgadmin4/
GUI example: https://dataedo.com/kb/query/postgresql/list-database-sessions#:~:text=Using%20pgAdmin,all%20connected%20sessions%20(3).
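
For example, one quick way to inspect current sessions without installing anything extra is to query pg_stat_activity through kubectl; the namespace, pod, and container names below are placeholders for whatever your Crunchy Data cluster created:

kubectl -n pgo exec -it hippo-instance1-abcd-0 -c database -- \
  psql -c "SELECT pid, usename, state, query FROM pg_stat_activity;"

pg_activity itself is a Python tool, so you could also pip install pg_activity inside the pod (or a client container) and point it at the same database.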

Kong Enterprise on Postgres Master/Slave architecture

I'm installing Kong Enterprise API Management (1.5) and am utilising the Postgres database option. My setup is two Red Hat servers on premises.
Both have the Docker environment installed.
Both have Kong Enterprise, and initially both instances of Kong talk to their respective local Postgres container.
Nominating one node as the master (and the other as the slave), I successfully set up Postgres replication so that changes made to the master database tables are replicated to the slave Postgres database (I've proven this works).
I now want to re-run the Kong container on my second node, this time setting the KONG_PG_HOST environment variable (in my docker-compose file) to reference the first node. The intention is that, irrespective of which Kong node is processing the request, the live database is only the one on the nominated master. A sketch of the relevant compose settings is below.
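For illustration, the relevant part of the compose file on node 2 would look something like this; the image tag, hostname, and credentials are placeholders:

# docker-compose.yml on node 2 -- point Kong at the master database on node 1
services:
  kong:
    image: kong-enterprise-edition:1.5   # placeholder for your Kong EE image
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: node1.example.com    # the nominated master, not localhost
      KONG_PG_PORT: "5432"
      KONG_PG_USER: kong
      KONG_PG_PASSWORD: kong
      KONG_PG_DATABASE: kong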
Starting a shell inside the Kong container, I can ping the first node OK.
Still, if I specifically go to :8002/overview, it seems that Kong has no route to any database content: the page itself loads fine, but the landing page of the admin portal says no workspaces exist and 'vitals are disabled'.
What else do I need to make sure exists for the Kong container on one node to use the Postgres database on the other? Postgres port 5432 is open on node 1, as I can connect to it remotely, and Postgres replication between both nodes works.
What have I missed?
thanks -

How can I connect a NodeJS app to a MongoDB running in a Docker container on AWS?

I am attempting to deploy my first MEAN stack application ('weatherapp') to production on AWS.
I deployed my NodeJS/Express/Angular app to AWS Elastic Beanstalk (preconfigured Linux machine running Node). This works fine and I can view the app in the browser.
Separately I created a docker container running MongoDB and deployed it to AWS / EC2 following the steps in this post:
https://blog.codeship.com/running-mean-web-application-docker-containers-aws/
My question is - how do I connect the two?
In my NodeJS app I was connecting to my local Mongo instance like this:
'mongodb://localhost:27017/weatherapp'
What steps can I take to find out what the connection string should be for my production Mongo instance on docker?
Thanks in advance!
The answer to this is two-fold: we need to set some options on the Docker side of the EC2 instance, and then some security groups and configuration on the AWS side. First, we'll start with the Docker container side.
Container
When you run the MongoDB container, you will want to do two things:
Persist the data to disk.
Open the MongoDB port to the container.
To persist the data to disk you will want to do something like -v /data/db:/data/db. This will make the MongoDB data available at /data/db on the host. This makes sure that an accidental deletion or upgrade of the container doesn't lose any data.
Next, we need to publish the MongoDB port so that applications external of Docker can connect to it. The default MongoDB port is 27017 so let's publish that using -p 27017:27017.
If your original command for starting MongoDB was:
docker run --name mymongodb -d mongo
Then the new one would be:
docker run --name mymongodb -d -p 27017:27017 -v /data/db:/data/db mongo
AWS
Now, we need to edit the security group of your EC2 instance and configuration of Elastic Beanstalk.
Security Groups
First, take a look at your Security Groups in the EC2 console. You will have a group for the Elastic Beanstalk application named similar to awseb-e-xanf9hqrw3-stack-AWSEBSecurityGroup-1N2T1AI2H05I8 with an ID similar to sg-07fb8c43. We'll use this ID in the next step, so copy it somewhere.
Now find the Security Group attached to your EC2 instance running the Docker container. You will need to add a new rule to this group allowing access to the MongoDB container. Edit the group and add a new inbound rule for:
Type: Custom TCP
Protocol: TCP
Port range: 27017
Source: sg-07fb8c43
This will allow the Elastic Beanstalk EC2 instances (using sg-07fb8c43) to access the MongoDB port on your Docker EC2 instance.
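If you prefer the CLI, the equivalent rule can be added with something like the following, where sg-0123456789 is a placeholder for the Docker instance's security group ID:

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789 \
  --protocol tcp \
  --port 27017 \
  --source-group sg-07fb8c43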
Elastic IP
You'll likely want a more static IP address for your EC2 instance in case it reboots. Navigate to the Elastic IPs section of the EC2 console and allocate a new address to your Docker EC2 instance.
The new Elastic IP will be the address you use in your Elastic Beanstalk configuration to connect to MongoDB. If your address was 54.67.29.50 then your application would connect to mongodb://54.67.29.50:27017.
Elastic Beanstalk
Now, instead of hardcoding this address in your Node.js application, you should configure the application to pull the information from an environment variable. In your application, read the MongoDB URL from something like process.env.MONGO_URI. Then, in your Elastic Beanstalk application configuration, navigate to Software Configuration and then down to Environment Properties. Here, create a property named MONGO_URI with the value mongodb://54.67.29.50:27017. This will let you easily change the MongoDB instance if it ever changes, or if you launch multiple environments with different databases.