Run a database migration through a k8s Job

I want to run a database migration via a Job in k8s. The database lives in a MariaDB pod in the same cluster, installed through the bitnami/mariadb Helm chart, so MariaDB should already be exposed as a Service.
How can I let the migration pod connect to the MariaDB pod? The migration pod does not seem to be able to reach the database, even though I have configured the MariaDB host in the migration pod image. When I run alembic upgrade head, it just hangs there with no output at all.
Thanks!
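
For reference, a minimal connectivity check from inside the cluster, assuming a Helm release named my-release in the default namespace (the bitnami/mariadb chart exposes the primary as a Service named <release>-mariadb on port 3306 by default; names and credentials here are placeholders):

# does the Service exist and does it have endpoints?
kubectl get svc my-release-mariadb
kubectl get endpoints my-release-mariadb

# run a throwaway client pod and connect through the Service DNS name
kubectl run mariadb-client --rm -it --restart=Never \
  --image=docker.io/bitnami/mariadb --command -- \
  mysql -h my-release-mariadb.default.svc.cluster.local -P 3306 -u my_user -p

If this also hangs, the problem is the Service name, namespace, or a network policy rather than the migration image; if it connects, point the migration Job's database host at that same DNS name.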

Related

Connect PostgreSQL to RabbitMQ

I'm trying to get RabbitMQ to monitor a postgresql database to create a message queue when database rows are updated. The eventual plan is to feed this message queue into an AWS EKS (Elastic Kubernetes Service) cluster as a job.
I've read many approaches to this, but they are still confusing to a newcomer to RabbitMQ, and many seem to have been written more than 5 years ago, so I'm not sure whether they'll still work with current versions of Postgres and RabbitMQ.
I've followed this guide about installing the area51/notify-rabbit Docker container, which can connect the two via a Node app, but when I ran the container it immediately stopped and didn't seem to do anything.
There is also this guide, which uses a Go app to connect the two, but I'd rather not use Go outside of a Docker container.
Additionally, there is also this method: installing the pg_amqp extension, from a repository which hasn't been updated in years, which allows a direct connection from PostgreSQL to RabbitMQ. However, when I followed this and attempted to install pg_amqp on my Postgres DB (PostgreSQL 12), I was unable to connect to the database using psql, getting the classic error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
My current set-up is: I have a RabbitMQ server installed in a Docker container on an AWS EC2 instance, which I can access via the internet. I ran the following to install and run it:
docker pull rabbitmq:3-management
docker run --rm -p 15672:15672 -p 5672:5672 rabbitmq:3-management
The postgresql database is running on a separate EC2 instance and both instances have the required ports open for accessing data from each server.
I have also looked into using Amazon SQS for this, but it didn't seem to have any info on linking PostgreSQL up to it. I haven't really seen any guides or Stack Overflow questions on this since 2017/18, so I'm wondering whether this is still the best way to create a message broker for a Kubernetes system. Any help/pointers on this much appreciated.
In the end, I decided the best thing to do was to create some simple Python scripts to do the LISTEN/NOTIFY steps and route traffic from PostgreSQL to RabbitMQ, based on the following code: https://gist.github.com/kissgyorgy/beccba1291de962702ea9c237a900c79
I set them up inside Docker containers and run them in my Kubernetes cluster, so they get automatic restarts if they fail.
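
For reference, a minimal sketch of that bridge in the spirit of the linked gist; the hosts, credentials, the table_update channel, and the db_events queue are all placeholder assumptions:

import select
import pika
import psycopg2
import psycopg2.extensions

# subscribe to a Postgres NOTIFY channel (connection details are placeholders)
pg = psycopg2.connect(host="postgres-host", dbname="mydb", user="myuser", password="secret")
pg.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = pg.cursor()
cur.execute("LISTEN table_update;")

# declare the RabbitMQ queue the cluster jobs will consume from
mq = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq-host"))
channel = mq.channel()
channel.queue_declare(queue="db_events", durable=True)

while True:
    # block for up to 5 seconds waiting for a notification
    if select.select([pg], [], [], 5) == ([], [], []):
        continue
    pg.poll()
    while pg.notifies:
        notify = pg.notifies.pop(0)
        channel.basic_publish(exchange="", routing_key="db_events", body=notify.payload)

The Postgres side still needs a trigger that runs NOTIFY table_update (with a payload) when the watched rows change.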

AWS Fargate EKS SonarQube pod with Postgres RDS connection is not working

I have created a SonarQube Fargate pod on EKS which is running, but it is using its own embedded database.
We have a Postgres RDS database. I have updated my SonarQube deployment with the Postgres RDS environment details, but my pod is not working.
The pod shows an error related to vm.max_map_count. (This error seems to be generic; I tried applying the setting on the host machine as well as inside the pod, e.g. sysctl -w vm.max_map_count=262144, but neither option works.)
Please let me know if anyone knows a solution.
Thank you!

How to see/install pg_activity for the Crunchy Data Postgres operator?

I have set up Rancher (RKE) (Kubernetes) for my application.
The application uses Postgres, so I have set up the Crunchy Data Postgres operator and created a Postgres cluster with it.
Everything is fine, but now I want to see pg_activity for my PostgreSQL.
How can I see the activity of the whole Postgres cluster?
You can use the monitoring tools in Rancher to monitor Postgres.
Apart from that, you can open a shell inside the respective database pod and run CLI commands to check the output (see the example after the links below).
In Rancher, you can also use a client tool to connect to the database and run CLI commands to check activity.
Client Docker image: https://hub.docker.com/r/jbergknoff/postgresql-client/
You can also deploy a GUI Docker client on Rancher and use it.
GUI Postgres client: https://hub.docker.com/r/dpage/pgadmin4/
GUI example: https://dataedo.com/kb/query/postgresql/list-database-sessions
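
For example (the pod name here is an assumption; recent Crunchy operator versions run Postgres in a container called database):

# one-off activity query inside the database pod
kubectl exec -it hippo-instance1-abcd-0 -c database -- \
  psql -U postgres -c "SELECT pid, usename, state, query FROM pg_stat_activity;"

# or install the pg_activity top-like tool on any host that can reach the database
pip install pg_activity
pg_activity -h <db-host> -p 5432 -U postgres

pg_stat_activity is the view behind most of these tools, so the same query works from any of the clients above.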

Kong Enterprise on Postgres Master/Slave architecture

I'm installing Kong Enterprise API Management (1.5) and am utilising the Postgres database option. My setup is that I have 2 RedHat servers on premise.
Both have the Docker environment installed.
Both have Kong Enterprise, and initially both instances of Kong talk to their respective local Postgres container.
Nominating one node as the master (and the other as the slave), I successfully set up Postgres replication so that changes made to the master database tables are replicated to the slave Postgres database (I've proven this works).
I now want to re-run the Kong container on my second node, this time setting the KONG_PG_HOST environment variable (in my docker-compose file) to reference the first node. The intention is that, irrespective of which Kong node is processing the request, the live master database is only ever the one on the nominated master.
Starting a shell inside the Kong container, I can ping the first node OK.
Still, if I go to :8002/overview on the second node, it seems that Kong has no route to any database content, as the landing page of the admin portal says no workspaces exist and 'vitals are disabled'.
Pointing the browser at the first node's 8002/overview works fine.
What else do I need to make sure exists for a Kong container on one node to use the Postgres database on the other node? Postgres port 5432 is open on node 1, as I can connect to it remotely, and Postgres replication between both nodes works.
What have I missed?
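
One quick check, sketched under the assumption that the second node's container is up and its KONG_PG_* variables (KONG_PG_HOST, KONG_PG_PORT, KONG_PG_USER, KONG_PG_PASSWORD, KONG_PG_DATABASE) point at node 1; the container name is a placeholder:

# ask this Kong node what it sees in the configured database
docker exec -it <kong-container> kong migrations list

# and look for connection errors in the node's own log
docker logs <kong-container> 2>&1 | grep -i postgres

kong migrations list reports whether this node can reach the database and whether Kong's schema is present there, which separates a connectivity problem from a configuration one.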
thanks -

Deploying Sentry on AWS using Fargate

I am trying to deploy Sentry on AWS Fargate using Terraform. I am also launching a Redis cluster and a Postgres DB in RDS. I can launch the stack, but as this is a new database I need to run the upgrade using an exec command (example command):
docker-compose exec sentry sentry upgrade
and then restart the Sentry service to proceed. How can I achieve this using Fargate?
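
One option is ECS Exec, assuming it is enabled on the service (enable_execute_command = true on the aws_ecs_service resource in Terraform) and the task role has the required SSM permissions; the cluster, task, and container names here are placeholders:

# run a one-off command in the running Sentry container on Fargate
aws ecs execute-command \
  --cluster sentry-cluster \
  --task <task-id> \
  --container sentry \
  --interactive \
  --command "sentry upgrade --noinput"

After the upgrade completes, forcing a new deployment of the service (aws ecs update-service --force-new-deployment) restarts the Sentry containers.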