Install Chef Server 11 with AWS RDS PostgreSQL

AWS now offers PostgreSQL as an RDS service, so I tried to install Chef Server 11 against a PostgreSQL RDS instance by editing the attributes in /opt/chef-server/embedded/cookbooks/chef-server/attributes/default.rb:
default['chef_server']['postgresql']['vip'] = "rds instance endpoint"
and importing the database schema with the following command:
/opt/chef-server/embedded/bin/psql -h "rds instance endpoint" -p 5432 -U "user_name" "database_name" < /opt/chef-server/embedded/service/erchef/lib/chef_db-f086a97/priv/pgsql_schema.sql
But I am not able to get it working; chef-server-ctl reconfigure gives an error:
curl -sf http://127.0.0.1:8000/_status returned 7
Please help me configure Chef Server with an RDS instance.

I think I have solved my problem. The cause was the encrypted password in the erchef config file. I edited
"/opt/chef-server/embedded/cookbooks/chef-server/templates/default/echef.config.rb"
accordingly, and it now seems to be working perfectly fine.
Thanks

The chef-server-rds cookbook on github can be used to install Chef Server 11 with AWS RDS.
Given an IAM key and secret, it will provision the RDS instance if it doesn't exist in the account, initialize the Chef schema, install the appropriate platform-specific chef-server Omnibus package, and perform the initial configuration of Chef Server on an AWS EC2 Ubuntu instance.
Using Postgres on Amazon RDS offloads database resource usage from the chef-server host. It also allows database operations such as scaling, backup, and restore to be performed independently of the chef-server installation. Similar configurations can be written for other database service providers.

Related

Connect PostgreSQL to RabbitMQ

I'm trying to get RabbitMQ to monitor a PostgreSQL database so that messages are queued when database rows are updated. The eventual plan is to feed this message queue into an AWS EKS (Elastic Kubernetes Service) cluster as a job.
I've read many approaches to this, but as a newcomer to RabbitMQ they are still confusing, and many seem to have been written more than five years ago, so I'm not sure whether they will still work with current versions of Postgres and RabbitMQ.
I followed this guide about installing the area51/notify-rabbit Docker container, which can connect the two via a Node app, but when I ran the container it immediately stopped and didn't seem to do anything.
There is also this guide, which uses a Go app to connect the two, but I'd rather not use Go outside of a Docker container.
Finally, there is this method of installing the pg_amqp extension, from a repository that hasn't been updated in years, which allows a direct connection from PostgreSQL to RabbitMQ. However, when I followed it and attempted to install pg_amqp on my Postgres DB (PostgreSQL 12), I could no longer connect to the database with psql, getting the classic error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
My current setup is a RabbitMQ server installed in a Docker container on an AWS EC2 instance, which I can access via the internet. I ran the following to install and run it:
docker pull rabbitmq:3-management
docker run --rm -p 15672:15672 -p 5672:5672 rabbitmq:3-management
The PostgreSQL database is running on a separate EC2 instance, and both instances have the required ports open so that each can reach the other.
I have also looked into using Amazon SQS for this, but I couldn't find any information on linking PostgreSQL to it. I haven't seen any guides or Stack Overflow questions on this since 2017/18, so I'm wondering whether this is still the best way to create a message broker for a Kubernetes system. Any help or pointers would be much appreciated.
In the end, I decided the best thing to do was to create some simple Python scripts to do the LISTEN/NOTIFY steps and route traffic from PostgreSQL to RabbitMQ, based on the following code: https://gist.github.com/kissgyorgy/beccba1291de962702ea9c237a900c79
I set the scripts up inside Docker containers running in my Kubernetes cluster so that they are automatically restarted if they fail. A minimal sketch of the approach is shown below.
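The bridge boils down to listening on a Postgres notification channel and republishing each payload to a RabbitMQ queue. The following is only a rough sketch of that idea, assuming psycopg2 and pika; the hostnames, credentials, channel name (table_update) and queue name (postgres_events) are placeholders and are not taken from the gist:

# Minimal LISTEN/NOTIFY -> RabbitMQ bridge (all connection details are placeholders).
import json
import select
import pika        # RabbitMQ client
import psycopg2    # PostgreSQL client
import psycopg2.extensions

# Listen for NOTIFY events on a channel that database triggers publish to.
pg = psycopg2.connect(host="postgres-host", dbname="mydb", user="myuser", password="secret")
pg.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = pg.cursor()
cur.execute("LISTEN table_update;")  # channel name is an assumption

# Publish each notification to a durable RabbitMQ queue.
mq = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq-host"))
channel = mq.channel()
channel.queue_declare(queue="postgres_events", durable=True)

while True:
    # Block until the Postgres socket becomes readable, then drain pending notifications.
    if select.select([pg], [], [], 60) == ([], [], []):
        continue  # timeout, loop again
    pg.poll()
    while pg.notifies:
        note = pg.notifies.pop(0)
        body = json.dumps({"channel": note.channel, "payload": note.payload})
        channel.basic_publish(exchange="", routing_key="postgres_events", body=body)

On the database side, a trigger that calls pg_notify('table_update', row_to_json(NEW)::text) on insert/update is what feeds the channel.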

How to see/install pg_activity for the Crunchy Data Postgres Operator?

I have set up Rancher (RKE) Kubernetes for my application.
The application uses Postgres, so I set up the Crunchy Data postgres operator and created a Postgres cluster with it.
Everything is fine, but now I want to see pg_activity for my PostgreSQL.
How can I see the activity of the whole Postgres cluster?
You can use the monitoring tools in Rancher to monitor Postgres.
Apart from that, you can exec into the respective database pod and run the CLI command there to check the output.
In Rancher, you can also use a client tool to connect to the database and run the CLI command to check the activity; a minimal query sketch is shown after the links below.
Client Docker image: https://hub.docker.com/r/jbergknoff/postgresql-client/
You can also deploy a GUI Docker client on Rancher and use it.
GUI PostgreSQL client: https://hub.docker.com/r/dpage/pgadmin4/
GUI example: https://dataedo.com/kb/query/postgresql/list-database-sessions#:~:text=Using%20pgAdmin,all%20connected%20sessions%20(3).
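pg_activity itself is essentially a terminal UI over pg_stat_activity, so if you only need a quick look at the current sessions you can query that view directly from any client. A minimal sketch, assuming psycopg2; the service hostname and credentials are placeholders, not Crunchy operator defaults:

# Quick look at current Postgres sessions via pg_stat_activity (placeholder connection details).
import psycopg2

conn = psycopg2.connect(
    host="hippo-primary.postgres-operator.svc",  # assumed in-cluster service name
    dbname="postgres",
    user="postgres",
    password="secret",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT pid, usename, datname, state, query FROM pg_stat_activity ORDER BY state;")
    for pid, user, db, state, query in cur.fetchall():
        print(pid, user, db, state, query)
conn.close()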

Use Hasura with Google Cloud Run and Google Cloud SQL

The docs state that Hasura needs the Postgres connection string in the HASURA_GRAPHQL_DATABASE_URL env var.
Example:
docker run -d -p 8080:8080 \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://username:password@hostname:port/dbname \
hasura/graphql-engine:latest
It looks like my problem is that the instance connection name for Google Cloud SQL, which has the form PROJECT_ID:REGION:INSTANCE_ID, is not a TCP hostname.
From the cloud run docs (https://cloud.google.com/sql/docs/postgres/connect-run) I got this example:
postgres://<db_user>:<db_pass>@/<db_name>?unix_sock=/cloudsql/<cloud_sql_instance_name>/.s.PGSQL.5432 but it does not seem to work. Ideas?
As a workaround I'm currently adding cloud_sql_proxy to the container so that I can connect to TCP 127.0.0.1:5432, but I'm looking for a direct connection to Google Cloud SQL.
// EDIT: Thanks for the comments; beta8 did most of the trick, but I had also missed the --set-cloudsql-instances parameter: https://cloud.google.com/sdk/gcloud/reference/beta/run/deploy#--set-cloudsql-instances
My full cloud-run command:
gcloud beta run deploy \
--image gcr.io/<PROJECT_ID>/graphql-server:latest \
--region <CLOUD_RUN_REGION> \
--platform managed \
--set-env-vars HASURA_GRAPHQL_DATABASE_URL="postgres://<DB_USER>:<DB_PASS>@/<DB_NAME>?host=/cloudsql/<PROJECT_ID>:<CLOUD_SQL_REGION>:<INSTANCE_ID>" \
--timeout 900 \
--set-cloudsql-instances <PROJECT_ID>:<CLOUD_SQL_REGION>:<INSTANCE_ID>
As of v1.0.0-beta.8, which has better support for Postgres connection string parameters, I've managed to make the Unix socket connection work from Cloud Run to Cloud SQL, without embedding the proxy in the container.
The connection should look something like this:
postgres://<user>:<password>@/<database>?host=/cloudsql/<instance_name>
Notice that the client will add the suffix /.s.PGSQL.5432 for you.
Make sure you have also added the Cloud SQL Client permission.
If the Hasura database requires that exact connection string format, you can use it. However, you cannot use Cloud Run's Cloud SQL support. You will need to whitelist the entire Internet so that your Cloud Run instance can connect. Cloud Run does not publish a CIDR block of addresses. This method is not recommended.
The Unix socket method is for the Cloud SQL Proxy support built into Cloud Run. This is the connection method used inside your container when Cloud Run manages the connection to Cloud SQL. Note that with this method, IP-based hostnames are not supported for connecting your client to Cloud Run's Cloud SQL Proxy.
You can embed the Cloud SQL Proxy directly in your container. Then you can use 127.0.0.1 as the hostname part of the connection string. This requires that you create a shell script as your Cloud Run entrypoint to launch both the proxy and your application. Based on your scenario, I recommend this method.
The Cloud SQL Proxy is written in Go and the source code is published.
If you choose to embed the proxy, don't forget to add the Cloud SQL Client role to the Cloud Run service account.
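As a quick sanity check of the socket path outside of Hasura, a direct connection from Python looks roughly like the following; libpq treats a host value that starts with a slash as a socket directory and appends /.s.PGSQL.5432 itself. The project, region, instance, and credentials below are placeholders:

# Sketch: connect to Cloud SQL over the /cloudsql Unix socket that Cloud Run mounts.
import psycopg2

conn = psycopg2.connect(
    host="/cloudsql/my-project:us-central1:my-instance",  # placeholder instance connection name
    dbname="mydb",
    user="myuser",
    password="secret",
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()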

How to connect to the database in ddev?

I successfully installed ddev for TYPO3 and now want to connect to the MariaDB database. But what are the credentials? If I ssh into the container and try to connect, I get a password prompt.
Access via external tools is described in Using Developer Tools with ddev.
Specifically you need to execute the following command to get the necessary credentials:
ddev describe
When I upgraded ddev and deleted all the containers, everything stayed the same except that my new port number was incremented by one.
mariadb
Host: localhost:portNumberIncrementedByOne
User/Pass: 'db/db'
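For example, connecting from the host with a small script, using the published port that ddev describe reports (a minimal sketch; pymysql is just one possible client, and the port below is a placeholder):

# Sketch: connect to the ddev MariaDB from the host; take the port from `ddev describe`.
import pymysql

conn = pymysql.connect(
    host="127.0.0.1",
    port=32768,        # placeholder; use the port shown by `ddev describe`
    user="db",
    password="db",
    database="db",
)
with conn.cursor() as cur:
    cur.execute("SELECT VERSION();")
    print(cur.fetchone()[0])
conn.close()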

How can I connect a Docker container in AWS to an RDS instance?

How can I --link a Docker container (Odoo) running on a plain EC2 instance, or via EB, to an RDS instance in AWS? I tried loading a custom config file with the location of the server, but that didn't work. In EB I created an app with a Postgres DB and successfully deployed my docker.aws.json, but I cannot connect to the web interface of the application.
When I check the Docker logs of the container, it says everything started fine but expects the DB on localhost.
So, as I said, my question is: how can I tell a Docker container to link to an RDS instance rather than to another Docker container à la --link db:db?
If you created a database in AWS Elastic Beanstalk, you can access it using environment variables that EB sets for you. See examples in the documentation: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-rds.html (Python example)
The environment variable containing the RDS DB hostname is called RDS_HOSTNAME. See example: https://github.com/awslabs/eb-demo-php-simple-app/tree/docker-apache
It's not possible to use a link as your RDS DB is not a Docker container.
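Instead of a link, the container simply reads those environment variables at startup and connects over the network. A minimal sketch using the RDS_* variables Elastic Beanstalk documents (psycopg2 is just one possible client):

# Sketch: build the DB connection from the RDS_* environment variables Elastic Beanstalk provides.
import os
import psycopg2

conn = psycopg2.connect(
    host=os.environ["RDS_HOSTNAME"],
    port=os.environ["RDS_PORT"],
    dbname=os.environ["RDS_DB_NAME"],
    user=os.environ["RDS_USERNAME"],
    password=os.environ["RDS_PASSWORD"],
)
print("Connected to", os.environ["RDS_HOSTNAME"])
conn.close()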