Heroku: Enabling MTLS on postgres... failed! Not Found - postgresql

I am trying to enable mutual TLS (mTLS) on my PostgreSQL database, but whenever I run the command heroku data:mtls:create DATABASE_NAME --app APP_NAME (with my own DATABASE_NAME and APP_NAME, obviously) I get the error Enabling MTLS on DATABASE_NAME... failed! Not Found. I am not sure why I am getting this error: I installed the MTLS plugin just fine from the same command line, I am logged in to Heroku, and when I run heroku apps my APP_NAME comes up just fine. If anybody could help me out with this, that would be amazing!

Related

GCloud auth login: Ports 8085 and 8184 possibly blocked

I have installed, uninstalled, and reinstalled GCloud on macOS Monterey (M1 chipset) and I'm facing the following situation: when I run gcloud auth login in the Terminal, it displays this message:
WARNING: Failed to start a local webserver listening on any port between 8085 and 8184. Please check your firewall settings or locally running programs that may be blocking or using those ports.
WARNING: Defaulting to --no-browser mode.
You are authorizing gcloud CLI without access to a web browser. Please run the following command on a machine with a web browser and copy its output back here. Make sure the installed gcloud version is 372.0.0 or newer.
I have tried installing in many ways; the last one was this:
curl https://sdk.cloud.google.com | bash
exec -l $SHELL #restart shell
But I am still facing that message.
Could anybody help me with this?
This happens because my Internet provider has blocked these ports. Some fixes will have to be made to the router.
Patch solution for this:
gcloud auth login --no-launch-browser
Follow the instructions given in the Terminal.
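In other words, the flow with that flag looks roughly like this (a sketch; the exact prompts can vary by gcloud version):
gcloud auth login --no-launch-browser   # prints a long accounts.google.com URL
# open that URL in any browser (on any machine), sign in,
# then paste the verification code it gives you back into the Terminal prompt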

creating a postgresql database back end for a new Label Studio project

I am creating a local Label Studio server to host images to annotate in our office. I would like the database back end to be PostgreSQL rather than SQLite, and to be located in a particular directory (not the default, and not the same as the 'data-dir'). I have got a test server working across the network, with various machines annotating images on the server, but the back end for this test was SQLite.
Everything I've tried to get a PostgreSQL back-end db has failed for various reasons. Some commands result in an SQLite db (occasionally with the name 'postgresql') located in my required directory; others produce postgres/psycopg2 errors, but I think they're leading me up a garden path.
The host machine is running Ubuntu 20.04 LTS and serves another PostgreSQL db over the network using other APIs. The PostgreSQL version running is 12.9.
I have created a conda environment and pip installed Label Studio as the documentation suggested.
Here's what I've tried:
Start the conda environment. Follow the instructions to assign environment variables from https://labelstud.io/guide/storedata.html#PostgreSQL-database, which at the time of writing are:
DJANGO_DB=default
POSTGRE_NAME=postgres
POSTGRE_USER=postgres
POSTGRE_PASSWORD=
POSTGRE_PORT=5432
POSTGRE_HOST=db
Then a few variations on the start command (I didn't include the backslashes when running them; they're just here for readability/comparability):
label-studio start --init \
-db postgresql \
--database /path/to/label-studio/databases/newdb \
--data-dir /path/to/label-studio/media_dirs/test_proj
result: db is where expected, but:
file newdb
gives "newdb: SQLite 3.x database, last written using SQLite version 3038002"
label-studio start --init \
--database /path/to/label-studio/databases/newdb \
-db postgresql \
--data-dir /path/to/label-studio/media_dirs/test_proj
result: a db at the specified path named 'postgresql', and still an SQLite db. This seems to mirror the mistake mentioned at: https://github.com/heartexlabs/label-studio/issues/1660
I have also tried the above two commands with the '--init' argument omitted, with the same results.
Then I tried adding something to the front of the command, as suggested at the same link above:
DJANGO_DB=default label-studio start \
--database /path/to/label-studio/databases/newdb \
--data-dir /path/to/label-studio/media_dirs/test_proj
result: psycopg2.OperationalError: FATAL: password authentication failed for user "postgres"
FATAL: password authentication failed for user "postgres"
DJANGO_DB=default POSTGRE_PASSWORD= label-studio start \
--database /path/to/label-studio/databases/newdb \
--data-dir /path/to/label-studio/media_dirs/test_proj
result: psycopg2.OperationalError: fe_sendauth: no password supplied
Any help and resolution would be highly appreciated.
Also, I can't tag this with 'label-studio' because I'm not quite at the required reputation to create a new tag, so if anyone who can feels like doing so, pleaseandthankyou!
Your last option was closer than all the others. Have you tried running LS like this:
DJANGO_DB=default POSTGRE_NAME=<postgres_name> POSTGRE_USER=<postgres_user> POSTGRE_PASSWORD=<password> POSTGRE_PORT=<db_port> POSTGRE_HOST=<db_host> label-studio
Of course, you have to run the Postgres service yourself, configure it properly, create the DB <postgres_name>, create the user <postgres_user>, set the password <password>, and grant access rights to that user. Also don't forget to specify <db_host> (localhost?) and <db_port> (5432?).
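For reference, that one-time setup might look roughly like the sketch below. This is only an illustration: the database name labelstudio, user ls_user, and password changeme are placeholders, not values from the question.
# run once on the database host (placeholder names: labelstudio / ls_user / changeme)
sudo -u postgres psql -c "CREATE USER ls_user WITH PASSWORD 'changeme';"
sudo -u postgres psql -c "CREATE DATABASE labelstudio OWNER ls_user;"
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE labelstudio TO ls_user;"
# then start Label Studio against that database
DJANGO_DB=default POSTGRE_NAME=labelstudio POSTGRE_USER=ls_user POSTGRE_PASSWORD=changeme \
POSTGRE_PORT=5432 POSTGRE_HOST=localhost label-studio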

How to reset the DB on each deployment on heroku?

So currently I'm working on a project on Heroku with Drupal, and my issue is that I want to reset the database each time I deploy to master. Yes, I know it's not ideal, but it's a development environment: I'm working on a Drupal plugin, and it would be nice if, whenever changes happen, it could just reset to a known state.
But when I try to connect using psql and some variables, I just get password authentication failed for user, even though I know it's the right password because I got it from Heroku itself.
Currently, I have tried using the console to make a connection so I could run a DROP TABLE command and afterwards import an SQL file with the basic setup (made with pg_dump), put that into a .sh script, and run it with a release: entry in a Procfile.
So far I have this as a release.sh file, which I have only tried in the console on Heroku:
PGHOST=HOST PGPORT=5432 \
PGDATABASE=DB \
PGUSER=USER PGPASSWORD=SOMEPASS \
psql
Try the command below to reset the DB:
heroku pg:reset DATABASE_URL
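If you want this to run on every deploy rather than by hand, one option is to wire it into the release phase, roughly as sketched below. The file names release.sh and basic_setup.sql are assumptions, and DATABASE_URL is the config var Heroku sets for the attached Postgres add-on:
# Procfile
release: bash release.sh

# release.sh -- wipe the public schema, then reload the pg_dump output
psql "$DATABASE_URL" -c "DROP SCHEMA public CASCADE; CREATE SCHEMA public;"
psql "$DATABASE_URL" -f basic_setup.sql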

Password authentication failed error on running laravel migration

I am trying to deploy my Laravel app on a DigitalOcean droplet. The droplet is set up with nginx, PHP 7, and Postgres; I followed the tutorials from DigitalOcean on how to set them up. Then I tried to follow this tutorial on how to deploy the Laravel app using a git hook and so on.
Now the app itself is up and running, and I can access the pages and all. But I can't run php artisan migrate. I have been changing the database username, name, and password in the .env file, but I always get the exact same error:
[Illuminate\Database\QueryException]
SQLSTATE[08006] [7] FATAL: password authentication failed for user "deploy"
FATAL: password authentication failed for user "deploy" (SQL: select * from information_schema.tables where table_schema = apollo and table_name = migrations)
[Doctrine\DBAL\Driver\PDOException]
SQLSTATE[08006] [7] FATAL: password authentication failed for user "deploy"
FATAL: password authentication failed for user "deploy"
[PDOException]
SQLSTATE[08006] [7] FATAL: password authentication failed for user "deploy"
FATAL: password authentication failed for user "deploy"
Here is my latest .env config for the database:
DB_CONNECTION=pgsql
DB_HOST=127.0.0.1
DB_PORT=5432
DB_USERNAME=postgres
DB_DATABASE=postgres
DB_PASSWORD=[my password]
DB_SCHEMA=public
As you can see, what is so absurd is that even with DB_USERNAME set to postgres, the error still says for user "deploy".
I have been googling around and the closest thing, or so I thought, was to update some configuration in /etc/postgresql/9.5/main/postgresql.conf, namely setting listen_addresses = '*'. I updated it, restarted the postgres service, and still get the exact same error.
Can anyone help me point out what I missed?
Thanks.
This happens due to caching.
When you run php artisan config:cache, it caches the configuration files. Whenever things change, you need to run it again to update the cached files. But nothing gets cached if you never run that command.
This is OK for production, since the config doesn't change that often. But during staging or dev, you can just disable caching by clearing the cache and not running the cache command.
So, just run php artisan config:clear, and don't run the cache command again afterwards, to avoid caching.
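On the droplet that means something like this (just the sequence, assuming you're in the app's root directory):
php artisan config:clear   # drop any cached config so the current .env values are read
php artisan migrate        # retry the migration with the fresh configuration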

How can I restart my openshift postgres cartridge?

My app has fallen over as it can't connect to the postgres DB and when I try to connect to the DB via ssh and psql I get the following message:
psql: could not connect to server: Connection refused
Is the server running on host "<GEAR_ID>-<NAMESPACE>.rhcloud.com" (<IP_ADDRESS>) and accepting
TCP/IP connections on port <PORT_NUMBER>?
Running rhc app show --state prints:
Cartridge jbossas-7, haproxy-1.4 is started
Cartridge postgresql-9.2 is started
Also, running rhc app show shows nothing unusual.
I can't telnet to the above IP_ADDRESS & PORT_NUMBER, which kinda looks like communication has been broken between the 2 gears.
Any ideas?
I had the same problem. Using pg_ctl instead of the rhc commands fixed it for me.
$ rhc ssh <appname>
[...rhcloud.com ...]\> pg_ctl restart
pg_ctl: old server process (PID: 20034) seems to be gone
starting server anyway
server starting
To restart your entire application:
rhc app restart <app_name>
To restart just your postgresql cartridge:
rhc cartridge restart <cart_type> --app <app_name>
You can get the cart type by running
rhc app show <app_name> --gears
And looking for the cartridge name under the "cartridges" heading
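For example, with the postgresql-9.2 cartridge mentioned in the question, that would be something like:
rhc cartridge restart postgresql-9.2 --app <app_name>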
Ok, so I managed to work around this issue but man it was a PITA.
Since I couldn't find any useful help on the web around this problem, I ended up creating a new app based on my old one and using pg_dump and psql to dump and restore the db from the old application into my new app.
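For anyone in the same spot, that dump/restore can be done roughly like this (a sketch; the host, port, user, and db name placeholders come from each app's environment, and backup.sql is just an example file name):
pg_dump -h <old_db_host> -p <old_db_port> -U <old_db_user> <old_db_name> > backup.sql
psql -h <new_db_host> -p <new_db_port> -U <new_db_user> <new_db_name> < backup.sql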
I'm still none-the-wiser as to why the original app was no longer able to communicate from the main jboss gear to the postgresql gear, even though the postgres server was up and running.
Perhaps (hopefully) someone from openshift will want to look into this. If so I'll keep my old broken app around for a while.