fusionauth-app Docker without MySQL superuser credentials - docker-compose

I would like to connect to a hosted remote MySQL DB (MariaDB 10.1.39). I use the available FusionAuth Docker images (app and search) from Docker Hub and the published docker-compose file. The DB hosting provider does not grant superuser credentials, but the assigned user rights should be sufficient to maintain the tables of the schema. Unfortunately, MySQL superuser credentials seem to be mandatory for the Docker container.
I imported the DB dump of a local (dockerized) MariaDB (10.1.40) into the remote DB. Username and schema name are the same locally and remotely. I tried omitting DATABASE_ROOT_USER in the docker-compose YAML, but this approach ends in maintenance mode.
Is there a way to connect to a remote MySQL DB without superuser credentials?

We will be enhancing our automated setup to better support external DB service providers; see https://github.com/FusionAuth/fusionauth-issues/issues/95
Your current option is to create the schema manually: https://fusionauth.io/docs/v1/tech/installation-guide/fusionauth-app#advanced-installation
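If the provider has already created the schema and granted you rights on it, the manual route boils down to loading FusionAuth's DDL with your own (non-superuser) credentials. A rough sketch, assuming a mysql client; the host, user, schema, and the schema file name are placeholders, and the actual DDL file is described in the linked guide:

$ mysql --host=your-db-host --user=your_user --password your_schema < fusionauth-schema.sql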
You may also try using your regular user credentials in the superuser fields; it may work.
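For that workaround, a minimal sketch of the fusionauth-app service in the docker-compose YAML could look like the following. DATABASE_ROOT_USER is the variable mentioned above; treat the other variable names as assumptions based on the published compose file of that era and check your version's file. Host, schema, and credentials are placeholders:

fusionauth:
  image: fusionauth/fusionauth-app
  environment:
    # JDBC URL pointing at the hosted MariaDB (placeholder host and schema)
    DATABASE_URL: jdbc:mysql://your-db-host:3306/your_schema
    # Workaround: put the regular, non-superuser credentials
    # into the superuser fields as well
    DATABASE_ROOT_USER: your_user
    DATABASE_ROOT_PASSWORD: your_password
    DATABASE_USER: your_user
    DATABASE_PASSWORD: your_password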

Related

pgAdmin4: How Can I Get All of my Heroku Postgres Databases to Display in pgAdmin?

I have several Heroku Postgres databases that I want to access using pgAdmin4 (version 4.2). I used this link to learn how to hide the databases I don't have access to, and I have all the databases entered in DB restriction. However, when I connect to the Heroku database, only the maintenance database appears in the list of databases. I updated the password file in the advanced properties with my .pgpass file, but it still only lists the maintenance database.
I've searched here, the Database Administrators Stack Exchange, and the pgAdmin instructions but have not found anything.
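(For reference, each line of a .pgpass file uses the standard Postgres format hostname:port:database:username:password, with * usable as a wildcard for a field. The line below is a placeholder shaped like a typical Heroku Postgres entry:

ec2-xx-xxx-xx-xx.compute-1.amazonaws.com:5432:dxxxxxxxxxxxxxx:your_user:your_password
)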

Setting session_replication_role for GCP Cloud SQL

I am trying to run SET session_replication_role = 'replica'; in a GCP Cloud SQL Postgres 9.6 instance; however, I'm encountering the error ERROR: permission denied to set parameter "session_replication_role", even though the postgres user is a Cloud SQL admin user. Do I have to spin up my own self-managed instance to solve the problem, or is there a way around it?
Unfortunately, this has nothing to do with whether the service is in Beta or not: you can't set session_replication_role in GCP Cloud SQL.
You need superuser privileges for that operation, but GCP Cloud SQL only grants cloudsqlsuperuser privileges. Its features are as follows:
When you create a new Cloud SQL for PostgreSQL instance, the default postgres user is already created for you, though you must set its password.
The postgres user is part of the cloudsqlsuperuser role, and has the following attributes (privileges): CREATEROLE, CREATEDB, and LOGIN. It does not have the SUPERUSER or REPLICATION attributes.
You can find much more information in this blog post.
From what I was looking at, since the service is currently in Beta, there are still some features that are not available, such as this one. Therefore we would need to wait a bit longer for Google to release the final version of their product.
We also encountered the same issue. This is because the postgres user does not have the REPLICATION permission.
To resolve this issue:
a) Log in with the postgres user.
b) Since the postgres user has the CREATEROLE permission, create a new user with the command below:
CREATE USER <YOUR_USER> WITH PASSWORD '<YOUR_PASSWORD>' CREATEDB CREATEROLE REPLICATION IN GROUP cloudsqlsuperuser;
Replace <YOUR_USER> with your user name and <YOUR_PASSWORD> with your password.
c) Log in with the newly created user and run:
SET session_replication_role = 'replica';
If you see the response SET, you are good to go.
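As a quick permission check, step c) can be done in one shot from the shell; the host and database name here are placeholders:

$ psql "host=<YOUR_INSTANCE_IP> dbname=<YOUR_DB> user=<YOUR_USER>" -c "SET session_replication_role = 'replica';"
SET

Note that SET only affects the current session, so this invocation merely verifies that the new user has the permission; your replication tooling must issue the same SET on its own connection.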

What is the default dbuser and dbpassword for a MongoDB database provisioned by Heroku and MongoLab?

I am new to Heroku and MongoDB. I created a Heroku app which has an added-on MongoDB by MongoLab.
Everything was set up automatically by Heroku. When I navigated to the MongoLab database manager page (SSO protected), it showed a standard MongoDB URL such as:
mongodb://<dbuser>:<dbpassword>@dsxxxxxx.mongolab.com:39674/heroku_xxxxxxxx
Those "x" letters represent numbers.
I never specified a dbuser or dbpassword at all. So what are the dbuser and dbpassword?
None of these answers are correct. If you want to know the URI of your database, go to your project in Heroku, open Settings, click Reveal Config Vars, and you will find the URI there.
In your terminal, navigate to your project folder and type $ heroku config:get MONGODB_URI to get your Heroku provisioned username and password.
MongoLab provides you with database hosting services using MongoDB as the database engine. This means you have to have a subscription to their services in order to have access to a MongoDB database. Once you sign up for one of their plans, you will have your own database username and database password to authenticate database connections with.
So dbuser will be your MongoDB username and dbpassword will be your MongoDB password. You use these to gain access to your own databases and collections.
https://mongolab.com/plans/pricing/
When you create a MongoLab add-on for your Heroku app, a MONGOLAB_URI environment variable is automatically created with connection info for your database add-on:
https://devcenter.heroku.com/articles/mongolab#getting-your-connection-uri
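So the credentials never have to be chosen by you; they are generated and embedded in the URI. A quick way to inspect them from the shell (the output line is illustrative and the values are placeholders):

$ heroku config:get MONGOLAB_URI
mongodb://heroku_xxxxxxxx:generated_password@dsxxxxxx.mongolab.com:39674/heroku_xxxxxxxx

Everything between the // and the @ is the dbuser:dbpassword pair.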

Enabling geodatabase in AWS RDS PostgreSQL instance

Superuser permission is required to create a geodatabase in PostgreSQL.
However, in an AWS RDS instance we only receive the rds_superuser permission, and rds_superuser is not a true superuser.
Is there a way to enable a geodatabase in AWS RDS PostgreSQL with the rds_superuser permission?
You need to create the database and the sde login manually, using e.g. pgAdmin, and grant the rds_superuser group role to the sde login. Also create a schema named sde in your database, and make the sde login the owner of that schema.
Then you can create a .sde database connection in ArcCatalog using the sde login and, importantly, the *.rds.amazonaws.com hostname. Finally, you can run the Enable Enterprise Geodatabase tool using this connection as your input.
This only works if you connect to the database using the *.rds.amazonaws.com hostname; apparently, ESRI uses the hostname to determine whether the database in question is an RDS server.
Once you've enabled the geodatabase, you can connect to it with .sde connections using other DNS aliases as well.
Refer to the ESRI documentation for further details: http://server.arcgis.com/en/server/latest/cloud/amazon/create-geodatabase-in-amazon-rds-for-postgresql.htm
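A minimal SQL sketch of the manual preparation described above, run as the RDS master user; the database name and password are placeholders, while the login and schema are both named sde as the ESRI tooling expects:

-- Create the sde login and give it the rds_superuser group role
CREATE ROLE sde LOGIN PASSWORD 'your_password';
GRANT rds_superuser TO sde;
-- Create the database, then a schema named sde owned by the sde login
CREATE DATABASE your_geodatabase;
\c your_geodatabase
CREATE SCHEMA sde AUTHORIZATION sde;
-- (\c is a psql meta-command for switching to the new database)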

Postgres accepts any password

I have the following code which connects to a database on my remote server (the connection script resides on the same server):
Database::$ErrorHandle = new PDO('pgsql:host=111.222.33.44;dbname=mydatabase;', 'postgres', 'mypassword', $db_settings);
The problem is I can change the password to be anything at all and the connection is still made! Like seriously what the hell!?!
Can my database be connected to (provided you know the IP and DB name) by anyone from a PHP script running on a different server?
How can I enforce passwords? I have looked at the following Stack Overflow page and did what it said, but still no luck:
How to change PostgreSQL user password?
I am running Ubuntu 12.04 server with PHP 5.5 and Apache2
Of course your PostgreSQL database can be configured to accept connections only from authenticated users, and even only from certain users (roles in Postgres) from certain IPs/sockets.
Some considerations:
Do you see data, or can you just connect to the server? Can you list the databases?
Look at your pg_hba.conf and set up the proper permissions, per role, per database, per source (see the example below).
Did you grant access to mydatabase to everyone? Which roles did you grant access to?
Does the database have its tables in the public schema, with access granted to public?
Yes, with this configuration everyone who knows your IP and database name can connect to your database.
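For illustration, a pg_hba.conf that actually enforces passwords might contain lines like the following; the database, user, and address values are placeholders, and md5 was the usual password method on PostgreSQL of that era:

# TYPE  DATABASE    USER      ADDRESS       METHOD
# local connections for the postgres user via Unix socket
local   all         postgres                peer
# remote connections to mydatabase must present a valid md5 password
host    mydatabase  postgres  0.0.0.0/0     md5
# a catch-all "trust" line like the one below is the classic reason
# that any password is accepted; remove it if present:
# host  all         all       0.0.0.0/0     trust

After editing pg_hba.conf, reload PostgreSQL for the change to take effect (e.g. sudo service postgresql reload on Ubuntu 12.04).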