Running MariaDB instead of PostgreSQL with FusionAuth on Docker - postgresql

First of all, I'm really sorry for my English. I started building a FusionAuth application on my Windows PC a few days ago. For this project I used MariaDB. Now I have bought a vServer, and my plan is to run FusionAuth with the help of Docker.
After installing everything and following this tutorial: https://fusionauth.io/docs/v1/tech/installation-guide/docker
I had to change the .env file. But there you can only set a username and password for POSTGRES...
I don't really know what to do, because MariaDB should work with FusionAuth.
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
I would be grateful for any help!

MariaDB is no longer fully compatible with MySQL, so FusionAuth does not officially support MariaDB: we use modern MySQL functions and SQL. However, if you manage to get MariaDB working, post your solution to our forums (https://fusionauth.io/community/forum/) to let the community know.
We recommend using PostgreSQL for FusionAuth if possible, but MySQL also works. If you are going to use MySQL, you'll need to modify the Docker Compose file to use the MySQL Docker container instead of PostgreSQL.
The MySQL Docker container is documented here: https://hub.docker.com/_/mysql
Once you have MySQL running, you'll configure FusionAuth to connect to it using the environment variables that are documented here: https://fusionauth.io/docs/v1/tech/reference/configuration
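As a rough sketch, the database service in docker-compose.yml could be swapped out along these lines. The service name db, the credentials, and the exact FusionAuth variable names are assumptions here; verify them against the configuration reference linked above, since they vary between versions:

```yaml
version: '3'

services:
  db:
    image: mysql:8.0                 # replaces the postgres image
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder; use real credentials, e.g. via .env
      MYSQL_DATABASE: fusionauth
    volumes:
      - db_data:/var/lib/mysql

  fusionauth:
    image: fusionauth/fusionauth-app:latest
    depends_on:
      - db
    environment:
      # JDBC URL pointing at the MySQL container instead of Postgres;
      # variable names assumed from the configuration reference above
      DATABASE_URL: jdbc:mysql://db:3306/fusionauth
      DATABASE_ROOT_USERNAME: root
      DATABASE_ROOT_PASSWORD: example
    ports:
      - "9011:9011"

volumes:
  db_data:
```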

Related

Is there a way to copy a postgres db to heroku?

I have a Django project that I was initially running on PythonAnywhere with a Postgres database on ElephantSQL. However, PythonAnywhere doesn't support ASGI, so I am migrating the project over to Heroku. Since Heroku either strongly prefers or mandates use of its own Postgres offering, I need to migrate the data off ElephantSQL. However, I'm really coming up short on how... I tried running pg_dumpall, but it didn't seem to want to work and/or the file disappeared into the ether. I'm at a loss for how to make this migration... If anyone could help, I'd appreciate it so much. T-T
After hours of searching and scouring Heroku's documentation, I found it by running heroku pg:push --help.
For a locally running server, run
heroku pg:push '<db_name>' <heroku_db_name> --app <app_name>
For a hosted one, run
heroku pg:push <postgres_link> <heroku_db_name> --app <app_name>
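For example, with made-up names (a local database appdb, a Heroku add-on database named postgresql-sushi-12345, and an app called my-django-app, all hypothetical placeholders), the local push would look like:

```shell
# push the local Postgres database "appdb" into the Heroku add-on database;
# all three names below are placeholders for your own values
heroku pg:push appdb postgresql-sushi-12345 --app my-django-app
```

Note that pg:push refuses to overwrite a non-empty remote database, so you may need to run heroku pg:reset on the target first.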

Unable to use PostGIS on Bluemix Compose for PostgreSQL

I have a Compose for PostgreSQL service on IBM Bluemix which isn't letting me run PostGIS functions from my Cloud Foundry Rails app. I have run "CREATE EXTENSION PostGIS;" and I have also added the adapter to database.yml. Compose for PostgreSQL says PostGIS comes installed by default.
I am using Ruby on Rails with the rgeo gem and the error is
ERR NoMethodError: undefined method `st_point' for #
Can you please let me know if there is anything I need to do to get PostGIS working?
Please raise a support request asking for the PostGIS plugin to be enabled on your Compose instance.
Answered my own question. The problem was with the rgeo gem and the adapter. I needed the postgis:// adapter for working with the gem.
Bluemix does not allow you to change the adapter in its connection strings; it will always be postgresql. To get around this, I set a CUSTOM_DATABASE_URL environment variable with the connection string postgis://<username>:<password>@<host>:<port>/<db_name>. Using the cf client, this would look like
cf set-env <app-name> CUSTOM_DATABASE_URL postgis://<username>:<password>@<host>:<port>/<db_name>
Then, in the command for my container in manifest.yml, I prepended setting DATABASE_URL from CUSTOM_DATABASE_URL, specifically
DATABASE_URL=$CUSTOM_DATABASE_URL &&.....
It's a workaround for now, until Bluemix allows us to change the adapter in the connections.
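In manifest.yml terms, the prepended assignment might look roughly like this (the app name and start command are placeholders, and the prepend mirrors the workaround described above):

```yaml
applications:
  - name: my-rails-app   # placeholder app name
    # prepend the DATABASE_URL assignment to the container's start command
    command: DATABASE_URL=$CUSTOM_DATABASE_URL && bundle exec rails server -p $PORT
    env:
      CUSTOM_DATABASE_URL: postgis://<username>:<password>@<host>:<port>/<db_name>
```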

Ambari doesn't start after PostgreSQL upgrade

We have a four-node Hadoop cluster with HDP 2.4 and Kerberos installed on it. As this is our production cluster, we wanted to have HA for all the services, including the PostgreSQL database that Hive, Ambari, and Oozie use for storing their metadata. However, our PostgreSQL version, 8.4.2, doesn't support Postgres's built-in streaming replication feature.
So we decided to upgrade PostgreSQL to a version (9.3) that Ambari supports.
I followed this link to upgrade Postgres. Everything went well, except that we get the following error when restarting the Ambari server:
Ambari Server running with administrator privileges.
Running initdb: This may take upto a minute.
Data directory is not empty!
[FAILED]
Could someone help?
Thanks.
Your server wants to initialize the database. I guess your server does not see the Ambari DB. Use ambari-server setup to restore the database connection. Then the server should start properly.
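A minimal sketch of that repair sequence, run on the Ambari host (ambari-server setup walks through the database settings interactively):

```shell
# stop the server, re-run setup to point it at the existing Ambari database,
# then start it again
ambari-server stop
ambari-server setup    # choose the existing PostgreSQL database when prompted
ambari-server start
```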
I found the fix for the issue here.

Connect Symfony project to docker database

I'm currently working on a project that works with bag files.
For this I'm using a tool called bag-database (https://github.com/swri-robotics/bag-database), where the database setup runs as two Docker containers.
I followed the instructions on the site really closely and got it up and running on port 8080. The other container is running on port 5432.
So now I'm having trouble connecting the Symfony project to that database.
I used port 5432 in the config file and then ran php app/console doctrine:database:create, and it created a new Postgres database, but it was empty.
So my question is: How can I get all the tables and columns from the bag database to be able to map them properly in the project? Or is it not possible to use the tool in that way?
Any help is really appreciated!
When you ran doctrine:database:create, it only created the database, as that's what it's supposed to do.
What you need is a schema.
As you have an existing database, you will need to reverse engineer the schema from your existing database tables.
Fortunately, Symfony already thought of that and has a set of commands you can use to do exactly that.
Make sure you check the resulting classes carefully, though. I've not used it in a while, but when I have, it was possible for it to make mistakes.
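In a Symfony 2 project (the question uses app/console), the reverse-engineering commands look like this; the bundle name AppBundle is a placeholder for your own bundle:

```shell
# introspect the existing tables and write mapping information for the bundle
php app/console doctrine:mapping:import AppBundle annotation

# generate entity classes (with getters/setters) from that mapping
php app/console doctrine:generate:entities AppBundle
```

Review the generated entities afterwards, as the answer above suggests, especially for column types the importer guesses wrong.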

Changing the default version of PostgreSQL

I have a very frustrating issue...
I have two versions of PostgreSQL installed, 8.4 and 9.1. I have a dump of a database made on another machine with postgresql-9.1, and I want to restore it.
Of course, using postgresql-8.4 to do this, I get:
unsupported version (1.12) in file header
I know there are a lot of questions about the same issue, but everywhere the answer is "change the port in the conf" or something like that.
But that doesn't help me. I have stopped the postgresql-8.4 server entirely, but I still can't restore the database. And when I log in as the postgres user on Ubuntu and type psql --version, the output is:
psql (PostgreSQL) 8.4.22
So, my question is: how do I change this behavior and force the postgres user to use the postgresql-9.1 tools?
And please don't tell me about the port. When I type service postgresql status I get:
Running clusters: 9.1/main
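On Debian/Ubuntu, psql and friends are wrappers from postgresql-common that pick a default cluster, which is why psql still reports 8.4. A sketch of two ways around that (the database name mydb, the dump path, and the port are placeholders; note the 9.1 cluster may listen on 5433 if it was installed alongside 8.4):

```shell
# option 1: tell the Debian/Ubuntu wrapper which cluster to use
pg_restore --cluster 9.1/main -d mydb /path/to/dump.file

# option 2: call the 9.1 binaries directly, bypassing the wrapper
/usr/lib/postgresql/9.1/bin/psql --version
/usr/lib/postgresql/9.1/bin/pg_restore -p 5433 -d mydb /path/to/dump.file
```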