I am trying to run the latest Artifactory 7 OSS with docker-compose, and I do not want to use the embedded Derby database, but PostgreSQL.
According to the documentation, I need to define these environment variables:
environment:
  - DB_TYPE=postgresql
  - DB_USER=user
  - DB_PASSWORD=mypass
  - DB_URL=jdbc:postgresql://postgresql-docker:5432/database
That seems correct, but when launching the server, I get:
java.lang.RuntimeException: Driver org.apache.derby.jdbc.EmbeddedDriver claims to not accept jdbcUrl, jdbc:postgresql://postgresql-docker:5432/database
It seems that it is still using the Derby driver and not the PostgreSQL one (which I expected to be selected by the DB_TYPE parameter).
How can I force it to use PostgreSQL? There is no DB_TYPE variable or anything similar in the documentation. Which are the correct parameters for Artifactory 7 OSS?
OK, it seems that I was following deprecated documentation. Looking into the logs, the variables for the Docker container must now be:
- JF_SHARED_DATABASE_TYPE=postgresql
- JF_SHARED_DATABASE_USERNAME=user
- JF_SHARED_DATABASE_PASSWORD=mypass
- JF_SHARED_DATABASE_URL=jdbc:postgresql://postgresql-docker:5432/database
And after searching a bit more, I found that an extra variable is needed:
JF_SHARED_DATABASE_DRIVER=org.postgresql.Driver
That is the one that solves the problem.
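For reference, this is roughly how the environment section can look in docker-compose with these variables (a minimal sketch only; the image reference and the depends_on service are assumptions, and the host name postgresql-docker and the database name are the placeholders from the question):

artifactory:
  # Image reference is an assumption; check the current JFrog registry and tag
  image: releases-docker.jfrog.io/jfrog/artifactory-oss:latest
  environment:
    - JF_SHARED_DATABASE_TYPE=postgresql
    - JF_SHARED_DATABASE_USERNAME=user
    - JF_SHARED_DATABASE_PASSWORD=mypass
    - JF_SHARED_DATABASE_URL=jdbc:postgresql://postgresql-docker:5432/database
    - JF_SHARED_DATABASE_DRIVER=org.postgresql.Driver
  depends_on:
    # Assumed name of the PostgreSQL service in the same compose file
    - postgresql-docker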
I'm trying to follow the diesel.rs tutorial using PostgreSQL. When I get to the Diesel setup step, I get an "authentication method 10 not supported" error. How do I resolve it?
You have to upgrade the PostgreSQL client software (in this case, the libpq used by the Rust driver) to a later version that supports the scram-sha-256 authentication method introduced in PostgreSQL v10.
Downgrading password_encryption in PostgreSQL to md5, changing all the passwords and using the md5 authentication method is a possible, but bad, alternative. It is more effort, you get worse security, and you are still stuck with old, buggy client software.
This isn't a Rust-specific question; the issue applies to any application whose client library doesn't support the scram-sha-256 authentication method used by the Postgres server. In my case it was a Perl application connecting to Postgres.
These steps are based on a post.
You need to have the latest Postgres client installed.
The client bin directory (SRC) is "C:\Program Files\PostgreSQL\13\bin" in this example. The target (TRG) directory is where my application binary is installed: "C:\Strawberry\c\bin". My application failed during an attempt to connect to the Postgres DB with the error "... authentication method 10 not supported ...".
set SRC=C:\Program Files\PostgreSQL\13\bin
set TRG=C:\Strawberry\c\bin
rem Inspect the source DLL and the target DLL (the target will be replaced from SRC)
dir "%SRC%\libpq.dll"
dir "%TRG%\libpq__.dll"
rem Copy the new libpq.dll into the target directory
copy "%SRC%\libpq.dll" "%TRG%"
cd /d "%TRG%"
rem Regenerate the import library for the new DLL
pexports libpq.dll > libpq.def
dlltool --dllname libpq.dll --def libpq.def --output-lib ..\lib\libpq.a
rem Back up the original DLL, then give the new DLL the original name
move "%TRG%\libpq__.dll" "%TRG%\libpq__.dll_BUP"
move "%TRG%\libpq.dll" "%TRG%\libpq__.dll"
At this point I was able to successfully connect to Postgres from my Perl script.
The initial post also suggested copying these other DLLs from the source to the target:
libiconv-2.dll
libcrypto-1_1-x64.dll
libssl-1_1-x64.dll
libintl-8.dll
However, I was able to resolve my issue without copying these libraries.
Downgrading to PostgreSQL 12 helped in my case.
First of all, I'm really sorry for my English. I started building a FusionAuth application on my Windows PC a few days ago. For this project I used MariaDB. Now I have bought a vServer, and my plan is to run FusionAuth with the help of Docker.
After installing everything and following this tutorial: https://fusionauth.io/docs/v1/tech/installation-guide/docker
I had to change the .env file, but there you can only set a username and password for POSTGRES...
I don't really know what to do, because I expected MariaDB to work with FusionAuth.
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
I would be grateful for any help!
MariaDB is no longer fully compatible with MySQL. Therefore, FusionAuth does not officially support MariaDB, because we use modern MySQL functions and SQL. However, if you manage to get MariaDB working, post your solution to our forums (https://fusionauth.io/community/forum/) to let the community know.
We recommend using PostgreSQL for FusionAuth if possible, but MySQL also works. If you are going to use MySQL, you'll need to modify the Docker Compose file to use the MySQL Docker container instead of PostgreSQL.
The MySQL Docker container is documented here: https://hub.docker.com/_/mysql
Once you have MySQL running, you'll configure FusionAuth to connect to it using the environment variables that are documented here: https://fusionauth.io/docs/v1/tech/reference/configuration
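As a rough illustration, the database service in the Docker Compose file could be swapped out along these lines (a sketch only: the MYSQL_* variables come from the MySQL image documentation linked above, while the exact FusionAuth database settings should be taken from the configuration reference):

db:
  # Official MySQL image from Docker Hub
  image: mysql:8.0
  environment:
    # Variables documented for the official MySQL image
    - MYSQL_ROOT_PASSWORD=rootpass
    - MYSQL_DATABASE=fusionauth
    - MYSQL_USER=fusionauth
    - MYSQL_PASSWORD=fusionauth
  volumes:
    # Persist the database between container restarts
    - db_data:/var/lib/mysql

FusionAuth then needs to point at this service instead of PostgreSQL, for example with a JDBC URL along the lines of jdbc:mysql://db:3306/fusionauth, using the database environment variables from the configuration reference in place of the POSTGRES_* values in the default .env file.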
I am fairly new to Eclipse Ditto and have just started using it for my project.
I am trying to connect a cloud-hosted MongoDB instance to Ditto.
Following the documentation, I know that I need to add some variables and pass them to docker-compose. The problem is that I do not know what the values of these variables should be, as there are no examples.
Are all these variables necessary or will just the URI work?
This is my current .env file config
MONGO_DB_URI=mongodb+srv://username:pass@IP
MONGO_DB_READ_PREFERENCE=primary
MONGO_DB_WRITE_CONCERN=majority
The command I am using to start Ditto is:
docker-compose --env-file .env up
I have removed the mongodb service from docker-compose.yml.
Nice to hear that you started using Ditto in your project.
You need to set the following environment variables to connect to your cloud-hosted MongoDB.
MONGO_DB_URI: Connection string to MongoDB
For more details, see: https://docs.mongodb.com/manual/reference/connection-string/
If you have a replica set, your MongoDB URI should look like this: mongodb://[username:password@]mongodb0.example.com:27017,mongodb1.example.com:27017,mongodb2.example.com:27017/?replicaSet=myRepl
I assume you also need to enable SSL to connect to your MongoDB.
To do so, set this env var:
MONGO_DB_SSL_ENABLED: true
If you want to use a specific Ditto version, you can set the following env var:
DITTO_VERSION: e.g. 2.1.0-M3
If you use .env as the file name, you can start Ditto with:
docker-compose up
The other options for pool size, read preference and write concern aren't necessary, as there are default values in place.
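Putting it together, a minimal .env for a cloud-hosted MongoDB could look like the sketch below (host, credentials, database name and version are placeholders, and MONGO_DB_SSL_ENABLED is only needed if your provider requires TLS):

# Connection string to the cloud-hosted MongoDB (replace host and credentials)
MONGO_DB_URI=mongodb+srv://username:password@cluster0.example.mongodb.net/ditto
# Enable SSL/TLS if the hosted MongoDB requires it
MONGO_DB_SSL_ENABLED=true
# Optionally pin a specific Ditto version
DITTO_VERSION=2.1.0-M3

With that file saved as .env next to docker-compose.yml, docker-compose up picks it up automatically.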
I am attempting to get the PostgreSQL logs from an RDS instance running version 10.6, which is set up in a CloudFormation template. When I run it through our system, I'm getting the error message:
You cannot use the log types 'Postgresql' with engine version postgres 10.6. For supported log types, see the documentation.
The documentation seems pretty straightforward in what it asks for: a list of strings for the parameter, and PostgreSQL supports both the Postgresql log and the Upgrade log. I know this to be true, as I am able to export these logs through the AWS console. The documentation doesn't mention which strings are expected, so I've tried 'postgres', 'postgresql', 'postgresql_log', and so on, but nothing is catching. I'm sure I must be missing something important, but I can't find it, and the only example I have found on the internet hasn't been able to enlighten me.
RDSInstance:
  Type: AWS::RDS::DBInstance
  DependsOn: RDSMonitoringRole
  Properties:
    ****
    EnableCloudwatchLogsExports:
      - Postgresql
    MonitoringInterval: 60
    MonitoringRoleArn: !GetAtt ["RDSMonitoringRole", "Arn"]
    ****
RDSMonitoringRole:
  Type: AWS::IAM::Role
  Properties:
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AmazonRDSEnhancedMonitoringRole
    AssumeRolePolicyDocument:
      Version: '2008-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service: 'monitoring.rds.amazonaws.com'
          Action: 'sts:AssumeRole'
The docs are quite confusing, but if you read carefully you can see that under Publishing PostgreSQL Logs to CloudWatch Logs it is written that:
You can publish the following log types to CloudWatch Logs for RDS for
PostgreSQL:
Postgresql log
Upgrade log (not available for Aurora PostgreSQL)
When navigating to the AWS CLI examples:
1) Under Example: Modify an instance to publish logs to CloudWatch Logs, you can see:
The key for this object is EnableLogTypes, and its value is an array
of strings with any combination of postgresql and upgrade.
2) Further on, under Example: Create an instance to publish logs to CloudWatch Logs:
The strings can be any combination of postgresql and upgrade.
For those who use Aurora PostgreSQL - taken from here:
Be aware of the following: Aurora PostgreSQL supports publishing logs
to CloudWatch Logs for versions 9.6.12 and above and versions 10.7 and
above.
From Aurora PostgreSQL, only postgresql logs can be published.
Publishing upgrade logs isn't supported.
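In CloudFormation terms, that means the property in the question's template should use those lowercase values, roughly like this (a short sketch using only the values named above):

EnableCloudwatchLogsExports:
  - postgresql
  - upgrade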
For anyone else: I also struggled with this, and the docs weren't really helpful either.
That is, until I opened the "AWS CLI" section:
aws rds modify-db-instance \
--db-instance-identifier mydbinstance \
--cloudwatch-logs-export-configuration '{"EnableLogTypes":["postgresql", "upgrade"]}'
And those values worked for me.
I see that you mentioned you already tried those values, so maybe you ran into the version limitation?
Publishing log files to CloudWatch Logs is only supported for PostgreSQL versions 9.6.6 and above and 10.4 and above.
Pretty late but might help someone else:
According to the docs (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html#cfn-rds-dbinstance-enablecloudwatchlogsexports), the log type string has to be lowercase:
RDSInstance:
  Type: AWS::RDS::DBInstance
  DependsOn: RDSMonitoringRole
  Properties:
    ****
    EnableCloudwatchLogsExports:
      - postgresql
    MonitoringInterval: 60
    MonitoringRoleArn: !GetAtt ["RDSMonitoringRole", "Arn"]
    ****
I have a Rails app which uses both mongoid and mongo. I use mongoid for my own models, and I use mongo because I have ruote with ruote-mon storage.
In production, however, I get:
Mongo::ConnectionFailure: Failed to connect to a master node at localhost:27017
when I try to connect to the ruote storage, even when I just do Mongo::MongoClient.new.
Steps I have taken so far to try to resolve this:
I have made my mongodb an explicit master by setting master = true in /etc/mongod.conf
There are no $ENV variables set that could interfere with Mongo::MongoClient.new (double checked).
I have tried to connect using Mongo::MongoClient.new(:slave_ok => true) - same error
I have restarted my mongo database several times (w/o success).
I have checked my firewall settings and I can connect to localhost:27017 with telnet (as said, the mongoid documents can be fetched and stored w/o issue)
I am at my wits' end... Any suggestions?
The reason this happened is that we were sending queries with meta operators ($query, $orderby, etc.) for the ismaster command during a connect. This command's output is used to determine whether you are connected to a primary or not, and it would fail because very old versions of MongoDB don't support the use of meta operators.
The fix will be in version 1.8.2 of the gem, but I strongly encourage anyone who is still running pre-1.8 versions of MongoDB to upgrade. 2.0 is the current legacy release as of the time of this post, and even 1.8 is no longer widely supported.
As jmettraux mentioned, you can find more details about this on the MongoDB project Jira under RUBY-525.
Please look at: https://jira.mongodb.org/browse/RUBY-525
It should be fixed by the 1.8.2 mongo gem.