MongoDB Connection Error - mongodb

When I try to connect to MongoDB (after setting the Database, Username, Password, Server and Port values) with FDConnection, I get the error [FireDAC][Phys][Mongo]The authentication mechanism "SCRAM-SHA-1" is not supported.
If I set the UseSSL option to True and try again, this time I get the error [FireDAC][Phys][Mongo]SSL is not enabled in this build of mongo-c-driver.
On the same computer I can connect to MongoDB with MongoBooster (with basic authentication).

SCRAM-SHA-1 authentication mechanism
Upgrade to the latest mongo-c-driver (or at least version 1.1.0), because the one you are using does not support the SCRAM-SHA-1 authentication mechanism (which is what the driver exception says). Since SCRAM-SHA-1 was introduced and became the default authentication mechanism in MongoDB 3.0, you must be running an old driver against a new DBMS.
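If you want to double-check, independently of FireDAC, that the server itself accepts SCRAM-SHA-1 with your credentials (and that only the bundled driver is at fault), a quick cross-check from the same machine is possible with pymongo. This is only a sketch; the host, port, user, password and database name are placeholders:

from pymongo import MongoClient
from pymongo.errors import OperationFailure

# Placeholder connection details - replace with your own.
client = MongoClient(
    'mongodb://MY_USER:secret@localhost:27017/mydb',
    authMechanism='SCRAM-SHA-1',
    serverSelectionTimeoutMS=3000,
)
try:
    # Any command that requires authentication will do here.
    print(client['mydb'].list_collection_names())
except OperationFailure as exc:
    print('Authentication failed:', exc)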
MONGODB-CR authentication mechanism
Upgrade to the latest mongo-c-driver (or at least version 1.1.0) for the reason described above. Since MONGODB-CR is no longer the default authentication mechanism, you need to specify it explicitly by setting authMechanism to MONGODB-CR.
So if you want basic authentication, you need to specify this in the MongoAdvanced parameter:
...
FDConnection1.Params.Add('MongoAdvanced=authMechanism=MONGODB-CR');
FDConnection1.Connected := True;
Evidence about outdated driver
The following links point to the corresponding source code lines where you can find evidence that your driver is outdated:
version 1.0.2 - the message The authentication mechanism "SCRAM-SHA-1" is not supported is returned because the SCRAM-SHA-1 mechanism is missing.
version 1.1.0 - no such message is shown, because the SCRAM-SHA-1 mechanism exists.
version 1.6.3 - in the current version, the message reads Unknown authentication mechanism.
So you can only get the message The authentication mechanism "SCRAM-SHA-1" is not supported with a driver version older than 1.1.0. Build the driver from the latest stable source and set the library path in the VendorLib property of your physical driver link component.
Something like this (don't be confused by the version in the library name; the authors keep it outdated, but that may change in the future):
FDPhysMongoDriverLink1.VendorLib := 'C:\PathToDriver\libmongoc-1.0.dll';

Subject does not have subject-level compatibility configured

We use Kafka, Kafka Connect and Schema Registry in our stack. The version is 2.8.1 (Confluent 6.2.1).
We use Kafka Connect's converter configs (key.converter and value.converter) with the value io.confluent.connect.avro.AvroConverter.
It registers a new schema for topics automatically. But there is an issue: AvroConverter does not specify subject-level compatibility for a new schema,
and an error appears when we try to get the config for the schema via the REST API /config: Subject 'schema-value' does not have subject-level compatibility configured.
If we specify the request parameter defaultToGlobal, the global compatibility is returned. But that does not work for us, because we cannot specify it in the request. We are using a 3rd-party UI: AKHQ.
How can I specify subject-level compatibility when registering a new schema via AvroConverter?
Last I checked, the only properties that can be provided to any of the Avro serializer configs that affect the Registry HTTP client are the URL, whether to auto-register, and whether to use the latest schema version.
There is no property (or even method call) that sets either the subject-level or the global config during schema registration.
You're welcome to check out the source code to verify this.
But it doesn't work for us because we cannot specify it in the request. We are using 3rd party UI: AKHQ
This doesn't sound like a Connect problem. Create a PR for the AKHQ project to fix the request.
As of 2021-10-26, I used the akhq 0.18.0 jar with confluent-6.2.0, and the schema registry in AKHQ works fine.
Note: I also tried confluent-6.2.1 and saw exactly the same error, so you may want to switch back to 6.2.0 and give that a try.
P.S.: I'm using all of this only for my local dev environment (VirtualBox, Ubuntu).
@OneCricketeer is correct.
Unfortunately, there is no way to specify subject-level compatibility in AvroConverter.
I see only two solutions:
Override AvroConverter to add a property and the functionality to send an additional request to the /config/{subject} API after registering the schema (a sketch of that extra request is shown at the end of this answer).
Contribute to AKHQ to support the defaultToGlobal parameter. But in this case, we also need to backport the schema-registry RestClient. GitHub issue
The second solution is preferable only until the converter lets the user specify the compatibility level in its settings. Without such a setting in the native AvroConverter, we would have to use a custom converter for every client that writes a schema, which takes a lot of effort.
To me, it seems strange that a client cannot set the compatibility at the moment of registering the schema and instead has to use a separate request for it.
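For reference, the first option essentially boils down to one extra HTTP call against the Schema Registry after the schema has been registered: PUT /config/{subject}. A minimal sketch (the registry URL, subject name and compatibility level are placeholders):

import requests

SCHEMA_REGISTRY = 'http://localhost:8081'  # placeholder
subject = 'schema-value'                   # placeholder

# PUT /config/{subject} sets the subject-level compatibility,
# overriding the global default for this subject only.
resp = requests.put(
    f'{SCHEMA_REGISTRY}/config/{subject}',
    json={'compatibility': 'BACKWARD'},
    headers={'Content-Type': 'application/vnd.schemaregistry.v1+json'},
)
print(resp.status_code, resp.json())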

Firebird connection string not working post Firebird 3 migration

I have a regression with a TCP/IP connection string after a Firebird 3 migration from v2.5. The FirebirdClient version is 4.6.1, but I've also tested with the latest stable version (7.10.1) and it doesn't work either.
The error message is "Your user name and password are not defined. Ask your database administrator to set up a Firebird login".
The stacktrace:
at FirebirdSql.Data.FirebirdClient.FbConnectionInternal.Connect()
at FirebirdSql.Data.FirebirdClient.FbConnectionPoolManager.Pool.GetConnection(FbConnection owner)
at FirebirdSql.Data.FirebirdClient.FbConnectionPoolManager.Get(ConnectionString connectionString, FbConnection owner)
at FirebirdSql.Data.FirebirdClient.FbConnection.Open()
The user was created via the IBExpert UI.
Here's how the connection string looks (not real life connection data obviously):
#"Database=inet://10.000.0.000:3050/C:\Database.FDB;User=MY_USER;Password=secret";
The same user works when using a standard same-network connection string such as the one below:
#dialect=3;initial catalog=C:\Database.FDB;data source=localhost;user id=MY_USER;password=secret;character set=ISO8859_1;pooling=True;connection lifetime=30;server type=Default;port number=3050
My firebird.conf is set like so:
ServerMode = Super
DefaultDbCachePages = 100K
FileSystemCacheThreshold = 100M
TempBlockSize = 2M
TempCacheLimit = 4000M
AuthServer = Legacy_Auth, Srp, Win_Sspi
AuthClient = Legacy_Auth, Srp, Win_Sspi
UserManager = Legacy_UserManager, Srp
WireCrypt = Enabled
RemoteServicePort = 3050
LockMemSize = 30M
LockHashSlots = 30011
RemoteAccess = true
I'm not sure what I'm missing here. The connection string above works with SYSDBA, and according to the Firebird documentation I've read it looks fine. I've read all the other Stack Overflow posts with the same issue but don't see any answers that work for me. Any ideas?
Recent versions of FirebirdSql.Data.FirebirdClient support the version 13-15 wire protocol of Firebird 3, and then only support Srp authentication. Your old version supported only up to the v12 protocol (Firebird 2.5) and then would use the legacy authentication. If you created the user using the Legacy_UserManager (the default in your configuration), then you cannot authenticate with version 7.10.1 (where you could with 4.6.1), because as far as the Srp authentication plugin is concerned, the user does not exist.
It looks like you created the user either using gsec, which always applies the default user manager (FYI, gsec is deprecated since Firebird 3), or you used CREATE USER without USING PLUGIN Srp (or with USING PLUGIN Legacy_UserManager). You can verify this by checking the output of select sec$user_name, sec$plugin from sec$users. The solution would be to drop the user and then create it again with the right user manager (USING PLUGIN Srp).
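If you prefer to script the check and the fix, here is a hedged sketch using the firebird-driver Python package (the database path, user names and passwords are placeholders; the same statements can of course be run from isql or IBExpert instead):

from firebird.driver import connect

# Connect as SYSDBA (placeholder path and password).
with connect(r'localhost:C:\Database.FDB', user='SYSDBA', password='masterkey') as con:
    cur = con.cursor()

    # Which plugin was each user created for?
    cur.execute('select sec$user_name, sec$plugin from sec$users')
    for name, plugin in cur:
        print(name, plugin)

    # If MY_USER only exists for Legacy_UserManager, recreate it for Srp.
    cur.execute('drop user MY_USER using plugin Legacy_UserManager')
    cur.execute("create user MY_USER password 'secret' using plugin Srp")
    con.commit()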
Note that in theory you could have the user both for Srp and Legacy_UserManager (e.g. if the same user needs to be used by an application that cannot authenticate with Srp), but it is far more secure to have the user only exist for one plugin.
On a related note, the configuration you have applied is insecure. It is far more secure to leave Legacy_Auth out of the AuthServer setting or - if you still have applications that cannot use Srp - to put it last (for both AuthServer and AuthClient). Similarly, it is recommended to put Legacy_UserManager last in UserManager (or leave it out entirely), so that by default - if you use gsec, or don't include USING PLUGIN xxx in CREATE USER - it will create more secure Srp-type users.

Authentication DB setting not working using mongoDB URI configuration

I am trying to connect pyeve with a MongoDB Atlas replica set (https://cloud.mongodb.com/). I've successfully connected DB management tools from the same host, to make sure the deployment is working OK.
One particularity is that with Atlas all users must authenticate against the auth database; I cannot put my users in the application database, so I need to set authSource in MONGO_URI.
Now, when defining the MONGO_URI for the replica set in settings.py, like this:
MONGO_URI = mongodb://<USER>:<PASS>@my-shard-00-00-tlati.mongodb.net:27017,my-shard-00-01-tlati.mongodb.net:27017,my-shard-00-02-tlati.mongodb.net:27017/<MY_DB>?ssl=true&replicaSet=my-shard-0&authSource=admin
the authSource=admin parameter seems to be ignored (I've checked by debugging pymongo's auth, and the authentication source used is None).
MONGO_AUTH_SOURCE could be used to set the authorization database, but it has no effect, since MONGO_URI takes precedence over the other configuration variables according to Eve's documentation.
Is this an issue or am I doing it wrong?
I found out that the problem was that I was using version 0.4.1 of flask-pymongo. Updating it to version 0.5.1 fixed the problem.
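For anyone debugging the same symptom, a quick way to confirm that authSource is actually being extracted from the URI (independently of Eve and flask-pymongo) is to parse it with pymongo directly. A minimal sketch with placeholder hosts and credentials:

from pymongo import uri_parser

uri = ('mongodb://MY_USER:secret@my-shard-00-00.mongodb.net:27017,'
       'my-shard-00-01.mongodb.net:27017/mydb'
       '?ssl=true&replicaSet=my-shard-0&authSource=admin')

# The parsed options should contain the authSource entry; if it is there
# but the authentication source still ends up as None at runtime, the
# problem lies in the layer that builds the MongoClient (here, the
# outdated flask-pymongo).
print(uri_parser.parse_uri(uri)['options'])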

Couchbase 4.1 - N1QL by endpoint

I use ReactiveCouchbase (a Scala port of the Java SDK - https://github.com/ReactiveCouchbase/ReactiveCouchbase-core).
For queries it uses an HTTP endpoint (http://mycouchbaseadress:8093/query?q=N1QL command), but the server's response is "Unrecognized parameter in request: q".
I found on Stack Overflow that I should start cbq-engine, so I tried to launch 'cbq-engine -couchbase http://mycouchbaseadress:8093/' but got the error "flag provided but not defined: -couchbase".
My Couchbase version is 4.1 Community.
Do you know how I can send my N1QL query to the server via this endpoint?
It seems like there is a bug in ReactiveCouchbase, or at least its N1QL support was developed against an outdated beta version of the feature.
With Couchbase Server 4.0 GA and above, you don't need to run cbq-engine (this was the process used during N1QL's beta).
The problem is that in the code, the q= parameter is used where it should now be statement= (or a JSON body).
There is an open pull request that happens to fix that issue among other things, but it has been open for a long time.
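Until that is fixed upstream, you can also hit the query service directly with the statement= parameter instead of q=. A minimal sketch using Python's requests (the host and the query itself are placeholders; in Couchbase 4.x the query service listens on port 8093 and the documented path is /query/service):

import requests

resp = requests.post(
    'http://mycouchbaseadress:8093/query/service',
    data={'statement': 'SELECT name FROM `beer-sample` LIMIT 5'},
)
print(resp.json().get('results'))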

The authentication mechanism "SCRAM-SHA-1" is not supported

When I use mongoc_client_new for user authentication, I get the error
The authentication mechanism "SCRAM-SHA-1" is not supported.
What is the problem?
Your mongoc driver is trying to connect to a MongoDB 3.x server, but it was not built with OpenSSL support (MONGOC_ENABLE_SSL).
See the _mongoc_cluster_auth_node() function in mongoc-cluster.c.
First: which language are you using? If it's C++, the error may be caused by the lack of driver initialization. A simple
using mongo::client::initialize;
mongo::Status status = initialize();
solved it for me.