MongoDB user lost

We have a Mongo Database for testing purposes on a cloud server.
Recently, this server almost ran out of space (97% disk used), which resulted in Mongo writes failing. I decided to resize the server to get more free space.
An important detail is that I set the auth parameter in the config to true, so every client had to authenticate before using the DB. I thought this was normal. I created a user with the following command, which worked:
db.addUser( { user: "username",pwd: "password",roles: [ "userAdminAnyDatabase" ] } );
Now, what happened is that after the resize, when Mongo restarted, I could not get any reads/writes to the database unless I set auth = false in the config. I couldn't even add a user from localhost.
The other interesting thing was that I switched off auth and recreated the same user - it succeeded, which means the user got lost!
OK, so I have lost the user after a restart. That's bad. What's worse is that I still can't get this user to authenticate from the remote clients.
I have no idea why this is happening or what went wrong.
The data that was originally written still exists: count() returns 111090914, which is about what is expected. I can also do find(), so that data is OK.
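One hedged aside that may be worth noting: in this generation of MongoDB, users are stored per database, and a user with the userAdminAnyDatabase role has to be created in (and authenticated against) the admin database. A minimal shell sketch of that, with placeholder credentials:
use admin
db.addUser({ user: "username", pwd: "password", roles: ["userAdminAnyDatabase"] });
db.system.users.find()   // the user document should show up here and survive a restart
db.auth("username", "password")   // clients authenticate against admin, not the data database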

Related

PERMISSION_DENIED: Missing or insufficient permissions... and I change nothing

I know this question has been asked a few times, but I couldn't find any working answer.
I'm using Google App Engine and Firestore (Database location: eur3 - europe-west).
A few days ago, my simple app was working like a charm; more precisely, Firestore was working.
I could test/use it locally through nodemon or online (when deployed).
Since this morning, although I changed nothing, it doesn't work locally anymore.
I can't access my DB locally anymore, but I can when deployed to GAE.
Locally I get this error, which is a DB access issue:
PERMISSION_DENIED: Missing or insufficient permissions. {"code":7,"details":"Missing or insufficient permissions.","metadata":{}}
My code to access DB has always been as simple as:
// service/firestore.js
const Firestore = require("#google-cloud/firestore");
const db = new Firestore();
module.exports.db = db;
Do you have any clue why I can't write/read locally to Firestore this morning?
(BTW, I've never modified the security rules, and it has always worked.)
Thanks
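As a hedged aside: when the constructor gets no options, the client library resolves the project and credentials from the environment (for example application default credentials), which is typically what differs between a local run and GAE. A minimal sketch of passing credentials explicitly instead; the project ID and key file path are placeholders, not values from the post:
// service/firestore.js - explicit-credentials variant (placeholder values)
const Firestore = require("@google-cloud/firestore");
const db = new Firestore({
  projectId: "my-project-id",                    // placeholder
  keyFilename: "/path/to/service-account.json",  // placeholder service-account key
});
module.exports.db = db;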

Azure database works on localhost, but not when used with Azure App Service

So I've been trying to publish my first project to Azure. I've got everything set up: an App Service and a SQL database.
My initial page loads properly (it's the standard view for a .NET Core web application).
The first thing I need to do is register a new user. Whenever I try through my Azure app (myapp.azurewebsites.net) it fails, and the logs say it's DB related.
However, when I try the same thing by running the application on my machine in the production environment, again connected to the Azure SQL server, everything works perfectly. I can register users, I can create posts, I can edit them. The "Allow access to Azure services" option is turned on. This error is from the event logs. I have not included the stack trace.
Category: Microsoft.EntityFrameworkCore.Query EventId: 10100 RequestId: 800001be-0000-ba00-b63f-84710c7967bb RequestPath: /Identity/Account/Register SpanId: |1e5a93ae-43f424904f38ea9f. TraceId: 1e5a93ae-43f424904f38ea9f ParentId: ActionId: c3430236-e61c-4785-a3c3-4f60ba115b6e ActionName: /Account/Register
An exception occurred while iterating over the results of a query for context type 'MyApp.Data.ApplicationDbContext'.
Microsoft.Data.SqlClient.SqlException (0x80131904): Server name cannot be determined. It must appear as the first segment of the server's dns name (servername.database.windows.net). Some libraries do not send the server name, in which case the server name must be included as part of the user name (username@servername). In addition, if both formats are used, the server names must match.
These are the different ways I tried to add the connection string to the appsettings.json file (server name, catalog, user, and password have been replaced; they are written correctly in the appsettings file):
Server=tcp:servername.database.windows.net,1433;Initial Catalog=db;Persist Security Info=False;
User ID=user@server;Password=mypassword;
MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;
Server=tcp:servername.database.windows.net,1433;Initial Catalog=db;Persist Security Info=False;
User ID=user;Password=mypassword;
MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;
Data Source=tcp:server.database.windows.net,1433;
Initial Catalog=db;User Id=@server.database.windows.net;Password=password;
Alright, so after a day and a half I finally managed to fix it. The solution is rather simple, and it was most likely my newbie mistake that caused so much trouble.
I was following a tutorial for setting up the application and, after that, the database connection. In the tutorial, the connection string being used was the default one, found under "myApp -> Configuration -> Connection strings"; the format was:
Data Source=tcp:server.database.windows.net,1433;
Initial Catalog=db;User Id=@server.database.windows.net;Password=password;
This one was working in the guide, but not for me. So what I did was go to "sqldb -> Connection strings" and copy the one provided there. I then went back to the app configuration and added it as a new connection string, using SqlServer as the type.
This string was in the format:
Server=tcp:servername.database.windows.net,1433;Initial Catalog=db;Persist Security Info=False;
User ID=user;Password=mypassword;
MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;
After that, the app started working properly.
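For completeness, a hedged sketch of how that working string would sit in appsettings.json; the "DefaultConnection" key and the placeholder values are assumptions, not taken from the post:
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=tcp:servername.database.windows.net,1433;Initial Catalog=db;Persist Security Info=False;User ID=user;Password=mypassword;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
  }
}
A connection string added under the App Service's "Configuration -> Connection strings" is exposed to the app as an environment variable and takes precedence over a value of the same name in appsettings.json.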

Dgraph: Can't see API data in Console

I'm running a test Dgraph instance in a dgraph/standalone Docker container, using the github.com/dgraph-io/dgo/v200/protos/api API on port 9080 to write data, but I can't see the changes in the Console on port 8000. Using the API to query the previously written data works fine, so I wonder whether the API and the Console are somehow using different namespaces?
Are you committing the transaction? I have seen users complain about this before, and it turned out they had forgotten to commit the txn.
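To illustrate the point, a minimal sketch using the JavaScript client dgraph-js (the question uses the Go client dgo, but the newTxn/mutate/commit pattern is the same): a mutation only becomes visible to other clients, including the Console on port 8000, once the transaction is committed.
const dgraph = require("dgraph-js");
const grpc = require("grpc");

const stub = new dgraph.DgraphClientStub("localhost:9080", grpc.credentials.createInsecure());
const client = new dgraph.DgraphClient(stub);

async function writeData() {
  const txn = client.newTxn();
  try {
    const mu = new dgraph.Mutation();
    mu.setSetJson({ name: "Alice" });  // placeholder data
    await txn.mutate(mu);              // alternatively, mu.setCommitNow(true) commits in one step
    await txn.commit();                // without this, the write never becomes visible outside the txn
  } finally {
    await txn.discard();               // no-op if the txn was already committed
  }
}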

KeyCloak bulk update through PSQL db

I've updated all my users to email_verified = true. The PSQL database gets updated, but the admin console still shows the users as not having their emails verified. I'm making the changes through the CLI on Rancher.
The command I am using is:
UPDATE user_entity SET email_verified = true WHERE email_verified = false
The only help I was able to find on here was Bulk update of users in KeyCloak.
Is there more complexity to updating users in bulk?
Are there other ways to mass update users?
My guess is that the old data is still around in Keycloak's cache. Some options are:
Restart Keycloak
Clear the cache
Turn off caching permanently
For #2, you can clear the user or realm caches at runtime in the "Realm Settings -> Cache" section of the Keycloak admin console; a sketch of the equivalent admin REST calls follows.
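A hedged sketch of those REST calls, assuming an admin access token in $TOKEN, a realm named myrealm, and a server at localhost:8080 (all placeholders; the /auth prefix depends on the distribution):
curl -X POST -H "Authorization: Bearer $TOKEN" http://localhost:8080/auth/admin/realms/myrealm/clear-user-cache
curl -X POST -H "Authorization: Bearer $TOKEN" http://localhost:8080/auth/admin/realms/myrealm/clear-realm-cache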
For #3, you can read the below source for instructions: https://access.redhat.com/documentation/en-us/red_hat_single_sign-on/7.0/html/server_installation_and_configuration_guide/server_cache_configuration
8.3. Disabling Caching
To disable the realm or user cache, you must edit the keycloak-server.json file in your distribution. Where this file lives depends on your operating mode. Here's what the config looks like initially:
"userCache": {
"default" : {
"enabled": true
}
},
"realmCache": {
"default" : {
"enabled": true
}
},
To disable the cache, set the enabled field to false for the cache you want to disable. You must reboot your server for this change to take effect.
8.4. Clearing Caches at Runtime
To clear the realm or user cache, go to the Red Hat Single Sign-On admin console Realm Settings→Cache Config page. On this page you can clear the realm cache or the user cache. This will clear the caches for all realms and not only the selected realm.

Netlify returning 404 when connecting to Atlas MongoDB

I'm trying to make a Netlify app that posts data to an Atlas MongoDB. While I can post to the DB when I run my page from localhost, Netlify returns a 404 whenever I attempt to post data to the DB. I know it is not an issue with Atlas's whitelisted IP addresses, because I have whitelisted all IP addresses for the time being. I suspect that this has something to do with Netlify not properly reading or running the process.env values that I'm using to store my Atlas information, although I am not completely certain that is the cause. When I run it locally, I have my config set up to simply use the Atlas information directly rather than relying on a .env file. I'm using Mongoose to connect to the DB, and the connection portion of my production build is the following:
mongoose.connect(process.env.MONGODB_URI || "mongodb://localhost/dbname");
This has not been working, but on the working copy that I run from localhost, I use:
const uri = `mongodb://atlasDB:<PASSWORDHERE>@atlasDB-shard-00-00-ot2tv.mongodb.net:27017,atlasDB-shard-00-01-ot2tv.mongodb.net:27017,atlasDB-shard-00-02-ot2tv.mongodb.net:27017/test?ssl=true&replicaSet=atlasDB-shard-0&authSource=admin&retryWrites=true`;
mongoose.connect(uri);
I have configured Netlify to have a MONGODB_URI build environment variable of mongodb://atlasDB:<PASSWORDHERE>@atlasDB-shard-00-00-ot2tv.mongodb.net:27017,atlasDB-shard-00-01-ot2tv.mongodb.net:27017,atlasDB-shard-00-02-ot2tv.mongodb.net:27017/test?ssl=true&replicaSet=atlasDB-shard-0&authSource=admin&retryWrites=true
I have replaced PASSWORDHERE with the actual password in both instances. The Netlify build environment variable does not have quotation marks around the value when viewed in the entry field on the Netlify website; I tried putting them in, but it seemed to make no difference, though I may simply not have waited long enough for the change to take effect.
Aside from Mongoose, I am not running any other dependencies that should have any effect on this problem. The project deadline is in a couple of days, so any help with this would be greatly appreciated.
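As a hedged diagnostic sketch (not from the post): logging which URI the deployed code actually resolves, and surfacing connection errors, makes it easy to tell whether MONGODB_URI is reaching the running code at all.
// minimal check: which URI is actually used, and does the connection succeed?
const mongoose = require("mongoose");
const uri = process.env.MONGODB_URI || "mongodb://localhost/dbname";
console.log("connecting to:", uri.replace(/:[^@/]+@/, ":****@"));  // mask the password in logs
mongoose
  .connect(uri)
  .then(() => console.log("mongoose connected"))
  .catch((err) => console.error("mongoose connection failed:", err.message));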