Authentication Fail with MongoDB Compass Community

I've just created a new MongoDB account and I'm now trying to connect to the free cluster I created via the MongoDB Compass Community application, but I'm getting an 'Authentication Fail' error.
This is what I've checked so far:
From my MongoDB Clusters section, I clicked on the Connect (…) button, which gives you various options. From there, I selected 'Connect with MongoDB Compass' and copied the connection string.
Compass detected this as expected and automatically filled in all the relevant fields. I also filled in the password by copy/pasting it into the relevant field, so I'm 100% sure it is correct.
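For reference, the copied string looked like this (cluster host and credentials redacted with placeholders):

    mongodb+srv://<username>:<password>@<cluster-name>.mongodb.net/test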
I checked that the username used was indeed set up as an admin and it is.
I checked my Authentication database was correct and it is.
I've checked that my public IP was added to the whitelist and it is. The only thing I've noticed is that when I added my public IP address, it added a /32 at the end. Is that the port?
But I'm not quite sure what else to check to resolve this problem.
Any suggestions?
Thanks.

I eventually found out what the problem was after speaking to someone from the MongoDB support team!
Everything was done correctly except for one thing: I was being impatient after changing my cluster user's password. It can take up to 2 minutes for the change to propagate and therefore for Compass to be able to authenticate.
Once I waited a couple of minutes, I was able to log in through Compass as expected. (As for the /32 that was appended to the whitelist entry: it is not a port, it is CIDR notation meaning the rule matches exactly that one IP address.)
I still can't quite believe I wasted so much time on such a simple issue, but the main thing is that it is resolved.
I did send them some feedback, as a few things could be done better:
Highlight this delay better in their documentation, e.g. in red?
Make the "warning" message displayed on the webpage after updating the user details more obvious. It was right in front of me, yet I never spotted it appear or disappear: as soon as I'd updated the user details on the website, I'd switch immediately to Compass to try to log in. By the time I was done, well over 2 minutes would have elapsed and the message would be long gone, so it is not very useful the way it is currently done.
Instead of just saying 'Authentication Fail' (which is correct), the message could read differently when the system knows the user is being updated, e.g. 'Authentication Fail - Please try again in a few minutes as we're updating this user's details'. Something like that, anyway.
So, remember to be patient when changing your user's details in MongoDB, and if you are, then yes, you will have a database up and running in the cloud in 5 minutes or less! :)

Related

Unable to open Google Cloud DNS page

I am unable to access Google Cloud DNS page.
All it shows is:
"DNS API is being enabled. This may take a minute or more."
Then it reloads and repeats showing the same message.
The API is already enabled, and the records I created work; there is no problem with the DNS itself.
I need to modify records, but I can't because of this problem.
I tried opening the page on different computers and in different browsers without add-ons, with the same result.
If there is a better place to ask, please do tell.
Thank you.
You should be able to access the page regardless of what computer / browser you're using.
If you cannot, it's either a temporary outage (which you can check here) or a bug.
The options are to contact paid support for more immediate help, or, if you can afford the time, to report this at Google's IssueTracker and get help for free; however, that may take a few days. It is possible that only you are affected. Please describe the issue in as much detail as possible, as this will expedite the process.

PostgreSQL only clear user password

I am having a big problem that is quite difficult to search for.
I have an Ubuntu server on which I have installed:
GitLab (holds all the projects)
PostgreSQL (independent of the GitLab database; used for a personal project)
Tomcat with a web app (Spring Boot, which uses PostgreSQL)
This server is still for testing; it is used for specific things (I mean, its use and access are limited and controlled).
I am having the following problem:
Very frequently, almost every day, the password of the postgres user on the PostgreSQL server gets "erased", without anyone doing it manually; it just happens on its own. I notice it because the application stops responding; I then access PostgreSQL and find that the postgres user has no password.
I have looked in many places and I can't find anything; I really don't know where else to look. If this has happened to you, or you have any information about it, I would be grateful if you could share it with me.
------More information added----------
I was looking at the postgres logs from before the authentication stopped working, and I see the following. There are times when no one could have been using the Spring Boot server, for example:
--2020-01-17 00:30:21.286
(and also the two log entries shown just before that moment). Could something be deleting my password?
Thank you.
PostgreSQL does not randomly delete its own passwords, and I really doubt Tomcat or GitLab do either. Indeed, they shouldn't even have access to the server as the 'postgres' user or any other superuser, and so couldn't do this even if they wanted to.
It seems likely that there is an intruder in your system. After gaining access, intruders typically create their own user with their own password; disabling your normal superuser from logging in is then a common way to try to prevent you from regaining control and kicking them out. Do any users exist that you do not recognize?
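To check, you can list every role straight from the system catalog (a quick sketch; any role that can read pg_roles can run it):

    -- List all roles with their key privilege flags;
    -- look for any name you do not recognize.
    SELECT rolname, rolsuper, rolcreaterole, rolcanlogin
    FROM pg_roles
    ORDER BY rolname;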
The bit of the log file you posted clearly shows someone trying to guess your password, starting at 2:58. You aren't logging IP addresses (%h), so it doesn't show where they are coming from. It doesn't show that they succeeded; but unless you have log_connections = on, it wouldn't show successes either.
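For reference, the relevant postgresql.conf settings would look something like this (the prefix format is just an example):

    # postgresql.conf -- example logging settings
    log_connections = on                     # log every successful connection
    log_disconnections = on                  # log session ends
    log_line_prefix = '%m [%p] %u@%d %h '    # %h adds the client host/IP

After changing them, reload the configuration (for example with SELECT pg_reload_conf();) so new connections pick them up.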

G Suite Email Migration Does Not Complete, Stuck on 99%

I'm currently experiencing something rather weird: while migrating emails from a GoDaddy email server to a new G Suite setup for a number of users, I was able to successfully migrate a couple of the accounts, as confirmed by Google's 'Complete' tick beside them. I was also able to observe the migration as it went on.
However, for one of the accounts, the number of emails read just seems to keep increasing; it still hasn't displayed 'Complete' and remains stuck at '99%'.
See the screenshots I took just now: as of the first one, it says 'Successfully migrated 3230 emails' while stuck at 99%. Then I hit refresh, checked the status of that same account, and now it says '...3250 emails', still stuck at 99%.
This isn't how it's supposed to behave; at least, that isn't the behaviour I saw with the previous 4 accounts in that list. Ideally, it should say 'Migrating X out of fixed_amount emails'. In this case, that fixed_amount was about 2,000 emails. It has since passed that figure, but instead of showing 'Complete', it shows 'Successfully migrated new_amount', where new_amount keeps increasing.
This has been ongoing for almost 24 hours now. Honestly, I don't know whether this is a bug. I really just need some info on whether I should be concerned, or whether someone else has run into this. Anyone?
I stumbled onto Google's documentation: https://support.google.com/a/answer/7032598?hl=en
To quote the 'Why does my migration look like it's stuck at 99%?' section:
You'll see 99% when all email is migrated. After everything is migrated, the data migration service applies any labels to the migrated email, which can take time. When the labels are applied, you should see that the migration is complete (100%).
You might also see this issue if the estimated number of emails to migrate exceeds the actual number of messages. The migration will report 99% until the migration completes. This process might take some time.
You shouldn't be concerned. I was migrating around 29,000 emails from a personal Gmail account to Google Workspace Gmail, and the migration took 4 days (migrating only one user), of which the last 1.5 days it was "stuck" at 99%. There is no need to restart the migration; it does eventually finish. I also got several error codes (e.g. 17009, 'Generating an access token with the supplied credentials was unsuccessful...'), but none proved to be real problems, and I didn't act on them since, as in your case, I saw the number of migrated emails increasing.

Change value after period of time in Firebase

I have a lobby in which I want the users to be in sync, so when a user turns off their internet connection while the app is running, they should be removed. I know Firebase does not support server-side code, so this needs to be handled client-side. The answers to How to delete firebase data after "n" days and Delete firebase data older than 2 hours do not answer this question, since they assume the user is online and has an internet connection.
So my question is whether it is possible to remove users when they have no internet. I thought about letting each user update a value every 5 seconds, and when that update stops coming, the other users in the lobby remove that player. This approach is not good, since every player would have to download and upload a lot of data every 5 seconds. What is the best way to solve this?
Edit: to make it short, let's say each user has an image. The image should be green when the user is connected, and grey when disconnected.
Edit 2: after thinking it over, it is really hard to accurately present the connected users purely client-side. That is why, unless somebody has a different solution, I should add another server which can execute server-side code. Because of the large number of server options, I would like to know which one to use. The server should run a simple function which only checks whether users are connected or disconnected, and it must be able to communicate with Firebase. If I am correct, it should look like this:
But the server also needs to communicate with the users directly. I have absolutely no idea where to start.
If I'm not completely wrong, you should be able to use onDisconnect.
From the Firebase documentation:
How onDisconnect works:
When an onDisconnect() operation is established, it lives on the Firebase Realtime Database server. The server checks security to make sure the user can perform the write event requested, and informs the client if it is invalid. The server then monitors the connection. If at any point it times out, or is actively closed by the client, the server checks security a second time (to make sure the operation is still valid) and then invokes the event.
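For illustration, here is a minimal presence sketch using the web SDK (the /lobby path, the userId variable, and the 'green'/'grey' values are placeholders matching your edit, not a definitive implementation):

    // Assumes firebase.initializeApp(...) has already been called.
    var statusRef = firebase.database().ref('/lobby/' + userId + '/status');
    var connectedRef = firebase.database().ref('.info/connected');

    connectedRef.on('value', function (snapshot) {
      if (snapshot.val() === true) {
        // Register the server-side cleanup first, then mark this user online.
        statusRef.onDisconnect().set('grey').then(function () {
          statusRef.set('green');
        });
      }
    });

Because the onDisconnect write is stored on the server, it fires even if the client drops off abruptly, once the connection times out.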
In an app in production I'm using onDisconnectRemoveValue, and when I close the app, the user removes himself from the lobby. I'm not sure how it behaves when you put the device in airplane mode, but from the documentation it seems there should be no problem.
One thing: when you test it, better to do it on a real device; the simulator has issues with turning the connection off and on, at least the one I have installed.
Edit: So I checked onDisconnect when you put the device in airplane mode, and it works! The caveat is that it removes the user after approximately 1:30 min, so if you read the documentation or ask support, you may (and only may) be able to find a way to set the timeout you want.

Why is my mongodb collection deleted automatically?

I have MongoDB running on three EC2 instances and have created a replica set. Last time I had a problem, a space constraint stopped my mongod process, thereby halting the application. Then, in an incident a couple of days back, some of my collections were gone from the database, so I set up logging and so on for my database, just to catch it if anything like that happened again. In a fresh incident this morning, I was unable to log in to my system, and that's when I found out that the whole database was empty. I checked other SO questions like this one, which suggest setting up a TTL, which I haven't done at all.
Now, how do I debug this situation and do a proper root-cause analysis? I can't find anything in my debug logs; the collections just vanished. How do I set up a proper logging mechanism, and how do I ensure that my collections are never deleted again?
Today I got an email from Amazon saying that I was probably running an unsecured MongoDB deployment, and that this may have caused the issue. So whoever is facing this problem, please go through the Security Checklist provided by MongoDB. There are some points in there that are absolutely necessary:
1. Enable Access Control and Enforce Authentication
2. Encrypt Communication
3. Limit Network Exposure
These three are the core, and depending on how many people access your database, you can also Configure Role-Based Access Control.
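For illustration, points 1 and 3 come down to a couple of lines in mongod.conf (a minimal sketch; the bind address is just an example, and the TLS setup for point 2 needs certificates, so it is omitted here):

    # mongod.conf -- example hardening settings
    security:
      authorization: enabled   # point 1: require authentication
    net:
      bindIp: 127.0.0.1        # point 3: listen only on localhost / trusted interfaces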
These are all the things I have done. Before this incident I had not taken security that seriously, but after being hit by it, I made sure I had all the necessary precautions in place.
Hope this helps someone.