Change hour mongo [with sails] - mongodb

I have a MongoDB database hosted on MongoLab, and when I save records in the database with a specific date, the date is stored four hours ahead. How can I change this timezone? I am using Sails.
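MongoDB stores every Date value as UTC milliseconds; there is no per-database timezone setting to change, so the usual fix is to keep storing UTC and convert only when reading or displaying. A minimal sketch in Node.js (the timezone name and sample date are illustrative; America/New_York is UTC-4 in summer, matching the 4-hour difference described above):

```javascript
// MongoDB always stores Date values in UTC; convert on display instead
// of trying to change the database's timezone.
const saved = new Date('2024-06-01T16:00:00Z'); // what the driver returns (UTC)

// Render in the desired timezone only when showing it to the user:
const local = saved.toLocaleString('en-US', { timeZone: 'America/New_York' });
// local: "6/1/2024, 12:00:00 PM" (same instant, shown as local wall-clock time)
```

In a Sails app the conversion belongs in the view or API layer; sending pre-shifted dates to the database is what usually makes "extra hours" appear on round-trip.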

Related

Why is my mongo collection being wiped on azure ubuntu instance?

I'm using an Azure Ubuntu instance to store some data every minute in a MongoDB database. I noticed that the data is being wiped approximately once a day, and I'm wondering why.
I log a count of the db every minute. Here are two consecutive minutes showing that all records were deleted:
**************************************
update at utc: 2022-08-06 10:19:02.393351 local: 2022-08-06 20:19:02.393366
count after insert = 1745
**************************************
update at utc: 2022-08-06 10:20:01.643487 local: 2022-08-06 20:20:01.643544
count after insert = 1
**************************************
You can see the data is wiped as count after insert goes from 1745 to 1. My question is why is my data being wiped?
Short Answer
Data was being deleted in a ransom attack. I wasn't using a MongoDB password, since originally I was only testing Mongo locally. Then, when I set bindIp to 0.0.0.0 for remote access, anyone who guessed the host could connect (this was pretty dumb of me).
Always secure the server with a password, especially if your bindIp is 0.0.0.0. For instructions see https://www.mongodb.com/features/mongodb-authentication
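A sketch of that fix, assuming the default Ubuntu package layout (paths and the user name are illustrative). First create an administrative user in the shell (use admin, then db.createUser({ user: "admin", pwd: "<strong password>", roles: ["root"] })), then enable authorization in /etc/mongod.conf and restart the service:

```yaml
# /etc/mongod.conf (relevant sections only)
security:
  authorization: enabled   # every connection must authenticate
net:
  port: 27017
  bindIp: 0.0.0.0          # remote access is acceptable once auth is enabled
```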
More Detail
To check whether you have been ransom attacked, look for a ransom note. An extra database may appear; run show dbs. In my case the new database containing the ransom note was called "READ__ME_TO_RECOVER_YOUR_DATA", and the note read:
All your data is a backed up. You must pay 0.05 BTC to 1Kz6v4B5CawcnL8jrUvHsvzQv5Yq4fbsSv 48 hours for recover it. After 48 hours expiration we will leaked and exposed all your data. In case of refusal to pay, we will contact the General Data Protection Regulation, GDPR and notify them that you store user data in an open form and is not safe. Under the rules of the law, you face a heavy fine or arrest and your base dump will be dropped from our server! You can buy bitcoin here, does not take much time to buy https://localbitcoins.com or https://buy.moonpay.io/ After paying write to me in the mail with your DB IP: rambler+1c6l#onionmail.org and/or mariadb#mailnesia.com and you will receive a link to download your database dump.
Another way to check for suspicious activity is the MongoDB service log at /var/log/mongodb/mongod.log (on other systems the filename might be mongodb.log). In my case there is a series of commands around the attack time in the log, the first of which reads:
{"t":{"$date":"2022-08-07T09:54:37.779+00:00"},"s":"I", "c":"COMMAND", "id":20337, "ctx":"conn30393","msg":"dropDatabase - starting","attr":
{"db":"READ__ME_TO_RECOVER_YOUR_DATA"}}
This command starts dropping the database. As suspected, there are no commands that read any data, which means the attacker isn't backing it up as they claim. Unfortunately, someone actually paid this scammer earlier this month: https://www.blockchain.com/btc/tx/65d035ca4db759a73bd9cb68610e04742ffe0e0b71ecdf88f54c7e464ee80a51

Migrate Data from MongoDB to Postgres

We have had our database in Mongo for a long time and have now decided to move to Postgres. Since the two are totally different, we started with the table design and API migration first. Now it comes to the data part.
In Mongo we have the following schemas, and we want to migrate the same data to Postgres. I have gone through a couple of articles that say you can export data from Mongo as CSV and import it into Postgres using the COPY command or pgAdmin.
Mongo used UUIDs, which are basically strings, but in Postgres our id columns are integers. We also used cross-referenced foreign keys in Mongo. How can we migrate those without losing the connections between tables?
Can anyone suggest a good method?
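One common approach, sketched below with made-up collection and field names ("users", "orders", "userId"): assign each document a sequential integer id, keep a UUID-to-integer map per collection, and translate every foreign-key field through that map before generating the CSV for COPY.

```javascript
// Hypothetical sketch: remap Mongo string ids to Postgres integer ids
// while preserving cross-references between collections.

function buildIdMap(docs) {
  // old uuid -> new sequential integer, in a stable order
  const map = new Map();
  docs.forEach((doc, i) => map.set(doc._id, i + 1));
  return map;
}

function remap(docs, idMap, fkMaps) {
  // fkMaps: { fieldName: idMap of the referenced collection }
  return docs.map((doc) => {
    const row = { ...doc, id: idMap.get(doc._id) };
    delete row._id;
    for (const [field, map] of Object.entries(fkMaps)) {
      row[field] = map.get(doc[field]); // translate the old UUID reference
    }
    return row;
  });
}

// Example: orders reference users via userId
const users = [{ _id: 'uuid-a' }, { _id: 'uuid-b' }];
const orders = [{ _id: 'uuid-x', userId: 'uuid-b' }];

const userIds = buildIdMap(users);
const rows = remap(orders, buildIdMap(orders), { userId: userIds });
// rows[0] is { userId: 2, id: 1 }
```

An alternative is to keep the old UUID in a temporary column in Postgres, import the data as-is, and then resolve the foreign keys with SQL UPDATE ... FROM joins before dropping the column; that keeps the whole remapping inside the database.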

Mongo find out connection read preference

I have a Node.js app using the Mongoose (v5.11.17) driver. I previously used a standalone db, but recently, for reliability reasons, the app was migrated to a replica set (of 3). There is code that saves some object a in the db, after which any number of calls may read object a. Since migrating to the replica set, reads sometimes return not-found, which never happened with the standalone db. My guess is that the read preference is somehow wrong and the read goes to a non-primary node (or maybe the write went to a non-primary). Can I inspect the connection in the mongo shell and query its read preference?
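In the legacy mongo shell you can inspect the connection's read preference with db.getMongo().getReadPrefMode() (and change it with db.getMongo().setReadPref(...)). With driver defaults, reads go to the primary, so stale reads usually mean either a non-default read preference in the connection string or a write that had not replicated yet. As a runnable illustration of why a non-primary read can miss a fresh write, here is a toy simulation; no real MongoDB or Mongoose is involved, and all names are made up:

```javascript
// Toy model of a replica set: replication is asynchronous, so a
// secondary can lag behind the primary and miss a recent write.
class ToyReplicaSet {
  constructor() {
    this.primary = new Map();
    this.secondary = new Map(); // lags until replicate() runs
  }
  write(id, doc) { this.primary.set(id, doc); }
  replicate() { this.secondary = new Map(this.primary); }
  read(id, readPreference = 'primary') {
    const node = readPreference === 'primary' ? this.primary : this.secondary;
    return node.get(id) ?? null;
  }
}

const rs = new ToyReplicaSet();
rs.write('a', { value: 42 });

rs.read('a', 'primary');            // found: the primary always has the write
rs.read('a', 'secondaryPreferred'); // null: the secondary hasn't caught up yet
rs.replicate();
rs.read('a', 'secondaryPreferred'); // found once replication has run
```

With Mongoose you can also force the preference per query with query.read('primary'), or set readPreference=primary and w=majority in the connection string so writes wait for a majority before the save resolves.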

MongoDB data change delay in Meteor

I would like to ask why my external MongoDB instance is slower than the one launched by Meteor.js.
I set the MONGO_URL environment variable to connect to my local database, so the connection should be as fast as with the database created by Meteor.js. However, when I tested publications with the external database, I saw one or two seconds of latency, whereas when Meteor.js runs the database everything works properly (I see new data from the database without delay).
Thanks for any help!
Cheers
Meteor has two ways to access changes in MongoDB:
Pull: Meteor checks for updates at regular intervals. You may notice a few seconds delay.
Push, also known as "oplog tailing": MongoDB sends data changes right when they are performed. Meteor registers it instantaneously.
You'll need to set the MONGO_OPLOG_URL environment variable to enable oplog tailing and get instantaneous updates. When Meteor starts a local Mongo instance, it also sets up oplog tailing automatically.
Here's a detailed article about it: https://meteorhacks.com/mongodb-oplog-and-meteor/
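Note that oplog tailing requires the external mongod to run as a replica set (a single-member set is enough) so that the oplog exists in its local database. A typical setup then looks like this; host, port, and database name are placeholders for your own:

```shell
# MONGO_URL points at the application database:
export MONGO_URL='mongodb://localhost:27017/myapp'
# MONGO_OPLOG_URL must point at the replica set's special "local" database:
export MONGO_OPLOG_URL='mongodb://localhost:27017/local'
meteor
```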

Meteor app as front end to externally updated mongo database

I'm trying to set up an app that will act as a front end to an externally updated mongo database. The data will be pushed into the database by another process.
So far I have the app connecting to the external Mongo instance and pulling data out with no issues, but it's not reactive (it doesn't see any of the new data going into the Mongo database).
I've done some digging and so far can only find that I might need to set up a replica set and use the oplog. Is there a way to do this without going to a replica set (or is that the best way anyway)?
The code so far is really simple, a single collection, a single publication (pulling out the last 10 records from the database) and a single template just displaying that data.
No deps that I've written (not sure if that's what I'm missing).
Thanks.
Any reason not to use the oplog? From what I've read it is the recommended approach even if your DB isn't updated by an external process, and a must if it is.
Nevertheless, even without the oplog your app should see the changes the external process makes to the DB. It will take longer (up to 10 seconds), but it should update.