I have MongoDB replica set (3 servers) running on AWS. Every 6 hours I take a snapshot of the volume.
I need to get off Amazon Linux 1 and move to Ubuntu due to some package changes. I have shut down the old CloudFormation stack and created a new CloudFormation stack using Ubuntu, so everything starts out blank.
So now I take my snapshot, create a volume from it, detach the volume currently on the EC2 instance, attach the new volume containing the old data, run sudo chown -R mongodb:mongodb /data and sudo chmod -R go+w /data, and then restart MongoDB with sudo systemctl restart mongod
This does start, but in the mongo log, I get this issue:
"msg":"Heartbeat failed after max retries","attr":{"target":"db2.ospreynrglive.com:27017","maxHeartbeatRetries":2,"error":{"code":93,"codeName":"InvalidReplicaSetConfig","errmsg":"replica set IDs do not match, ours: 604d72257a07ddd82163de70; remote node's: 604e566ee4290c91c30c91c3ff6504"}}}
How can I fix this? I tried to edit local.system.replset and that doesn't work. I tried to specify the replicaSetId in rs.initiate(), which failed, and rs.reconfig() won't let me change it either.
I know that I can do a mongodump and mongorestore, but this is extremely slow with lots of data. Can I not use the data/volume/backup/snapshot from AWS? What would happen if all 3 servers went down at the same time on AWS? I would be in the same situation!
Anyone looking to clear an existing replica set needs to do the following steps:
Start mongod without the --replSet flag
Run this in the mongo shell:
use admin
db.grantRolesToUser("admin", [{ role: "__system", db: "admin" }])
use local
db.dropDatabase()
use admin
db.revokeRolesFromUser("admin", [{ role: "__system", db: "admin" }])
Then restart mongod with the --replSet flag and it will be as if no replica set was ever initiated.
From here, you initiate it on the primary and then do the same with the other nodes and add each one.
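The steps above can be condensed into a short sketch, assuming a mongod on the default port and an admin user named admin with auth enabled (adjust user names and connection details to your setup):

```shell
# 1. Restart mongod WITHOUT --replSet (edit mongod.conf or drop the flag), then
#    temporarily grant the internal __system role so local can be dropped:
mongo admin --eval '
  db.grantRolesToUser("admin", [{ role: "__system", db: "admin" }]);
  db.getSiblingDB("local").dropDatabase();
  db.revokeRolesFromUser("admin", [{ role: "__system", db: "admin" }]);
'

# 2. Restart mongod WITH --replSet, then re-initiate on the intended primary:
mongo --eval 'rs.initiate()'
# repeat the local-db wipe on each secondary, then rs.add("host2:27017"), etc.
```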
When building a Docker image, I dump an external db into a directory inside my container using mongodump, then I start mongod and execute a restore (mongorestore). Everything seems to work fine judging by the log. My goal is to deliver a Docker image containing data in the database, so that it can be launched with prepopulated data. This way my end user can "play" with it along with a web app. Note that I am wondering if the web app could update this data. Losing the data after the container is stopped is not a problem in my use case.
Here is the complete Dockerfile:
FROM mongo
RUN mongodump --verbose --uri="mongodb://....." --out=./izidump2
RUN mongod --fork --dbpath /data/db --logpath /var/log/mongodb.log; \
mongorestore --uri="mongodb://localhost:27017/mydb" --verbose ./izidump2; \
mongo --eval "db.createUser({ user: 'root', pwd: 'root', roles: ['readWrite', 'dbAdmin'] })" mydb;
Unfortunately, when running the container, the db does not exist.
When connecting to an instance of the running container and going to /data/db, the directory is empty. Very strange, since I restored the data into this directory and did not get any complaint while running mongorestore.
What did I miss ?
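For context (not stated in the thread, but a documented property of the official image): the mongo base image declares VOLUME /data/db, so anything written there during a RUN step is discarded when a container starts. A hedged workaround sketch is to restore into a non-volume path and point mongod at it (the path /data/db2 is an arbitrary choice):

```dockerfile
# Sketch only -- /data/db2 is chosen because it is NOT declared as a VOLUME
FROM mongo
RUN mongodump --verbose --uri="mongodb://....." --out=./izidump2
RUN mkdir -p /data/db2 \
 && mongod --fork --dbpath /data/db2 --logpath /var/log/mongodb.log \
 && mongorestore --uri="mongodb://localhost:27017/mydb" --verbose ./izidump2 \
 && mongod --dbpath /data/db2 --shutdown
CMD ["mongod", "--dbpath", "/data/db2"]
```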
I created the replica set outside of the /data/db folder by mistake, so I would like to set the dbpath to the non-default folder where I created the replica set, instead of starting again.
I tried this (inside the mongodb-instance folder are 3 replica folders):
"D:\MongoDB\Server\3.4\bin\mongod.exe" --dbpath D:\mongodb-instance
then I try to run it:
mongod --replSet "rs0"
but I still get the same problem:
Data directory C:\data\db\ not found.
For replication, you need to set up multiple mongod instances, each with its own data folder.
Then start each instance with a command like
mongod --port <portNum> --dbpath <path> --replSet "nameOfReplicaSet"
(Note that mongod does not take a --host flag; use --bind_ip if you need to control the listening address. The Data directory C:\data\db\ not found error means your second command fell back to the default dbpath, so pass --dbpath along with --replSet.)
Once all the instances are running, you'll have to initiate the replication by connecting to any one of them and executing the rs.initiate(replConfig) command with whatever configuration you need.
You can refer to the docs here.
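A minimal local sketch of the steps above, assuming three data folders and ports 27017-27019 on one machine (hostnames, paths, and the set name rs0 are placeholders):

```shell
# start three mongod instances, one per data folder
mongod --port 27017 --dbpath /data/rs0-0 --replSet rs0 --fork --logpath /data/rs0-0.log
mongod --port 27018 --dbpath /data/rs0-1 --replSet rs0 --fork --logpath /data/rs0-1.log
mongod --port 27019 --dbpath /data/rs0-2 --replSet rs0 --fork --logpath /data/rs0-2.log

# initiate the set from any one member
mongo --port 27017 --eval 'rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "localhost:27017" },
    { _id: 1, host: "localhost:27018" },
    { _id: 2, host: "localhost:27019" }
  ]
})'
```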
I'm looking for a little direction on how to set up services on AWS. I have an application that is built with Node.js and uses mongodb (and mongoose as the ODM). I'm porting everything over to AWS and would like to set up an autoscaling group behind a load balancer. What I am not really understanding, however, is where my mongodb instance should live. I know that with DynamoDB it can be fairly intuitive to set things up, but since I am not using it, my question is this: where and how should mongo be set up to work with my app? Should it be on the same EC2 instance as my app, and if so, how does that work with new instances starting and being terminated? Should I set up an instance dedicated only to mongo? In addition to that question, how do I create snapshots and backups of my data?
This is a good document for installing MongoDB on EC2, and managing backups: https://docs.mongodb.org/ecosystem/platforms/amazon-ec2/
If you aren't comfortable doing all this yourself, you might also want to look into MongoLab, a MongoDB-as-a-Service offering that can run on AWS.
Your database should definitely be on a separate instance from your app, from all aspects.
A very basic tiered application should comprise the app-server cluster in a scaling group behind a load balancer, in a public subnet, and a separate database cluster (recommended in a different subnet which is not publicly accessible) that your app cluster will talk to. Whether to use an ELB for Mongo or not actually depends on your mongo config (replica set).
In regards to snapshots (assume this will only be relevant for your DB), have a look at this.
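As a hedged sketch, snapshots of the EBS volume backing the Mongo data directory can be scripted with the AWS CLI (the volume ID below is a placeholder; for a consistent snapshot you may want db.fsyncLock() first, or journaled storage on a single volume):

```shell
# create a point-in-time snapshot of the data volume
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "mongodb data backup $(date +%F)"

# list snapshots taken from that volume
aws ec2 describe-snapshots --filters Name=volume-id,Values=vol-0123456789abcdef0
```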
You can easily install MongoDB in AWS Cloud9 by using the process below.
First create a Cloud9 environment in AWS; at the terminal you'll see a prompt like ubuntu:~/environment $.
Enter touch mongodb-org-3.6.repo into the terminal
Now open the mongodb-org-3.6.repo file in your code editor (select it from the left-hand file menu) and paste the following into it then save the file:
[mongodb-org-3.6]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/3.6/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc
Now run the following in your terminal:
sudo mv mongodb-org-3.6.repo /etc/yum.repos.d
sudo yum install -y mongodb-org
If the second command does not work, try:
sudo apt install mongodb-clients
Close the mongodb-org-3.6.repo file and press Close tab when prompted
Change directories back into root ~ by entering cd into the terminal (the prompt should now look like ubuntu:~ $), then enter the following commands:
sudo mkdir -p /data/db
echo 'mongod --dbpath=data --nojournal' > mongod
chmod a+x mongod
Now test mongod with ./mongod
Remember, you must first enter cd to change directories into root ~ before running ./mongod
Don't forget to shut down ./mongod with ctrl + c each time you're done working
If this error pops up while using the command mongod:
exception in initAndListen: IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db, terminating
then use:
sudo chmod -R go+w /data/db
Reference
I'm trying to store data in a mongoDB database on Amazon EC2. I'm using starcluster to configure and start the EC2 instance. I have an EBS volume mounted at /root/data. I installed mongoDB following the instructions here. When I log in to the EC2 instance I am able to type mongo, which brings me to the mongo shell with the test database. I have then added some data to a database, let's say database1, with various collections in it. When I exit the EC2 instance and terminate it, using starcluster terminate mycluster, and then create a new, different instance, the database1 data is no longer shown in the mongo shell.
I have tried changing the dbpath in the /etc/mongodb.conf file to /root/data/mongodb, which is the EBS volume, and then stopping and starting the mongodb service using sudo service mongodb stop and sudo service mongodb start. I then try mongo again and receive:
MongoDB shell version: 2.2.2
connecting to: test
Sat Jan 19 21:27:42 Error: couldn't connect to server 127.0.0.1:27017 src/mongo/shell/mongo.js:91
exception: connect failed
An additional issue is that whenever I terminate the EC2 instance any changes I made to the config file disappear.
So my basic question is: how do I change where MongoDB stores its data on EC2 so that the data remains when I terminate one EC2 instance and then start another?
Edit:
In response to the first answer:
The directory does exist
I changed the owner to mongodb
I then issued the command sudo service mongodb stop
Checked to see if the port is released using netstat -anp | grep 27017. There was no output.
Restarted mongodb using sudo service mongodb start
Checked for port 27017 again and receive no output.
Tried to connect to the mongo shell and received the same error message.
Changed the mongodb.conf back to the original settings, restarted mongodb as in the above steps, and tried to connect again. Same error.
The EBS volume is configured in the starcluster config to be reattached on each startup.
For the "connect failed" problem after you change /etc/mongodb.conf, check the log file specified in /etc/mongodb.conf (probably at /var/log/mongodb/mongodb.log), then:
Check that the directory specified by dbpath exists.
Make sure it is writable by the "mongodb" user. Perhaps it's best to chown it to mongodb.
Make sure mongod actually released the 27017 port before starting it, using: netstat -anp | grep 27017
Wait a couple of seconds for mongod to restart before launching mongo.
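The checks above can be sketched as a short script (the dbpath /root/data/mongodb matches the one in the question; adjust it to your config):

```shell
#!/bin/sh
DBPATH=/root/data/mongodb   # dbpath from the question's mongodb.conf

# 1. dbpath exists and is owned by the mongodb user
ls -ld "$DBPATH"
sudo chown -R mongodb:mongodb "$DBPATH"

# 2. port 27017 is free before restarting
sudo service mongodb stop
netstat -anp | grep 27017 || echo "port 27017 released"

# 3. restart and give mongod a moment before connecting
sudo service mongodb start
sleep 3
mongo --eval 'db.runCommand({ ping: 1 })'
```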
It's not clear from your question whether you are using Starcluster EBS volumes for persistent storage. Note that ordinary EBS volumes do not automatically reattach when you terminate an instance and start another; you would need to attach and mount them manually.
Once you get that working, you'll probably want to create a custom Starcluster AMI with mongo properly installed and /etc/mongodb.conf appropriately modified.
UPDATE: this was fixed after Meteor v0.4 (2012). For historical purposes:
Excerpt from du:
2890768 ./Code/Meteor/QuarterTo/.meteor/local/db/journal
2890772 ./Code/Meteor/QuarterTo/.meteor/local/db
2890776 ./Code/Meteor/QuarterTo/.meteor/local
2890788 ./Code/Meteor/QuarterTo/.meteor
2890804 ./Code/Meteor/QuarterTo
I merely ask because it was in my Dropbox and pushed me over my limit.
When meteor run is executed, it starts mongodb with default mongo settings, so it creates (massive) prealloc files in .meteor/local/db/journal.
There is no obvious way to disable this behavior. What I have done as a workaround is change the file app/lib/mongo_runner.js and add a --nojournal parameter that gets passed to mongodb at startup.
I created an issue for this: https://github.com/meteor/meteor/issues/15
Maybe you can use the smallfiles=true parameter for MongoDB? It will create smaller prealloc files.
You can turn off preallocation by passing the --noprealloc arg to mongod. The downside is that there will be pauses each time a new storage file needs to be allocated. Depending on the filesystem you are using (e.g., ext3 vs. ext4), this could result in noticeable latency for a user.
The commands that work for me are:
stop the mongodb instance if it is running
sudo service mongod stop
create a new mongodb instance without requiring 3+ GB of preallocated space, using small files.
mongod --noprealloc --smallfiles
If you are getting "ERROR: dbpath (/data/db) does not exist." when running step 2,
then run these commands before step 2:
sudo mkdir -p /data/db/
sudo chown `id -u` /data/db