I am using Orion and Mongo with Docker, installed following the "Fastest Way" section of the documentation. All of them run on the same server.
I am able to connect them and work with entities and subscriptions (create, update, and delete all work fine), using a volume so my data persists even after rebooting everything. The annoying part is that Orion continuously sends this error message:
mongoConnectionPool.cpp[194]: Database Error (connection failed, after
100 retries: 'couldn't connect to server localhost:27017 (127.0.0.1)
failed, connection attempt failed'
Why does Orion give this message if it actually does connect and update information in Mongo? What does this message imply, and how can I get rid of it?
Notes:
contextBroker --version: 0.26.1
Docker version 1.10.3, build 20f81dd
mongod --version: db version v2.6.11
@Cortwave pointed me to the solution of this issue.
I do have a link between orion and mongo in my docker-compose.yml file. It's a line under the orion section:
orion:
  command: -dbhost mongo
But when I stop only the orion container with docker stop orion and start it again with docker start orion, the link is lost.
To fix this, I can stop and start both containers with docker-compose stop/start, or, when I stop only orion, I can pass the db information again with docker start orion -dbhost mongo when starting it.
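For reference, a minimal docker-compose.yml along these lines looks roughly like the following. This is only a sketch: the image names, the port mapping, and the --nojournal flag are assumptions based on the FIWARE docs, not copied from my actual file.
mongo:
  image: mongo:2.6
  command: --nojournal
orion:
  image: fiware/orion
  links:
    - mongo
  ports:
    - "1026:1026"
  command: -dbhost mongo
With the link in place, the mongo hostname in -dbhost mongo resolves from inside the orion container.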
I am trying to run a Seyren instance locally, and I would like to do it using the dockerized MongoDB image.
After pulling the latest docker image, I run it and expose port 27017:
docker run --name mongodb -v /data/db:/data/db -p 27017:27017 -d mongo
The next thing to do is compile the Seyren jar file and pass it some variables. docker.local is mapped to the IP of the Docker Toolbox VM in /etc/hosts:
java -jar seyren-1.3.0.jar GRAPHITE_URL=https://graphiteurl.io MONGO_URL=mongodb://docker.local:27017
But I then got the following errors:
30/03/2016 13:58:02.643 [localhost-startStop-1] INFO com.seyren.mongo.MongoStore - Ensuring that we have all the indices we need
30/03/2016 13:58:12.661 [localhost-startStop-1] ERROR com.seyren.mongo.MongoStore - Failure while bootstrapping Mongo indexes. If you've hit this problem it's possible that you have two checks which are named the same and violate an index which we've tried to add. Please correct the problem by removing the clash. If it's something else, please let us know on Github!
com.mongodb.MongoTimeoutException: Timed out after 10000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=localhost:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.ConnectException: Connection refused}}]
What am I missing here?
EDIT:
The thing is, when I run the Seyren jar file, a seyren database does get created in my mongo instance ... so there must be some connection established.
As I understand it, you are using Docker Toolbox on a Mac. Docker Toolbox does not run on your localhost (it runs inside a VirtualBox virtual machine), so you should use the IP of that machine instead of localhost. You can get it with the docker-machine env command in a terminal; the IP in the DOCKER_HOST env variable will be the IP of your MongoDB instance host.
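For example, assuming the Docker Toolbox machine is called default (a sketch; the machine name and the 192.168.99.100 address are placeholders, so use whatever docker-machine reports for you):
docker-machine ip default     # prints the VM's IP, e.g. 192.168.99.100
docker-machine env default    # DOCKER_HOST contains that same IP
java -jar seyren-1.3.0.jar GRAPHITE_URL=https://graphiteurl.io MONGO_URL=mongodb://192.168.99.100:27017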
Found the solution. I had to use the mongo:2.7 image, since that was the only one that worked for me.
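In other words, the earlier docker run command, just pinned to that tag (same flags as before; only the image tag changes):
docker run --name mongodb -v /data/db:/data/db -p 27017:27017 -d mongo:2.7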
I am using docker-compose to run Orion+Mongo.
Then, I am starting accumulator-server with:
drasko@Lenin:~/fiware/fiware-orion/scripts$ ./accumulator-server.py 1028 /accumulate on
verbose mode is on
* Running on http://0.0.0.0:1028/ (Press CTRL+C to quit)
However, when a notification is triggered, Orion gives an error:
orion_1 | WARNING#21:27:21 httpRequestSend.cpp[438]: Notification failure for localhost:1028 (curl_easy_perform failed: Couldn't connect to server)
Could it be due to the fact that Orion runs inside Docker, and how can I solve this problem?
Your problem is that your Orion host cannot reach the accumulator.
In the subscription you are using a reference field with the value "localhost", so Orion is looking at localhost relative to its own Docker container.
To solve it, you should either run the accumulator inside the Docker image, or make Orion able to contact the accumulator in some way other than "localhost" (for example, run the accumulator in a different container and link it using docker-compose).
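For example, if the accumulator listens on the Docker host (or on the Docker Toolbox VM), the subscription could reference that address instead of localhost. A sketch using Orion's NGSIv1 subscribeContext, where 192.168.99.100, Room1, and temperature are placeholders:
curl localhost:1026/v1/subscribeContext -s -S \
  -H 'Content-Type: application/json' -H 'Accept: application/json' \
  -d '{
    "entities": [{"type": "Room", "isPattern": "false", "id": "Room1"}],
    "attributes": ["temperature"],
    "reference": "http://192.168.99.100:1028/accumulate",
    "duration": "P1M",
    "notifyConditions": [{"type": "ONCHANGE", "condValues": ["temperature"]}]
  }'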
Just finished installing mongodb, however, I have not been able to make complete sense of the difference between mongo vs mongod commands. Yes, I do understand that
mongod is the primary daemon process for the MongoDB system
and that
mongo is an interactive JavaScript shell interface to MongoDB
but what does that mean practically? I presume every time I want to use mongodb, I need to run mongod first. But then why am I able to run mongo without having started mongod first? Does mongo run mongod in the background automatically? Secondly, if I run mongod it eventually ends with something like
waiting for connections on port 27017
but then I can't type anything after that. Again, I presume that mongodb has been started in the background so I can safely close the terminal. But if I close the terminal by mistake (on a mac), how can I get that back up on the terminal? Also, how can I terminate the service for it to stop listening to the port?
So as you can see, I have a bunch of simple questions, but most of them come down to the practical question of when to use mongo and when to use mongod. I can't seem to find anything online that explains these in a practical sense.
As with most database software, Mongo is split into a server and client. The server is the main database component which stores and manages data. Clients come in various flavours and connect to the server to insert or query data.
mongod is the server portion. You start it, it runs, end of story.
mongo is a default command line client. You start it, you connect to a server, you enter commands, you quit it.
You have to run mongod first, otherwise you have no database to interact with. Simply running mongod on the command line keeps it in the foreground, and it does not offer any interactivity. So yes, you'll just see something like "waiting for connections...", and nothing more. You typically don't run mongod like that on the command line, though. You typically create an init.d script or launchd file (or however you manage your daemons) and have the system start it automatically at boot time.
If you want to launch mongod as a one-off thing without having it permanently running on your system, put it in the background:
$ mongod &
The & puts it in the background and you can continue to use your command line. You can see it and kill it like this:
~ deceze$ mongod &
[1] 1065
~ deceze$ jobs
[1]+ Running mongod &
~ deceze$ kill %1
[1]+ Done mongod
Once your server is running, start mongo, connect to the server, and interact with it. If you try to run mongo without a running server, it should complain that it's not able to connect:
~ deceze$ mongo
MongoDB shell version: 3.0.2
connecting to: test
2015-08-13T09:36:13.518+0200 W NETWORK Failed to connect to 127.0.0.1:27017, reason: errno:61 Connection refused
2015-08-13T09:36:13.521+0200 E QUERY Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed
at connect (src/mongo/shell/mongo.js:179:14)
at (connect):1:6 at src/mongo/shell/mongo.js:179
exception: connect failed
If your mongo shell does connect to something, you might unknowingly have another instance of mongod running on your system.
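A quick way to check for that (a sketch; macOS/Linux) is to look for any process already bound to the default port:
ps aux | grep '[m]ongod'      # list any running mongod processes
lsof -i :27017                # show which process is listening on 27017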
With mongod you are starting the server on your machine. As you stated correctly, mongo is your client, your user interface, if you will. By default it connects to your local instance of MongoDB. If you start your client without a local server instance running, you have to tell it where it should connect to (e.g. a remote instance):
http://docs.mongodb.org/manual/reference/program/mongo/
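For example (a sketch; db.example.com is a placeholder for your actual server):
mongo --host db.example.com --port 27017
mongo db.example.com:27017/mydatabase    # host:port/database in a single argument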
I recently started the mongodb service on my Windows machine, and it is running successfully. Or at least I think it is; I'm not 100% sure, and I don't know what port it is running on, because all attempts to check its status have failed. When I try to run mongo.exe, I get the following error:
paul#PAUL_LAPTOP /c/program files/mongodb 2.6 standard/bin
$ mongo
MongoDB shell version: 2.6.3
connecting to: test
2014-08-11T03:36:15.802-0400 warning: Failed to connect to 127.0.0.1:27017, reason: errno:10061 No connection could be made because the target machine actively refused it.
2014-08-11T03:36:15.808-0400 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
exception: connect failed
Any ideas how I could check why this is happening? One good first step would be checking the status of my mongo service, which I am not sure how to do.
Any help is greatly appreciated.
Have you gone through all the steps here: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-windows/?
You need to run the mongod command to start MongoDB, and you also have to define the path where MongoDB should store your database:
C:\mongodb\bin\mongod.exe --dbpath d:\test\mongodb\data
When you typed the mongod command, did you also give it a path? This is the step beforehand, and it is usually the issue.
mongod --dbpath=<the path where you want the working area for your database, without quotation marks>
Example: mongod --dbpath=C:/Users/Kyle-3/Desktop/DEV/dangerzonearea/test/mongodb
That is my path; don't forget, if you are on Windows and copied the path, to flip the backslashes to forward slashes, or it won't work!
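Putting the two answers together, a minimal Windows sequence might look like this (the paths are only examples; use whichever directory you want the database to live in):
mkdir C:\data\db
C:\mongodb\bin\mongod.exe --dbpath C:\data\db
REM in a second command prompt, once "waiting for connections" appears:
C:\mongodb\bin\mongo.exe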
I'm trying to store data in a mongoDB database on Amazon EC2. I'm using starcluster to configure and start the EC2 instance. I have an EBS volume mounted at /root/data. I installed mongoDB following the instructions here. When I log in to the EC2 instance I am able to type mongo, which brings me to the mongo shell with the test database. I have then added some data to a database, let's say database1, with various collections in it. When I exit the EC2 instance and terminate it, using starcluster terminate mycluster, and then create a new, different instance, the database1 data is no longer shown in the mongo shell.
I have tried changing the dbpath in the /etc/mongodb.conf file to /root/data/mongodb, which is the EBS volume, and then stopping and starting the mongodb service using sudo service mongodb stop and sudo service mongodb start. I then try mongo again and receive:
MongoDB shell version: 2.2.2
connecting to: test
Sat Jan 19 21:27:42 Error: couldn't connect to server 127.0.0.1:27017 src/mongo/shell/mongo.js:91
exception: connect failed
An additional issue is that whenever I terminate the EC2 instance any changes I made to the config file disappear.
So my basic question is: how do I change where MongoDB stores its data on EC2 so that the data remains when I terminate one EC2 instance and then start another?
Edit:
In response to the first answer:
The directory does exist
I changed the owner to mongodb
I then issued the command sudo service mongodb stop
Checked to see if the port is released using netstat -anp | grep 27017. There was no output.
Restarted mongodb using sudo service mongodb start
Checked for port 27017 again and receive no output.
Tried to connect to the mongo shell and received the same error message.
Changed the mongodb.conf back to the original settings, restarted mongodb as in the above steps, and tried to connect again. Same error.
The EBS volume is configured in the starcluster config to be reattached on each startup.
For the "connect failed" after you change /etc/mongodb.conf problem, you can check the log file specified in the /etc/mongodb.conf (probably at /var/log/mongodb/mongodb.log):
Check that the directory specified by dbpath exists.
Make sure it is writable by the "mongodb" user. Perhaps it's best to chown to mongodb.
Make sure mongod actually released the 27017 port before starting it using: netstat -anp | grep 27017
Wait a couple seconds for mongod to restart before launching mongo.
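A combined sketch of those checks, assuming dbpath is set to /root/data/mongodb as in the question:
sudo mkdir -p /root/data/mongodb                  # the dbpath directory must exist
sudo chown -R mongodb:mongodb /root/data/mongodb  # and be writable by the mongodb user
sudo service mongodb stop
netstat -anp | grep 27017                         # should print nothing once the port is released
sudo service mongodb start
sleep 5                                           # give mongod a moment to start
tail -n 50 /var/log/mongodb/mongodb.log           # look here for the actual reason mongod won't start
mongo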
It's not clear from your question whether you are using Starcluster EBS volumes for persistent storage. Note that ordinary EBS volumes do not automatically persist and reattach when you terminate an instance and start another; you would need to attach and mount them manually.
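If you do end up reattaching a plain EBS volume by hand, the rough shape is something like this (a sketch; the volume ID, instance ID, device name, and mount point are placeholders):
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf
# then, on the instance (the device often shows up as /dev/xvdf):
sudo mkdir -p /root/data
sudo mount /dev/xvdf /root/data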
Once you get that working you'll probably want to create a custom Starcluster AMI with mongo properly installed and /etc/mongodb.conf appropriately modified.