MongoDB - Show Dbs

I understand that MongoDB's show dbs lists only the databases that are not empty. What is the rationale for this, and why is it implemented in a way that confuses typical users coming to MongoDB from other database systems? Why not simply show all the databases?
Rephrasing:
After installing a MongoDB instance, I logged into the mongo shell and issued the command show dbs, but it does not list the default db, i.e. test. Why?

MongoDB lazily evaluates databases. If a full database instance sprang to life every time a user said db = client["someDBName"], there would be a lot of empty databases obscuring the ones with data in them. Instead, the client and server delay creation of the database until a collection is created. In Python you can force database and collection creation with the create_collection command, as follows:
>>> import pymongo
>>> c = pymongo.MongoClient()
>>> c.list_database_names()
['admin', 'census', 'config', 'dw', 'local', 'logdb', 'test']
>>> c["dummydb"].create_collection("dummycol")
Collection(Database(MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True), 'dummydb'), 'dummycol')
>>> c["dummydb"].list_collection_names()
['dummycol']
>>> c.list_database_names()
['admin', 'census', 'config', 'dummydb', 'dw', 'local', 'logdb', 'test']
>>>
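Creating a collection explicitly is not the only trigger; inserting a document also makes the database spring into existence on the server. A minimal sketch of the same idea (the "dummydb2" name below is made up for illustration, and a local mongod on the default port is assumed):
import pymongo

client = pymongo.MongoClient()
# Inserting a document creates the database and collection on the server,
# so "dummydb2" will now appear in list_database_names() / show dbs.
client["dummydb2"]["dummycol"].insert_one({"x": 1})
print(client.list_database_names())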

Related

Combine MongoDB clients: PyMongo and MongoEngine

In my web application I use Flask as the framework and MongoDB as the persistence layer. There are multiple libraries for connecting to MongoDB; I am currently using the low-level PyMongo. However, I would like to combine it with MongoEngine for some models.
The only approach I see is to create an instance of both clients, which looks a bit dodgy. Is there a simpler way to combine these libraries (PyMongo, MongoEngine) so that they use the same database (with different collections)?
It is currently not possible to reuse an existing PyMongo client to connect MongoEngine, but you can do the opposite: once you connect MongoEngine, you can retrieve its underlying PyMongo client, database and collection instances.
from mongoengine import connect, get_db, Document, StringField

conn = connect()  # connects to the default "test" database on localhost:27017
print(conn)  # MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True, read_preference=Primary())

db = get_db()
print(db)  # Database(MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True, read_preference=Primary()), u'test')

class Person(Document):
    name = StringField()

coll = Person._get_collection()
print(coll)  # Collection(Database(MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True, read_preference=Primary()), u'test'), u'person')
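As a hedged illustration of how the two libraries can then share the same database, the handles obtained above can be used for raw PyMongo calls alongside the MongoEngine model (the document contents below are made up):
# Save through MongoEngine...
Person(name="Alice").save()

# ...and read the very same data through PyMongo, using the handles above.
print(db["person"].count_documents({}))   # count_documents() requires PyMongo >= 3.7
print(coll.find_one({"name": "Alice"}))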

Flask-SQLAlchemy losing connection after restart of DB server

I use Flask-SQLAlchemy in my application. The DB is PostgreSQL 9.3.
I have a simple init of the db, a model and a view:
from config import *
from flask import Flask, request, render_template
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://%s:%s@%s/%s' % (DB_USER, DB_PASSWORD, HOST, DB_NAME)
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    login = db.Column(db.String(255), unique=True, index=True, nullable=False)

db.create_all()
db.session.commit()

@app.route('/users/')
def users():
    users = User.query.all()
    return '1'
And everything works fine. But when the DB server is restarted (sudo service postgresql restart), the first request to /users/ raises sqlalchemy.exc.OperationalError:
OperationalError: (psycopg2.OperationalError) terminating connection due to administrator command
SSL connection has been closed unexpectedly
[SQL: ....
Is there any way to renew the connection inside the view, or to set up Flask-SQLAlchemy so that it renews the connection automatically?
UPDATE:
I ended up using plain SQLAlchemy, declaring the engine, metadata and db_session in every view where I critically need it.
This is not a solution to the question, just a workaround.
So the question is still open; it would be nice to find a proper solution for this :)
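For context, here is a minimal sketch of what that per-view plain-SQLAlchemy workaround might look like; the exact code the author used is not shown in the question, and the /users2/ route name is hypothetical (the URI format and the User model are carried over from the snippet above):
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

@app.route('/users2/')
def users2():
    # Build a fresh engine and session for this request only,
    # so a stale pooled connection can never be reused.
    engine = create_engine('postgresql://%s:%s@%s/%s' % (DB_USER, DB_PASSWORD, HOST, DB_NAME))
    Session = sessionmaker(bind=engine)
    session = Session()
    try:
        users = session.query(User).all()
        return str(len(users))
    finally:
        session.close()
        engine.dispose()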
The SQLAlchemy documentation explains that the default behaviour is to handle disconnects optimistically. Did you try another request? The connection should have re-established itself. I've just tested this with a Flask/Postgres/Windows project and it works.
In a typical web application using an ORM Session, the above condition would correspond to a single request failing with a 500 error, then the web application continuing normally beyond that. Hence the approach is “optimistic” in that frequent database restarts are not anticipated.
If you want the connection state to be checked prior to a connection attempt, you need to handle disconnects pessimistically. The following example code is provided in the documentation:
from sqlalchemy import exc
from sqlalchemy import event
from sqlalchemy.pool import Pool

@event.listens_for(Pool, "checkout")
def ping_connection(dbapi_connection, connection_record, connection_proxy):
    cursor = dbapi_connection.cursor()
    try:
        cursor.execute("SELECT 1")
    except:
        # optional - dispose the whole pool
        # instead of invalidating one at a time
        # connection_proxy._pool.dispose()

        # raise DisconnectionError - pool will try
        # connecting again up to three times before raising.
        raise exc.DisconnectionError()
    cursor.close()
The answer also included PyCharm debugger screenshots of the event being caught, on the first db request and again after a db restart, in two environments: Windows 7 (Postgres 9.4, Flask 0.10.1, SQLAlchemy 1.0.11, Flask-SQLAlchemy 2.1 and psycopg2 2.6.1) and Ubuntu 14.04 (Postgres 9.4, Flask 0.10.1, SQLAlchemy 1.0.8, Flask-SQLAlchemy 2.0 and psycopg2 2.5.5).
In plain SQLAlchemy you can add the pool_pre_ping=True kwarg when calling the create_engine function to fix this issue.
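For instance, a minimal sketch with a placeholder connection string (the URI below is illustrative, not from the question):
from sqlalchemy import create_engine

# pool_pre_ping issues a lightweight "ping" when a connection is checked out
# and transparently replaces stale connections, e.g. after a database restart.
engine = create_engine(
    "postgresql://user:password@localhost/mydb",
    pool_pre_ping=True,
)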
When using Flask-SQLAlchemy you can use the same argument, but you need to pass it as a dict in the engine_options kwarg:
app.db = SQLAlchemy(app, engine_options={"pool_pre_ping": True})

Does pymongo replica set client connection support auto fail over?

I created the following mongo replica set using the mongo CLI:
> config = { _id:"repset", members:[{_id:0,host:"192.168.0.1:27017"},{_id:1,host:"192.168.0.2:27017"},{_id:2,host:"192.168.0.3:27017"}]}
> rs.initiate(config);
All the mongo servers run properly.
>>> import pymongo
>>> from pymongo import MongoClient
>>> servers = ["192.168.0.1:27017", "192.168.0.2:27017", "192.168.0.3:27017"]
>>> MongoClient(servers)
>>> xc = MongoClient()
>>> print xc
MongoClient('localhost', 27017)
>>> print xc.database_names()
[u'test_repsets', u'local', u'admin', u'test']
After I kill the local mongodb server, it shows a connection timeout error:
pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused
It seems there is no automatic failover, even though I listed all the mongodb servers.
Does pymongo handle failover automatically, or how should this situation be handled properly?
Thank you in advance.
In PyMongo 3.x you may want to explicitly state which replica set you are connecting to; PyMongo 3.x changed some of the ways it handles being given an array of servers. This is based on the PyMongo API documentation on connecting to replica sets and automatic failover.
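A minimal sketch of that, assuming the hosts and the replica set name (repset) from the question:
from pymongo import MongoClient

# Passing replicaSet makes the client monitor all members and fail over
# to the new primary automatically when the current one goes down.
servers = ["192.168.0.1:27017", "192.168.0.2:27017", "192.168.0.3:27017"]
xc = MongoClient(servers, replicaSet="repset")
print(xc.list_database_names())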
In your code:
MongoClient(servers)
the line above is not assigned to any variable. It should be assigned to a variable; in your case you then created a fresh MongoClient() instance connected only to localhost, which is what causes the error.
Please change it as follows:
>>> # MongoClient(servers)  # remove this line
>>> # xc = MongoClient()    # remove this line
>>> xc = MongoClient(servers)  # add this line

readAnyDatabase user can create a database on mongodb

The following code leaves behind an empty dummy database. Is this system behaviour intended?
MongoDB is running in --auth mode and the user has the readAnyDatabase role.
import pymongo
print CORE_PROD_URL
mongo = pymongo.MongoClient(CORE_PROD_URL)
print mongo.database_names()
print mongo.dummy.test.count()
print mongo.database_names()
which gives:
mongodb://read_only_user:pw@localhost:27017
[u'admin', u'local']
0
[u'admin', u'local', u'dummy']
The same behaviour happens with find(), while
mongo.dummy.test.insert({'foo': 'bar'})
throws an exception:
OperationFailure: not authorized on new_db to execute command
This is a known bug, SERVER-11051. The database name will disappear from "database_names()" the next time you restart the server, but of course it will reappear next time you read from the "dummy" database.

mongodb: calling end_request on a ReplicaSetConnection throws database error

I am using the new ReplicaSetConnection class for connecting to my MongoDB cluster. The change really comes down to replacing pymongo.Connection with pymongo.ReplicaSetConnection. I use the connection for my purposes and then call end_request on it to make sure I flush the connection before I call disconnect(). This ensures that I don't have a large collection of half-connected sockets after a long run. This works great when I use Connection, but when I use ReplicaSetConnection, pymongo complains that I'm trying to run end_request() on a database object, despite the fact that I am most definitely calling it against the ReplicaSetConnection object. Is this something new in pymongo, or is this an error in the driver? Below is a manual run-through of the problem I'm experiencing.
>>> import pymongo
>>> s = pymongo.ReplicaSetConnection("192.168.1.1:27017, 192.168.1.2:27017", replicaSet='rep1', safe=True)
>>> s
ReplicaSetConnection([u'192.168.1.1:27017', u'192.168.1.2:27017'])
>>> s.read_preference = pymongo.ReadPreference.SECONDARY
>>> s
ReplicaSetConnection([u'192.168.1.1:27017', u'192.168.1.2:27017'])
>>> type(s)
<class 'pymongo.replica_set_connection.ReplicaSetConnection'>
>>> d = s['test']
>>> s.end_request()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "build/bdist.linux-x86_64/egg/pymongo/database.py", line 696, in __call__
TypeError: 'Database' object is not callable. If you meant to call the 'end_request' method on a 'Connection' object it is failing because no such method exists.
>>> s.disconnect()
>>> s
ReplicaSetConnection([u'192.168.1.1:27017', u'192.168.1.2:27017'])
ReplicaSetConnection in PyMongo 2.1 doesn't support end_request(); it will in version 2.2, to be released in the next couple of weeks. Meanwhile, there's no need to call end_request() before disconnect(): disconnect() will close all sockets.
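So under PyMongo 2.1 the whole cleanup reduces to a single disconnect() call. A minimal sketch, reusing the hosts and replica set name from the question above:
import pymongo

s = pymongo.ReplicaSetConnection(
    "192.168.1.1:27017, 192.168.1.2:27017", replicaSet='rep1', safe=True)
db = s['test']
# ... do your reads/writes here ...
s.disconnect()  # closes all pooled sockets; no end_request() needed in 2.1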