In my web application, I use Flask as the framework and MongoDB as the persistence layer. There are multiple libraries for connecting to MongoDB; I am currently using the low-level library PyMongo. However, I would like to combine it with MongoEngine for some models.
The only approach I see is to create an instance of both clients, which looks a bit dodgy. Is there a simpler way to combine these libraries (PyMongo, MongoEngine) so that they use the same database (with different collections)?
It is currently not possible to hand an existing PyMongo client to MongoEngine, but you can do the opposite: once you connect MongoEngine, you can retrieve its underlying PyMongo client, database and collection instances.
from mongoengine import connect, get_db, Document, StringField

conn = connect()  # connects to the default "test" database on localhost:27017
print(conn)  # pymongo.MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True, read_preference=Primary())

db = get_db()
print(db)  # pymongo.Database(MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True, read_preference=Primary()), u'test')

class Person(Document):
    name = StringField()

coll = Person._get_collection()
print(coll)  # pymongo.Collection(Database(MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True, read_preference=Primary()), u'test'), u'person')
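For example, a minimal sketch of mixing the two layers against the same database (assuming the connection above and PyMongo >= 3.7 for count_documents; the "events" collection name is just a hypothetical example):

# MongoEngine side: save a document through the ODM
Person(name="Alice").save()

# PyMongo side: query the very same collection through the raw handle
print(coll.count_documents({"name": "Alice"}))  # counts the document saved through MongoEngine

# other collections in the same database are plain pymongo collections
db["events"].insert_one({"type": "login", "user": "Alice"})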
Related
I understand that mongodb's show dbs lists only the databases that are not empty. What is the rationale for this, and why is it implemented in a way that confuses typical database users coming to MongoDB from other database systems? Why not simply show all the databases?
Rephrasing:
Once I installed a mongodb instance, I logged into the mongo shell and issued the command show dbs, but it does not list the default db, i.e. test. Why?
MongoDB creates databases lazily. If a full database instance sprang to life every time a user wrote db = client["someDBName"], there would be a lot of empty databases obscuring the ones with data in them. Instead, the client and server delay creation of the database until a collection is created in it. In Python you can force database and collection creation with the create_collection command, as follows:
>>> import pymongo
>>> c = pymongo.MongoClient()
>>> c.list_database_names()
['admin', 'census', 'config', 'dw', 'local', 'logdb', 'test']
>>> c["dummydb"].create_collection("dummycol")
Collection(Database(MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True), 'dummydb'), 'dummycol')
>>> c["dummydb"].list_collection_names()
['dummycol']
>>> c.list_database_names()
['admin', 'census', 'config', 'dummydb', 'dw', 'local', 'logdb', 'test']
>>>
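The same lazy creation also happens on the first write: inserting a document is enough to materialise both the database and the collection. A minimal sketch, using a hypothetical "dummydb2" name:

>>> result = c["dummydb2"]["dummycol"].insert_one({"x": 1})
>>> "dummydb2" in c.list_database_names()
True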
I have a MongoDB database hosted on mlab and I would like to use PyMODM as my object modeling library.
This is my code so far:
from pymodm import connect, MongoModel, fields

connect('mongodb://user:pass@ds119788.mlab.com/db')

class Test(MongoModel):
    user = fields.CharField()

if __name__ == "__main__":
    test = Test("test")
    test.save()
But it gives me this error:
pymongo.errors.ServerSelectionTimeoutError: ds119788.mlab.com:27017: [Errno 61] Connection refused
Am I missing something?
You need to use the MongoDB URI provided by mlab for your account. The URI should contain the port number to connect to.
For example, it should look like this:
connect('mongodb://user:password@ds119788.mlab.com:63123/databaseName')
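Putting it together with the model from the question, a minimal sketch (the port 63123 is just a placeholder; use the one shown on your mlab dashboard):

from pymodm import connect, MongoModel, fields

# the URI must include the port assigned by mlab and the database name
connect('mongodb://user:password@ds119788.mlab.com:63123/databaseName')

class Test(MongoModel):
    user = fields.CharField()

if __name__ == "__main__":
    Test(user="test").save()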
I have made changes to the source code and need to run all test cases to check their effect, using the command
./gradlew check
I have MongoDB running on a remote machine. Can anyone help me configure the Java MongoDB driver to work with a remotely running MongoDB?
You need to import the Java driver classes into your project (org.bson.Document and asList are needed by the example below):
import com.mongodb.MongoClient;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;
import static java.util.Arrays.asList;
Then you need to connect to the MongoDB on your server; it can be localhost or your remote host, and you can choose which port to use:
MongoClient mongoClient = new MongoClient("localhost", 27017);
Then you can get a handle to your database:
MongoDatabase db = mongoClient.getDatabase("test");
And to access one of your collections and perform operations on it:
db.getCollection("restaurants").insertOne(
        new Document("address",
                new Document()
                        .append("street", "2 Avenue")
                        .append("zipcode", "10075")
                        .append("building", "1480")
                        .append("coord", asList(-73.9557413, 40.7720266)))
                .append("borough", "Manhattan")
                .append("cuisine", "Italian")
                .append("grades", asList(
                        new Document()
                                .append("grade", "A")
                                .append("score", 11),
                        new Document()
                                .append("grade", "B")
                                .append("score", 17)))
                .append("name", "Vella")
                .append("restaurant_id", "41704620"));
You can pass the connection string when starting the test cases:
./gradlew check -Dorg.mongodb.test.uri=mongodb://example.com:27017/
I use flask-sqlalchemy in my application. The DB is PostgreSQL 9.3.
I have a simple init of the db, a model and a view:
from config import *
from flask import Flask, request, render_template
from flask.ext.sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://%s:%s@%s/%s' % (DB_USER, DB_PASSWORD, HOST, DB_NAME)
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    login = db.Column(db.String(255), unique=True, index=True, nullable=False)

db.create_all()
db.session.commit()

@app.route('/users/')
def users():
    users = User.query.all()
    return '1'
And everything works fine. But when the DB server is restarted (sudo service postgresql restart), the first request to /users/ raises sqlalchemy.exc.OperationalError:
OperationalError: (psycopg2.OperationalError) terminating connection due to administrator command
SSL connection has been closed unexpectedly
[SQL: ....
Is there any way to renew the connection inside the view, or to set up flask-sqlalchemy in another way so that the connection is renewed automatically?
UPDATE.
I ended up using plain SQLAlchemy, declaring the engine, metadata and db_session for every view where I critically need it.
This is not a solution to the question, just a 'hack'.
So the question is still open. It would be nice to find a proper solution for this :)
The SQLAlchemy documentation explains that the default behaviour is to handle disconnects optimistically. Did you try another request? The connection should have re-established itself. I've just tested this with a Flask/Postgres/Windows project and it works.
In a typical web application using an ORM Session, the above condition would correspond to a single request failing with a 500 error, then the web application continuing normally beyond that. Hence the approach is “optimistic” in that frequent database restarts are not anticipated.
If you want the connection state to be checked before each connection attempt, you need to handle disconnects pessimistically. The following example code is provided in the documentation:
from sqlalchemy import exc
from sqlalchemy import event
from sqlalchemy.pool import Pool

@event.listens_for(Pool, "checkout")
def ping_connection(dbapi_connection, connection_record, connection_proxy):
    cursor = dbapi_connection.cursor()
    try:
        cursor.execute("SELECT 1")
    except:
        # optional - dispose the whole pool
        # instead of invalidating one at a time
        # connection_proxy._pool.dispose()

        # raise DisconnectionError - pool will try
        # connecting again up to three times before raising.
        raise exc.DisconnectionError()
    cursor.close()
Here are some screenshots of the event being caught in PyCharm's debugger:
Windows 7 (Postgres 9.4, Flask 0.10.1, SQLAlchemy 1.0.11, Flask-SQLAlchemy 2.1 and psycopg 2.6.1)
On first db request
After db restart
Ubuntu 14.04 (Postgres 9.4, Flask 0.10.1, SQLAlchemy 1.0.8, Flask-SQLAlchemy 2.0 and psycopg 2.5.5)
On first db request
After db restart
In plain SQLAlchemy you can add the pool_pre_ping=True kwarg when calling the create_engine function to fix this issue.
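For example, a minimal sketch (pool_pre_ping is available from SQLAlchemy 1.2 onwards; the connection string is just a placeholder):

from sqlalchemy import create_engine

# the pool pings the connection with a lightweight query before handing it out,
# transparently reconnecting if the server was restarted in the meantime
engine = create_engine('postgresql://user:password@host/dbname', pool_pre_ping=True)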
When using Flask-SQLAlchemy you can use the same argument, but you need to pass it as a dict in the engine_options kwarg:
app.db = SQLAlchemy(app, engine_options={"pool_pre_ping": True})
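Alternatively, assuming Flask-SQLAlchemy >= 2.4 (which introduced the SQLALCHEMY_ENGINE_OPTIONS config key), the same option can be set via the app config instead of the constructor:

app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {'pool_pre_ping': True}
db = SQLAlchemy(app)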
I created the following mongo replica set using the mongo CLI:
> config = { _id:"repset", members:[{_id:0,host:"192.168.0.1:27017"},{_id:1,host:"192.168.0.2:27017"},{_id:2,host:"192.168.0.3:27017"}]}
> rs.initiate(config);
All the mongo servers run properly.
>>> import pymongo
>>> from pymongo import MongoClient
>>> servers = ["192.168.0.1:27017", "192.168.0.2:27017", "192.168.0.3:27017"]
>>> MongoClient(servers)
>>> xc = MongoClient()
>>> print xc
MongoClient('localhost', 27017)
>>> print xc.database_names()
[u'test_repsets', u'local', u'admin', u'test']
After I kill the local mongodb server, it shows me a connection timeout error:
pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused
It seems there is no automatic failover, although I defined the mongodb servers.
Does pymongo handle failover automatically, and how should this situation be handled properly?
Thank you in advance.
In PyMongo 3.x you may want to explicitly state which replica set you are connecting to; PyMongo 3.x changed some of the ways it handles being given an array of servers. I got this from the PyMongo API documentation on connecting to replica sets and automatic failover.
In your code:
MongoClient(servers)
the line above is not assigned to any variable; it should be. In your case you then created a second, separate client (connected to localhost only), which is what causes the error.
Please make the following changes:
>>> #MongoClient(servers) # remove this line
>>> #xc = MongoClient() # remove this line
>>> xc = MongoClient(servers) # add this line
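Combining both answers, a minimal sketch that also names the replica set explicitly (using the "repset" name from the rs.initiate() config in the question), so the driver can monitor all members and fail over automatically:

from pymongo import MongoClient

servers = ["192.168.0.1:27017", "192.168.0.2:27017", "192.168.0.3:27017"]
# passing the seed list together with replicaSet lets pymongo monitor the set
# and route operations to the new primary after a failover
xc = MongoClient(servers, replicaSet="repset")
print(xc.database_names())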