Does pymongo replica set client connection support auto fail over? - mongodb

I created the following mongo replica sets by using mongo cli:
> config = { _id:"repset", members:[{_id:0,host:"192.168.0.1:27017"},{_id:1,host:"192.168.0.2:27017"},{_id:2,host:"192.168.0.3:27017"}]}
> rs.initiate(config);
All the mongo servers run properly.
>>> import pymongo
>>> from pymongo import MongoClient
>>> servers = ["192.168.0.1:27017", "192.168.0.2:27017", "192.168.0.3:27017"]
>>> MongoClient(servers)
>>> xc = MongoClient()
>>> print xc
MongoClient('localhost', 27017)
>>> print xc.database_names()
[u'test_repsets', u'local', u'admin', u'test']
After I kill the local mongodb server, it shows a connection timeout error:
pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused
It seems there is no auto fail over, although I defined the mongodb servers.
I am wondering if pymongo handles fail over automatically, or how this situation is handled properly?
Thank you in advance.

In PyMongo 3.x you may want to explicitly state which replica set you are connecting to, since PyMongo 3.x changed how it handles being given a list of servers. This is covered in the PyMongo documentation on connecting to replica sets and automatic failover.

In your code:
MongoClient(servers)
this line is not assigned to any variable, so the replica-set client it creates is never used. You then create a second client with MongoClient(), which connects only to localhost, and that is the connection that fails. Assign the replica-set client to a variable instead:
>>> #MongoClient(servers) # remove this line
>>> #xc = MongoClient() # remove this line
>>> xc = MongoClient(servers) # add this line
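If you also want to be explicit about the replica set, a minimal sketch (assuming the set name "repset" from the rs.initiate() config above) looks like this:
from pymongo import MongoClient
servers = ["192.168.0.1:27017", "192.168.0.2:27017", "192.168.0.3:27017"]
# replicaSet makes PyMongo 3.x discover every member of "repset" and route
# operations to the new primary after a failover election.
xc = MongoClient(servers, replicaSet="repset", serverSelectionTimeoutMS=5000)
xc.admin.command("ping")  # forces server selection; raises if no member is reachable
With such a client, killing a single member (including the primary) does not break the application; once a new primary is elected, subsequent operations are routed to it.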

Related

how to connect atlas mongodb with a cluster

I am trying to connect my code to Atlas MongoDB but I get the error below. This is my code:
from pymongo import MongoClient
client = MongoClient("mongodb+srv://username:test#cluster0.yntdf.mongodb.net/test?retryWrites=true&w=majority")
db = client["test"]
collection = db["test"]
collection.insert_one({"_id":0, "name": "hello", "score": 5})
I got the error:
ConfigurationError: A DNS label is empty.
Does anyone know how to handle this error? I have installed dnspython and pymongo.
Your connection string is wrong. Double-check that it is correct, and also that the IP of the calling server is whitelisted in Atlas.
I got the same issue; the following code works for me with Python 3.9.1:
import pymongo
import urllib.parse
MONGODB_USERNAME = urllib.parse.quote_plus('test')
MONGODB_PASSWORD = urllib.parse.quote_plus('tset#000')
MONGODB_DATABASE = 'sampledb'
MONGODB_URL = "mongodb://"+MONGODB_USERNAME+":"+MONGODB_PASSWORD+"@cluster0.wq5js.mongodb.net/"+MONGODB_DATABASE+"?retryWrites=true&w=majority"
client = pymongo.MongoClient(MONGODB_URL)
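A quick way to confirm that the resulting client can actually reach the cluster (a small hedged check; the cluster hostname above is the answerer's, substitute your own) is to issue a ping, which raises ServerSelectionTimeoutError if nothing is reachable:
client.admin.command('ping')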

MongoDB - Show Dbs

I understand that MongoDB's show dbs lists a database only if it is not empty. What is the rationale for this? Implemented this way, it confuses a typical DB user who wants to move to MongoDB from another database world. Why don't they simply show all the dbs?
Rephrasing:
Once I have installed a MongoDB instance, I log into the mongo shell and issue the command show dbs, but it does not list the default db, i.e. test. Why?
MongoDB lazily evaluates databases. If a full database instance sprang to life every time a user said db = client["someDBName"], there would be a lot of empty databases obscuring the ones with data in them. Instead, the client and server delay creation of the database until a collection is created. In Python you can force database and collection creation by using the create_collection command as follows:
>>> import pymongo
>>> c=pymongo.MongoClient()
>>> c["dummydb"].create_collection("dummycol")
Collection(Database(MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True), 'dummydb'), 'dummycol')
>>> c["dummydb"].list_collection_names()
['dummycol']
>>> c.list_database_names()
['admin', 'bonkers', 'census', 'config', 'dummydb', 'dw', 'local', 'logdb', 'test']
>>> import pymongo
>>> c=pymongo.MongoClient()
>>> c.list_database_names()
['admin', 'census', 'config', 'dw', 'local', 'logdb', 'test']
>>> c["dummydb"].create_collection("dummycol")
Collection(Database(MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True), 'dummydb'), 'dummycol')
>>> c.list_database_names()
['admin', 'census', 'config', 'dummydb', 'dw', 'local', 'logdb', 'test']
>>>
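The same lazy behaviour means that inserting a document is also enough to materialise a database; a small sketch (the database and collection names here are made up) continuing from the client c above:
>>> result = c["anotherdb"]["somecollection"].insert_one({"x": 1})
>>> "anotherdb" in c.list_database_names()
True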

pymongo - MongoClient retryWrites=false is not working

I'm currently working on a simple python CRUD script to check MongoDB out. Turns out I'm liking it a lot, but I have found myself unable to work with MongoDB transactions. Every time I try to start a transaction, an exception is thrown saying:
This MongoDB deployment does not support retryable writes. Please add retryWrites=false to your connection string.
And, even though I've already added that option to my connection string:
self._client = MongoClient('mongodb://localhost/?retryWrites=false')
self._db = self._client.workouts
self._collection = self._db.workouts
That error is still popping up when running the following lines of code:
with self._client.start_session() as s:
    with s.start_transaction():
        self._collection.delete_one({'_id': id}, session=s)
        next = self._collection.find_one({'_id': next_id}, session=s)
        return next
What can I do?
I'm running python 3.7.3, pymongo 3.9.0 and MongoDB 4.0.12.

Flask-sqlalchemy losing connection after restarting of DB server

I use flask-sqlalchemy in my application. The DB is PostgreSQL 9.3.
I have a simple init of the db, a model and a view:
from config import *
from flask import Flask, request, render_template
from flask.ext.sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://%s:%s@%s/%s' % (DB_USER, DB_PASSWORD, HOST, DB_NAME)
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    login = db.Column(db.String(255), unique=True, index=True, nullable=False)

db.create_all()
db.session.commit()

@app.route('/users/')
def users():
    users = User.query.all()
    return '1'
And all works fine. But when the DB server is restarted (sudo service postgresql restart), the first request to /users/ raises sqlalchemy.exc.OperationalError:
OperationalError: (psycopg2.OperationalError) terminating connection due to administrator command
SSL connection has been closed unexpectedly
[SQL: ....
Is there any way to renew connection inside view, or setup flask-sqlalchemy in another way for renew connection automatically?
UPDATE.
I ended up using plain SQLAlchemy, declaring the engine, metadata and db_session for every view where I critically need it.
This is not a solution to the question, just a hack.
So the question is open. I am sure it would be nice to find a solution for this :)
The SQLAlchemy documentation explains that the default behaviour is to handle disconnects optimistically. Did you try another request - the connection should have re-established itself? I've just tested this with a Flask/Postgres/Windows project and it works.
In a typical web application using an ORM Session, the above condition would correspond to a single request failing with a 500 error, then the web application continuing normally beyond that. Hence the approach is “optimistic” in that frequent database restarts are not anticipated.
If you want the connection state to be checked prior to a connection attempt, you need to handle disconnects pessimistically. The following example code is provided in the documentation:
from sqlalchemy import exc
from sqlalchemy import event
from sqlalchemy.pool import Pool

@event.listens_for(Pool, "checkout")
def ping_connection(dbapi_connection, connection_record, connection_proxy):
    cursor = dbapi_connection.cursor()
    try:
        cursor.execute("SELECT 1")
    except:
        # optional - dispose the whole pool
        # instead of invalidating one at a time
        # connection_proxy._pool.dispose()

        # raise DisconnectionError - pool will try
        # connecting again up to three times before raising.
        raise exc.DisconnectionError()
    cursor.close()
I verified in PyCharm's debugger that the checkout event fires on the first db request after a db restart (screenshots omitted) in both of these environments:
Windows 7 (Postgres 9.4, Flask 0.10.1, SQLAlchemy 1.0.11, Flask-SQLAlchemy 2.1 and psycopg 2.6.1)
Ubuntu 14.04 (Postgres 9.4, Flask 0.10.1, SQLAlchemy 1.0.8, Flask-SQLAlchemy 2.0 and psycopg 2.5.5)
In plain SQLAlchemy you can add the pool_pre_ping=True kwarg when calling the create_engine function to fix this issue.
When using Flask-SQLAlchemy you can use the same argument, but you need to pass it as a dict in the engine_options kwarg:
app.db = SQLAlchemy(app, engine_options={"pool_pre_ping": True})
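For the plain SQLAlchemy case, a minimal sketch (the connection URL is a placeholder):
from sqlalchemy import create_engine
# pool_pre_ping issues a lightweight "ping" on each connection checkout and
# transparently replaces connections that the server has dropped.
engine = create_engine(
    "postgresql://user:password@localhost/mydb",  # hypothetical URL
    pool_pre_ping=True,
)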

mongodb: calling end_request on a ReplicaSetConnection throws database error

I am using the new ReplicaSetConnection method for making a connection to my mongodb cluster. The change really comes down to replacing pymongo.Connection with pymongo.ReplicaSetConnection. I use the connection for my purposes and then I call end_request on it to make sure I flush the connection before I call disconnect(). This ensures that I don't have a large collection of half-connected sockets after a long run. This works great when I use Connection, but when I use ReplicaSetConnection pymongo complains that I'm trying to run end_request() on a database object, despite the fact that I am most definitely calling this against the ReplicaSetConnection object. Is this something new in pymongo or is this an error in the driver? Below is a manual run-through of the problem I'm experiencing.
>>> import pymongo
>>> s = pymongo.ReplicaSetConnection("192.168.1.1:27017, 192.168.1.2:27017", replicaSet='rep1', safe=True)
>>> s
ReplicaSetConnection([u'192.168.1.1:27017', u'192.168.1.2:27017'])
>>> s.read_preference = pymongo.ReadPreference.SECONDARY
>>> s
ReplicaSetConnection([u'192.168.1.1:27017', u'192.168.1.2:27017'])
>>> type(s)
<class 'pymongo.replica_set_connection.ReplicaSetConnection'>
>>> d = s['test']
>>> s.end_request()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "build/bdist.linux-x86_64/egg/pymongo/database.py", line 696, in __call__
TypeError: 'Database' object is not callable. If you meant to call the 'end_request' method on a 'Connection' object it is failing because no such method exists.
>>> s.disconnect()
>>> s
ReplicaSetConnection([u'192.168.1.1:27017', u'192.168.1.2:27017'])
ReplicaSetConnection in PyMongo 2.1 doesn't support end_request(); it will in version 2.2 to be released in the next couple weeks. Meanwhile, there's no need to call end_request() before disconnect. Disconnect will close all sockets.
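A minimal sketch of that workaround, reusing the connection object from the question (skip end_request() entirely and just disconnect):
>>> s = pymongo.ReplicaSetConnection("192.168.1.1:27017, 192.168.1.2:27017", replicaSet='rep1', safe=True)
>>> # ... use the connection ...
>>> s.disconnect()  # on PyMongo 2.1 this closes all pooled sockets; no end_request() needed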