How can I use FastAPI Routers with FastAPI-Users and MongoDB?

I can use MongoDB with FastAPI either
with a global motor.motor_asyncio.AsyncIOMotorClient object, or else
by creating one during the startup event, per this SO answer which refers to this "Real World Example".
However, I also want to use fastapi-users since it works nicely with MongoDB out of the box. The downside is that it seems to work only with the first method of handling my DB client connection (i.e., a global). The reason is that in order to configure fastapi-users, I have to have an active MongoDB client connection just so I can make the db object shown below, and I need that db to then make the MongoDBUserDatabase object required by fastapi-users:
# main.py
import motor.motor_asyncio
from fastapi import FastAPI
from fastapi_users import FastAPIUsers
from fastapi_users.authentication import CookieAuthentication
from fastapi_users.db.mongodb import MongoDBUserDatabase

# User, UserCreate, UserUpdate, UserDB are the fastapi-users model classes
# defined elsewhere in the project

app = FastAPI()

# Create global MongoDB connection
DATABASE_URL = "mongodb://user:password@localhost/auth_db"
client = motor.motor_asyncio.AsyncIOMotorClient(DATABASE_URL, uuidRepresentation="standard")
db = client["my_db"]

# Set up fastapi_users
user_db = MongoDBUserDatabase(UserDB, db["users"])
cookie_authentication = CookieAuthentication(secret="lame secret", lifetime_seconds=3600, name="cookiemonster")
fastapi_users = FastAPIUsers(
    user_db,
    [cookie_authentication],
    User,
    UserCreate,
    UserUpdate,
    UserDB,
)
After that point in the code, I can import the fastapi_users Routers. However, if I want to break up my project into FastAPI Routers of my own, I'm hosed because:
If I move the client creation to another module to be imported into both my app and my routers, then I have different clients in different event loops and get errors like RuntimeError: Task <Task pending name='Task-4' coro=<RequestResponseCycle.run_asgi() running at /usr/local/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py:389> cb=[set.discard()]> got Future <Future pending cb=[_chain_future.<locals>._call_check_cancel() at /usr/local/lib/python3.8/asyncio/futures.py:360]> attached to a different loop (touched on in this SO question)
If I use the solutions of the "Real World Example", then I get stuck on where to build my fastapi_users object in my code example: I can't do it in main.py because there's no db object yet.
I considered making the MongoDBUserDatabase object part of the startup event code (i.e., within async def connect_to_mongo() from the Real World Example), but I can't see how to make that work either.
How can I either
make a global MongoDB client and FastAPI-Users object in a way that can be shared among my main app and several routers without "attached to a different loop" errors, or
create fancy wrapper classes and functions to set up FastAPI-Users with the startup trigger?

I don't think my solution is complete or correct, but I figured I'd post it in case it inspires any ideas; I'm stumped. I have run into the exact same dilemma, and it almost seems like a design flaw.
I followed this MongoDB full example and named it main.py.
At this point my app does not work. The server starts up, but any request that queries the DB results in the aforementioned "attached to a different loop" error.
Looking for guidance, I stumbled upon the same "real world" example.
In main.py I added the startup and shutdown event handlers:
# Event handlers
app.add_event_handler("startup", create_start_app_handler(app=app))
app.add_event_handler("shutdown", create_stop_app_handler(app=app))
In dlw_api/db/events.py I have this:
import logging

from dlw_api.user import UserDB
from fastapi import FastAPI
from fastapi_users.db.mongodb import MongoDBUserDatabase
from motor.motor_asyncio import AsyncIOMotorClient

LOG = logging.getLogger(__name__)

DB_NAME = "dlwLocal"
USERS_COLLECTION = "users"
DATABASE_URI = "mongodb://dlw-mongodb:27017"  # protocol://container_name:port

_client: AsyncIOMotorClient = None
_users_db: MongoDBUserDatabase = None


def get_users_db() -> MongoDBUserDatabase:
    return _users_db


async def connect_to_db(app: FastAPI) -> None:
    global _client, _users_db
    # LOG.info("Connecting to %s", repr(DATABASE_URI))
    _client = AsyncIOMotorClient(DATABASE_URI)
    db = _client[DB_NAME]
    collection = db[USERS_COLLECTION]
    _users_db = MongoDBUserDatabase(UserDB, collection)
    LOG.info(f"Connected to {DATABASE_URI}")


async def close_db_connection(app: FastAPI) -> None:
    _client.close()
    LOG.info("Connection closed")
And in dlw_api/events.py:
from typing import Callable

from fastapi import FastAPI
from dlw_api.db.events import close_db_connection, connect_to_db, get_users_db
from dlw_api.user import configure_user_auth_routes
from fastapi_users.authentication import CookieAuthentication

COOKIE_SECRET = "THIS_NEEDS_TO_BE_SET_CORRECTLY"  # TODO: <--|
COOKIE_LIFETIME_SECONDS: int = 3_600
COOKIE_NAME = "c-is-for-cookie"

# Auth stuff:
_cookie_authentication = CookieAuthentication(
    secret=COOKIE_SECRET,
    lifetime_seconds=COOKIE_LIFETIME_SECONDS,
    name=COOKIE_NAME,
)

auth_backends = [
    _cookie_authentication,
]


def create_start_app_handler(app: FastAPI) -> Callable:
    async def start_app() -> None:
        await connect_to_db(app)
        configure_user_auth_routes(
            app=app,
            auth_backends=auth_backends,
            user_db=get_users_db(),
            secret=COOKIE_SECRET,
        )
    return start_app


def create_stop_app_handler(app: FastAPI) -> Callable:
    async def stop_app() -> None:
        await close_db_connection(app)
    return stop_app
This doesn't feel correct to me. Does this mean all routes that use Depends for user auth have to be attached inside the server's startup event handler?

The author (frankie567) of fastapi-users created a repl.it showing a solution of sorts. My discussion about this solution may provide more context, but the key parts of the solution are:
Don't bother using the FastAPI startup trigger along with Depends for your MongoDB connectivity management. Instead, create a separate file (i.e., db.py) to create your DB connection and client object. Import this db object whenever needed, such as in your Routers, and then use it as a global.
Also create a separate users.py to do 2 things:
Create a globally used fastapi_users = FastAPIUsers(...) object for use with other Routers to handle authorization.
Create a fastapi.APIRouter() object and attach all the fastapi-users routers to it (router.include_router(...)).
In all your other Routers, import both db and fastapi_users from the above as needed
Key: split your main code up into
a main.py which only imports uvicorn and serves app:app.
an app.py which has your main FastAPI object (i.e., app) and which then attaches all your Routers, including the one from users.py with all the fastapi-users routers attached to it.
By splitting up the code per the last point above, you avoid the "attached to a different loop" error.
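A rough sketch of that layout, assuming the same older fastapi-users API used elsewhere in this question (CookieAuthentication, MongoDBUserDatabase); the model imports and the router-factory calls are illustrative and depend on your fastapi-users version:
# db.py -- the one place the Motor client is created; everything imports `db` from here
import motor.motor_asyncio

DATABASE_URL = "mongodb://user:password@localhost/auth_db"
client = motor.motor_asyncio.AsyncIOMotorClient(DATABASE_URL, uuidRepresentation="standard")
db = client["my_db"]

# users.py -- build fastapi_users once and gather its routers on a single APIRouter
from fastapi import APIRouter
from fastapi_users import FastAPIUsers
from fastapi_users.authentication import CookieAuthentication
from fastapi_users.db.mongodb import MongoDBUserDatabase

from db import db
from models import User, UserCreate, UserUpdate, UserDB  # your user models

user_db = MongoDBUserDatabase(UserDB, db["users"])
cookie_authentication = CookieAuthentication(secret="change-me", lifetime_seconds=3600)

fastapi_users = FastAPIUsers(user_db, [cookie_authentication], User, UserCreate, UserUpdate, UserDB)

router = APIRouter()
router.include_router(fastapi_users.get_auth_router(cookie_authentication), prefix="/auth", tags=["auth"])
router.include_router(fastapi_users.get_register_router(), prefix="/auth", tags=["auth"])

# app.py -- the FastAPI object lives here and includes every router
from fastapi import FastAPI

import users  # plus your own routers, which import db and fastapi_users as needed

app = FastAPI()
app.include_router(users.router)

# main.py -- only imports uvicorn and serves app:app
import uvicorn

if __name__ == "__main__":
    uvicorn.run("app:app", host="0.0.0.0", port=8000)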

I faced a similar issue, and all I had to do to get motor and FastAPI running in the same loop was this:
import asyncio

client = AsyncIOMotorClient()
client.get_io_loop = asyncio.get_event_loop
I did not set up on_startup or anything like that.
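If you combine this trick with the shared db.py module from the previous answer, a minimal sketch could look like this (the URL and database name are placeholders):
# db.py -- one client shared by all routers, pinned to whatever loop is running
import asyncio

from motor.motor_asyncio import AsyncIOMotorClient

client = AsyncIOMotorClient("mongodb://localhost:27017", uuidRepresentation="standard")
# Resolve the event loop lazily so motor uses the loop uvicorn is actually running
client.get_io_loop = asyncio.get_event_loop
db = client["my_db"]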

Related

Pytest + Appium test framework

I'm very new to automation development, and I'm currently starting to write an Appium + pytest based Android app testing framework.
I managed to run tests on a connected device using this code, which seems to use unittest:
import unittest

from appium import webdriver


class demo(unittest.TestCase):
    reportDirectory = 'reports'
    reportFormat = 'xml'
    dc = {}
    driver = None
    # testName = 'test_setup_tmotg_demo'

    def setUp(self):
        self.dc['reportDirectory'] = self.reportDirectory
        self.dc['reportFormat'] = self.reportFormat
        # self.dc['testName'] = self.testName
        self.dc['udid'] = 'RF8MA2GW1ZF'
        self.dc['appPackage'] = 'com.tg17.ud.internal'
        self.dc['appActivity'] = 'com.tg17.ud.ui.splash.SplashActivity'
        self.dc['platformName'] = 'android'
        self.dc['noReset'] = 'true'
        self.driver = webdriver.Remote('http://localhost:4723/wd/hub', self.dc)

    # def test_function1():
    #     code
    # def test_function2():
    #     code
    # def test_function3():
    #     code
    # etc...

    def tearDown(self):
        self.driver.quit()


if __name__ == '__main__':
    unittest.main()
As you can see all the functions are currently within 'demo' class.
The intention is to create several test cases for each part of the app (for example: registration, main screen, premium subscription, etc.). That could add up to hundreds of test cases eventually.
It seems to me that simply continuing to list them all in this same class would be messy and would give me very limited control. However, I didn't find any other way to arrange my tests while keeping the device connected via Appium.
The question is what would be the right way to organize the project so that I can:
Set up the device with appium server
Run all the test suites in sequential order (registration, main screen, subscription, etc...).
Perform the cleaning... export results, disconnect device, etc.
I hope I described the issue clearly enough. Would be happy to elaborate if needed.
Well, you have a lot of questions here, so it might be good to split them up into separate threads. But first of all, you can learn a lot about how Appium works by checking out the documentation here, and for the unittest framework here.
All Appium cares about is the capabilities file (or variable). So you can either populate it manually or write some helper function to do that for you. Here is a list of what can be used.
You can create as many test classes (or suites) as you want and add them together in any order you wish. This helps to break things up into manageable chunks. (See the example below.)
You will have to create some helper methods here as well, since Appium itself will not do much cleaning. You can use the adb command in the shell for managing Android devices (see the small adb sketch after the example below).
import unittest
from unittest import TestCase


# Create a Base class for common methods
class BaseTest(unittest.TestCase):

    # setUpClass is only run once, not for every suite/test
    @classmethod
    def setUpClass(cls) -> None:
        # Init your driver and read the capabilities here
        pass

    @classmethod
    def tearDownClass(cls) -> None:
        # Do cleanup, close the driver, ...
        pass


# Use the BaseTest class from before
# You can then duplicate this class for other suites of tests
class TestLogin(BaseTest):

    @classmethod
    def setUpClass(cls) -> None:
        super(TestLogin, cls).setUpClass()
        # Do things here that are needed only once (like logging in)

    def setUp(self) -> None:
        # This is executed before every test
        pass

    def testOne(self):
        # Write your tests here
        pass

    def testTwo(self):
        # Write your tests here
        pass

    def tearDown(self) -> None:
        # This is executed after every test
        pass


if __name__ == '__main__':
    # Load the tests from the suite class we created
    test_cases = unittest.defaultTestLoader.loadTestsFromTestCase(TestLogin)
    # If you want to add more
    test_cases.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(TestSomethingElse))
    # Run the actual tests
    unittest.TextTestRunner().run(test_cases)
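For the cleanup side, a helper that shells out to adb from tearDownClass could look something like the sketch below. It assumes adb is on your PATH; the device serial and package name are taken from the question and are only placeholders.
import subprocess


def adb(serial, *args):
    """Run an adb command against a specific device and return its output."""
    cmd = ["adb", "-s", serial] + list(args)
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout


# Example cleanup calls (serial and package name are placeholders):
# adb("RF8MA2GW1ZF", "shell", "pm", "clear", "com.tg17.ud.internal")      # wipe app data
# adb("RF8MA2GW1ZF", "shell", "am", "force-stop", "com.tg17.ud.internal")  # stop the app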

How to init db for tortoise-orm in sanic app?

Is there a way to register databases in tortoise-orm from my Sanic app other than calling Tortoise.init?
from tortoise import Tortoise

await Tortoise.init(
    db_url='sqlite://db.sqlite3',
    modules={'models': ['app.models']}
)
# Generate the schema
await Tortoise.generate_schemas()
Sanic maintainer here.
Another answer offers the suggestion of using tortoise.contrib.sanic.register_tortoise, which uses before_server_start and after_server_stop listeners.
I want to add a caveat to that. If you are using Sanic in ASGI mode, then you really should be using the other listeners: after_server_start and before_server_stop.
This is because there is not really a "before" server starting or "after" server stopping when the server is outside of Sanic. Therefore, if you are implementing the suggested solution as adopted by tortoise in ASGI mode, you will receive warnings in your logs every time you spin up a server. It is still supported, but it might be an annoyance.
In that case:
@app.listener('after_server_start')
async def setup_db(app, loop):
    ...

@app.listener('before_server_stop')
async def close_db(app, loop):
    ...
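Putting this together with the Tortoise.init example above, a minimal ASGI-friendly sketch might look like this (the model module path is an assumption):
from sanic import Sanic
from tortoise import Tortoise

app = Sanic(__name__)


@app.listener('after_server_start')
async def setup_db(app, loop):
    # Initialise Tortoise once the (external) server has started
    await Tortoise.init(
        db_url='sqlite://db.sqlite3',
        modules={'models': ['app.models']}  # assumed module path
    )
    await Tortoise.generate_schemas()


@app.listener('before_server_stop')
async def close_db(app, loop):
    # Release the connections before the server goes away
    await Tortoise.close_connections()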
Yes, you can use register_tortoise, available from tortoise.contrib.sanic.
It registers before_server_start and after_server_stop hooks to set up and tear down Tortoise ORM inside a Sanic webserver. Check out this Sanic integration example from Tortoise ORM.
You can use it like this:
from sanic import Sanic, response
from models import Users
from tortoise.contrib.sanic import register_tortoise

app = Sanic(__name__)


@app.route("/")
async def list_all(request):
    users = await Users.all()
    return response.json({"users": [str(user) for user in users]})


register_tortoise(
    app, db_url="sqlite://:memory:", modules={"models": ["models"]}, generate_schemas=True
)

if __name__ == "__main__":
    app.run(port=5000)
models.py
from tortoise import Model, fields


class Users(Model):
    id = fields.IntField(pk=True)
    name = fields.CharField(50)

    def __str__(self):
        return f"User {self.id}: {self.name}"

Is it possible to extend Celery, so results would be store to several MongoDB collections?

I've started a new project and I want to make Celery save results to several MongoDB collections instead of one. Is there a way to do that through configs or do I need to extend Celery and Kombu to achieve that?
You don't need to modify Celery, you can extend it. That's exactly what I did for one internal project. I didn't want to touch the standard results backend (Redis in my case), but wanted to also store the tasks' state and results in MongoDB for good while enhancing the state/results at the same time.
I ended up creating a little library with a class called TaskTracker that uses Celery's signals machinery to achieve the goal. The key parts of the implementation look like this:
import datetime

from celery import signals, states
from celery.exceptions import ImproperlyConfigured
from pymongo import MongoClient, ReturnDocument


class TaskTracker(object):
    """Track task processing and store the state in MongoDB."""

    def __init__(self, app):
        self.config = app.conf.get('task_tracker')
        if not self.config:
            raise ImproperlyConfigured('Task tracker configuration missing')

        self.tasks = set()
        self._mongo = None

        self._connect_signals()

    @property
    def mongo(self):
        # create client on first use to avoid 'MongoClient opened before fork.'
        # warning
        if not self._mongo:
            self._mongo = self._connect_to_mongodb()
        return self._mongo

    def _connect_to_mongodb(self):
        client = MongoClient(self.config['mongodb']['uri'])
        # check connection / error handling
        # ...
        return client

    def _connect_signals(self):
        signals.task_received.connect(self._on_task_received)
        signals.task_prerun.connect(self._on_task_prerun)
        signals.task_retry.connect(self._on_task_retry)
        signals.task_revoked.connect(self._on_task_revoked)
        signals.task_success.connect(self._on_task_success)
        signals.task_failure.connect(self._on_task_failure)

    def _on_task_received(self, sender, request, **other_kwargs):
        if request.name not in self.tasks:
            return

        collection = self.mongo \
            .get_database(self.config['mongodb']['database']) \
            .get_collection(self.config['mongodb']['collection'])
        collection.find_one_and_update(
            {'_id': request.id},
            {
                '$setOnInsert': {
                    'name': request.name,
                    'args': request.args,
                    'kwargs': request.kwargs,
                    'date_received': datetime.datetime.utcnow(),
                    'job_id': request.message.headers.get('job_id')
                },
                '$set': {
                    'status': states.RECEIVED,
                    'root_id': request.root_id,
                    'parent_id': request.parent_id
                },
                '$push': {
                    'status_history': {
                        'date': datetime.datetime.utcnow(),
                        'status': states.RECEIVED
                    }
                }
            },
            upsert=True,
            return_document=ReturnDocument.AFTER)

    # similarly for other signals...
    def _on_task_prerun(self, sender, task_id, task, args, kwargs,
                        **other_kwargs):
        # ...
        pass

    def _on_task_retry(self, sender, request, reason, einfo, **other_kwargs):
        # ...
        pass

    # ...

    def track(self, task):
        """Set up tracking for given task."""
        # accept either task name or task instance (for use as a decorator)
        if isinstance(task, str):
            self.tasks.add(task)
        else:
            self.tasks.add(task.name)
        return task
Then you need to provide the configuration for MongoDB. I use a YAML configuration file for Celery, so it looks like this:
# standard Celery settings...
# ...

task_tracker:
  # MongoDB database for storing task state and results
  mongodb:
    uri: "\
      mongodb://myuser:mypassword@\
      mymongo.mydomain.com:27017/?\
      replicaSet=myreplica&tls=true&connectTimeoutMS=5000&\
      w=1&wtimeoutMS=3000&readPreference=primaryPreferred&maxStalenessSeconds=-1&\
      authSource=mydatabase&authMechanism=SCRAM-SHA-1"
    database: 'mydatabase'
    collection: 'tasks'
In your tasks module, you just create the class instance providing your Celery app and decorate your tasks:
import os

from celery import Celery
import yaml

from celery_common.tracking import TaskTracker  # my custom utils library

config_file = os.environ.get('CONFIG_FILE', default='/srv/celery/config.yaml')
with open(config_file) as f:
    config = yaml.safe_load(f) or {}

app = Celery(__name__)
app.conf.update(config)

tracker = TaskTracker(app)


@tracker.track
@app.task(name='mytask')
def mytask(myparam1, myparam2, *args, **kwargs):
    pass
Now your tasks' state and results are going to be tracked in MongoDB, separate from the standard results backend. If you need to store them in multiple databases or collections, you can adjust it a bit: create multiple TaskTracker instances and apply multiple decorators to your tasks.
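For example, a minimal sketch of that multi-collection variant, assuming TaskTracker is adjusted to accept the name of its config section (the config_key parameter and section names are hypothetical):
# Hypothetical adjustment: TaskTracker(app, config_key=...) reads its MongoDB
# settings from the named section instead of the fixed 'task_tracker' key.
tracker_tasks = TaskTracker(app, config_key='task_tracker_tasks')
tracker_audit = TaskTracker(app, config_key='task_tracker_audit')


@tracker_tasks.track
@tracker_audit.track
@app.task(name='mytask')
def mytask(myparam1, myparam2, *args, **kwargs):
    pass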
Celery is licensed under The BSD License. The source code is on https://github.com/celery/celery
Is it possible to extend Celery?
Yes, of course. This is part of the freedom granted by open-source licenses.
"I was wondering whether I can make Celery save results to several MongoDB collections instead of one?"
So you download the source code and take the necessary time and effort to study it and modify it.
Read about forking in software development. Consider proposing your code improvements upstream (on GitHub, with a pull request).

Accessing cache.dat through ODBC

Ok, so I am trying to extract the information from a cache.dat database sent from another business. I am trying to get at the data using ODBC. I am able to see the globals from the samples namespace when trying to export to Access, but I can't get the data from this new database to show up.
I've tried to tackle this problem two ways. First, I simply shut down Cache, replaced the existing database in InterSystems\TryCache\mgr\samples, and restarted Cache. Once I restarted, I could see all the globals in the Management Portal from the new database. If I test the ODBC connection from the Windows ODBC administrator, it connects. However, when I try to pull the data into an Access database using ODBC, there are no tables showing up to import.
I've also tried to add the database to my Cache but it gave me the error:
ERROR #5805: ID key not unique for extent 'Config.Databases'
I tried to fool around with the values in there, but to no avail. This is my first time messing with anything like this, and any, ANY help would be awesome.
If you access the Management Portal, do you see any table definitions defined for your namespace? If not, the application was written in Caché ObjectScript with no classes created to provide object/SQL access. If this is the case, then it could be a fair amount of work to create the classes that describe the data (global structures).
Matt,
Did the business that provided the CACHE.DAT file indicate that you should have ODBC access to the data?
Did they provide some document describing the data/globals? If they provided a document that describes the globals, you could create the classes that map the data. Depending on what you want to do, this could either be a resource-intensive process or not.
If you want to directly access globals you can create a stored procedure that will do so. You should consider the security implications before you do this - it will expose all data in the global to anyone with ODBC access.
Here is an example of a stored procedure that returns the values of up to 9 global subscripts, plus the value at that node. You can modify it pretty easily if you need to.
Query OneGlobal(GlobalName As %String) As %Query(ROWSPEC = "NodeValue:%String,Sub1:%String,Sub2:%String,Sub3:%String,Sub4:%String,Sub5:%String,Sub6:%String,Sub7:%String,Sub8:%String,Sub9:%String") [SqlProc]
{
}

ClassMethod OneGlobalExecute(ByRef qHandle As %Binary, GlobalName As %String) As %Status
{
    S qHandle="^"_GlobalName
    Quit $$$OK
}

ClassMethod OneGlobalClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = OneGlobalExecute ]
{
    Quit $$$OK
}

ClassMethod OneGlobalFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = OneGlobalExecute ]
{
    S Q=qHandle
    S Q=$Q(@Q)
    I Q="" S Row="",AtEnd=1 Q $$$OK
    S Depth=$QL(Q)
    S $LI(Row,1)=$G(@Q)
    F I=1:1:Depth S $LI(Row,I+1)=$QS(Q,I)
    F I=Depth+1:1:9 S $LI(Row,I+1)=""
    S AtEnd=0
    S qHandle=Q
    Quit $$$OK
}
I don't have code for you to get this from Access, but for reference, to access this from Python you might use (with pyodbc):
import pyodbc
import win32com.client
import urllib2


class CacheOdbcClient:
    connectionString = "DSN=MYCACHEDSN"

    def __init__(self):
        pass

    def getGlobalAsOverlyLargeList(self):
        connection = pyodbc.connect(self.connectionString)
        cursor = connection.cursor()
        cursor.execute("call MyPackageName.MyClassName_OneGlobal ?", "MYGLOBAL")
        list = []
        for row in cursor:
            list.append((row.NodeValue, row.Sub1, row.Sub2, row.Sub3, row.Sub4,
                         row.Sub5, row.Sub6, row.Sub7, row.Sub8, row.Sub9))
        return list
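Usage would then be along these lines, assuming the DSN named in connectionString is configured and the stored procedure above is compiled under the (placeholder) package/class names shown:
client = CacheOdbcClient()
rows = client.getGlobalAsOverlyLargeList()
# Each row is (node value, subscript 1..9); print a few to check the mapping
for row in rows[:5]:
    print(row)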

Pylons with Mongodb

According to the pymongo documentation,
PyMongo is thread-safe and even provides built-in connection pooling for threaded applications.
I normally initiate my mongodb connection like this:
import pymongo
db = pymongo.Connection()['mydb']
and then I can just use it like db.users.find({'name':..})...
Does this mean that I can actually place those two lines in lib/app_globals.py like this:
class Globals(object):
    def __init__(self, config):
        self.cache = CacheManager(**parse_cache_config_options(config))
        import pymongo
        self.db_conn = pymongo.Connection()
        self.db = self.db_conn['simplesite']
and then in my base controller:
class BaseController(WSGIController):
    def __call__(self, environ, start_response):
        """Invoke the Controller"""
        # WSGIController.__call__ dispatches to the Controller method
        # the request is routed to. This routing information is
        # available in environ['pylons.routes_dict']
        ret = WSGIController.__call__(self, environ, start_response)
        # Don't forget to release the thread for mongodb
        app_globals.db_conn.end_request()
        return ret
And then start using the app_globals db variable throughout my controllers?
I hope it is really that easy.
Ben Bangert, one of the authors of Pylons, has written his blog engine with MongoDB.
You can browse its source code online.