How to init db for tortoise-orm in sanic app? - python-3.7

Is there a way to register databases in tortoise-orm from my Sanic app other than calling Tortoise.init?
from tortoise import Tortoise

await Tortoise.init(
    db_url='sqlite://db.sqlite3',
    modules={'models': ['app.models']}
)
# Generate the schema
await Tortoise.generate_schemas()

Sanic maintainer here.
Another answer suggests using tortoise.contrib.sanic.register_tortoise, which uses before_server_start and after_server_stop listeners.
I want to add a caveat to that. If you are running Sanic in ASGI mode, then you really should be using the other listeners: after_server_start and before_server_stop.
This is because there is not really a "before" the server starts or an "after" it stops when the server lives outside of Sanic. Therefore, if you implement the suggested solution as adopted by Tortoise while in ASGI mode, you will receive warnings in your logs every time you spin up a server. It is still supported, but it might be an annoyance.
In that case:
@app.listener('after_server_start')
async def setup_db(app, loop):
    ...  # e.g. await Tortoise.init(...)

@app.listener('before_server_stop')
async def close_db(app, loop):
    ...  # e.g. await Tortoise.close_connections()

Yes, you can use register_tortoise, available from tortoise.contrib.sanic.
It registers before_server_start and after_server_stop hooks to set up and tear down Tortoise ORM inside a Sanic webserver. Check out this Sanic integration example from Tortoise ORM.
You can use it like this:
from sanic import Sanic, response
from models import Users
from tortoise.contrib.sanic import register_tortoise

app = Sanic(__name__)

@app.route("/")
async def list_all(request):
    users = await Users.all()
    return response.json({"users": [str(user) for user in users]})

register_tortoise(
    app, db_url="sqlite://:memory:", modules={"models": ["models"]}, generate_schemas=True
)

if __name__ == "__main__":
    app.run(port=5000)
models.py
from tortoise import Model, fields

class Users(Model):
    id = fields.IntField(pk=True)
    name = fields.CharField(max_length=50)

    def __str__(self):
        return f"User {self.id}: {self.name}"

Related

ipywidget button with on_click utilizing async functions of fabric-sdk-py. Example to make it work?

Currently I'm implementing an interface to Hyperledger Fabric using Jupyter and fabric-sdk-py. I want to submit and/or evaluate transactions using an ipywidgets Button. A simple example:
def _check_id(b):
    x = await contract.evaluate_transaction('queryUnprotectedEntry', [_id_info.value], user)

_check_id_button.on_click(_check_id)
Of course this doesn't work, since await is called outside of an async function. If I use async def _check_id(b), I get the warning that coroutine '_check_id' was never awaited.
If I use asyncio.run or asyncio.get_event_loop().run_until_complete, it fails as well, with RuntimeError: This event loop is already running.
What would be the way for a button to execute an async function using on_click?
I'm not sure this is the best solution, but it works. At the beginning of the module I used:
import nest_asyncio
nest_asyncio.apply()
Afterwards I'm able to use the following:
import asyncio

def _check_id(b):
    loop = asyncio.get_event_loop()
    result = loop.run_until_complete(
        contract.evaluate_transaction('queryUnprotectedEntry', [_id_info.value], user)
    )

_check_id_button.on_click(_check_id)
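As a side note, since Jupyter's kernel already runs an event loop, an alternative sketch (reusing the contract, user, and widget names from the question) is to schedule the coroutine on that running loop instead of blocking on it, which avoids nest_asyncio entirely:

import asyncio

def _check_id(b):
    # Schedule the coroutine on the already-running Jupyter loop; the
    # result arrives in the callback instead of being returned here.
    task = asyncio.ensure_future(
        contract.evaluate_transaction('queryUnprotectedEntry', [_id_info.value], user)
    )
    task.add_done_callback(lambda t: print(t.result()))

_check_id_button.on_click(_check_id)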

How can I use FastAPI Routers with FastAPI-Users and MongoDB?

I can use MongoDB with FastAPI either
with a global client: motor.motor_asyncio.AsyncIOMotorClient object, or else
by creating one during the startup event per this SO answer which refers to this "Real World Example".
However, I also want to use fastapi-users since it works nicely with MongoDB out of the box. The downside is it seems to only work with the first method of handling my DB client connection (ie global). The reason is that in order to configure fastapi-users, I have to have an active MongoDB client connection just so I can make the db object as shown below, and I need that db to then make the MongoDBUserDatabase object required by fastapi-users:
# main.py
app = FastAPI()

# Create global MongoDB connection
DATABASE_URL = "mongodb://user:password@localhost/auth_db"
client = motor.motor_asyncio.AsyncIOMotorClient(DATABASE_URL, uuidRepresentation="standard")
db = client["my_db"]

# Set up fastapi_users
user_db = MongoDBUserDatabase(UserDB, db["users"])
cookie_authentication = CookieAuthentication(secret='lame secret', lifetime_seconds=3600, name='cookiemonster')
fastapi_users = FastAPIUsers(
    user_db,
    [cookie_authentication],
    User,
    UserCreate,
    UserUpdate,
    UserDB,
)
After that point in the code, I can import the fastapi_users Routers. However, if I want to break up my project into FastAPI Routers of my own, I'm hosed because:
If I move the client creation to another module to be imported into both my app and my routers, then I have different clients in different event loops and get errors like RuntimeError: Task <Task pending name='Task-4' coro=<RequestResponseCycle.run_asgi() running at /usr/local/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py:389> cb=[set.discard()]> got Future <Future pending cb=[_chain_future.<locals>._call_check_cancel() at /usr/local/lib/python3.8/asyncio/futures.py:360]> attached to a different loop (touched on in this SO question)
If I use the solutions of the "Real World Example", then I get stuck on where to build my fastapi_users object: I can't do it in main.py because there's no db object yet.
I considered making the MongoDBUserDatabase object part of the startup event code (ie within async def connect_to_mongo() from the Real World Example), but I couldn't get that to work either.
How can I either
make a global MongoDB client and FastAPI-User object in a way that can be shared among my main app and several routers without "attached to a different loop" errors, or
create fancy wrapper classes and functions to set up FastAPI users with the startup trigger?
I have run into the exact same dilemma; it almost seems like a design flaw. I don't think my solution is complete or correct, but I figured I'd post it in case it inspires any ideas, because I'm stumped.
I followed this MongoDB full example and named it main.py
At this point my app does not work. The server starts up, but any request that queries the DB results in the aforementioned "attached to a different loop" error.
Looking for guidance, I stumbled upon the same "real world" example.
In main.py I added the startup and shutdown event handlers:
# Event handlers
app.add_event_handler("startup", create_start_app_handler(app=app))
app.add_event_handler("shutdown", create_stop_app_handler(app=app))
In dlw_api/db/events.py I have this:
import logging

from dlw_api.user import UserDB
from fastapi import FastAPI
from fastapi_users.db.mongodb import MongoDBUserDatabase
from motor.motor_asyncio import AsyncIOMotorClient

LOG = logging.getLogger(__name__)

DB_NAME = "dlwLocal"
USERS_COLLECTION = "users"
DATABASE_URI = "mongodb://dlw-mongodb:27017"  # protocol://container_name:port

_client: AsyncIOMotorClient = None
_users_db: MongoDBUserDatabase = None

def get_users_db() -> MongoDBUserDatabase:
    return _users_db

async def connect_to_db(app: FastAPI) -> None:
    # assign the client to the module-level _client so that
    # close_db_connection() can actually close it later
    global _client, _users_db
    _client = AsyncIOMotorClient(DATABASE_URI)
    db = _client[DB_NAME]
    collection = db[USERS_COLLECTION]
    _users_db = MongoDBUserDatabase(UserDB, collection)
    LOG.info(f"Connected to {DATABASE_URI}")

async def close_db_connection(app: FastAPI) -> None:
    _client.close()
    LOG.info("Connection closed")
And dlw_api/events.py:
from typing import Callable

from fastapi import FastAPI
from fastapi_users.authentication import CookieAuthentication

from dlw_api.db.events import close_db_connection, connect_to_db, get_users_db
from dlw_api.user import configure_user_auth_routes

COOKIE_SECRET = "THIS_NEEDS_TO_BE_SET_CORRECTLY"  # TODO: <--|
COOKIE_LIFETIME_SECONDS: int = 3_600
COOKIE_NAME = "c-is-for-cookie"

# Auth stuff:
_cookie_authentication = CookieAuthentication(
    secret=COOKIE_SECRET,
    lifetime_seconds=COOKIE_LIFETIME_SECONDS,
    name=COOKIE_NAME,
)

auth_backends = [
    _cookie_authentication,
]

def create_start_app_handler(app: FastAPI) -> Callable:
    async def start_app() -> None:
        await connect_to_db(app)
        configure_user_auth_routes(
            app=app,
            auth_backends=auth_backends,
            user_db=get_users_db(),
            secret=COOKIE_SECRET,
        )
    return start_app

def create_stop_app_handler(app: FastAPI) -> Callable:
    async def stop_app() -> None:
        await close_db_connection(app)
    return stop_app
This doesn't feel correct to me: does this mean all routes that use Depends for user auth have to be registered inside the server's startup event handler?
The author of fastapi-users (frankie567) created a repl.it showing a solution of sorts. My discussion about this solution may provide more context, but the key parts of the solution are:
1. Don't bother using FastAPI's startup trigger along with Depends for your MongoDB connectivity management. Instead, create a separate file (ie db.py) that creates your DB connection and client object. Import this db object wherever it's needed, such as in your Routers, and use it as a global.
2. Also create a separate users.py to do 2 things:
Create the globally used fastapi_users = FastAPIUsers(...) object for other Routers to use for authorization.
Create a fastapi.APIRouter() object and attach all the fastapi-users routers to it (router.include_router(...)).
3. In all your other Routers, import both db and fastapi_users from the above as needed.
4. Key: split your main code up into
a main.py which only imports uvicorn and serves app:app, and
an app.py which holds your main FastAPI object (ie app) and then attaches all your Routers, including the one from users.py with all the fastapi-users routers attached to it.
By splitting up the code per point 4 above, you avoid the "attached to a different loop" error. A rough sketch of this layout follows below.
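For illustration, here is a minimal sketch of that layout. The model imports, the secret, and the get_auth_router call are assumptions based on the snippets earlier on this page and the fastapi-users version they use; adjust to your setup:

# db.py -- one shared client/db object, created at import time
import motor.motor_asyncio

DATABASE_URL = "mongodb://localhost:27017"
client = motor.motor_asyncio.AsyncIOMotorClient(DATABASE_URL, uuidRepresentation="standard")
db = client["my_db"]

# users.py -- the global fastapi_users object plus one APIRouter bundling
# the fastapi-users routes
from fastapi import APIRouter
from fastapi_users import FastAPIUsers
from fastapi_users.authentication import CookieAuthentication
from fastapi_users.db.mongodb import MongoDBUserDatabase

from db import db
from models import User, UserCreate, UserUpdate, UserDB  # hypothetical models module

user_db = MongoDBUserDatabase(UserDB, db["users"])
cookie_authentication = CookieAuthentication(secret="change-me", lifetime_seconds=3600)
fastapi_users = FastAPIUsers(user_db, [cookie_authentication], User, UserCreate, UserUpdate, UserDB)

router = APIRouter()
router.include_router(fastapi_users.get_auth_router(cookie_authentication), prefix="/auth")

# app.py -- the main FastAPI object; attaches every router
from fastapi import FastAPI

from users import router as users_router

app = FastAPI()
app.include_router(users_router)

# main.py -- only imports uvicorn and serves app:app
import uvicorn

if __name__ == "__main__":
    uvicorn.run("app:app", host="0.0.0.0", port=8000)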
I faced a similar issue, and all I had to do to get motor and FastAPI running in the same loop was this:
import asyncio

from motor.motor_asyncio import AsyncIOMotorClient

client = AsyncIOMotorClient()
client.get_io_loop = asyncio.get_event_loop
I did not set up any on_startup handler or anything else.
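For what it's worth, a short sketch (module name, URL, and database name are illustrative) of how those two lines can slot into a shared db.py like the one described in the previous answer:

# db.py -- hypothetical shared module
import asyncio

from motor.motor_asyncio import AsyncIOMotorClient

client = AsyncIOMotorClient("mongodb://localhost:27017")
# Resolve the loop lazily, so the client binds to whichever event loop the
# server is actually running on rather than the one active at import time.
client.get_io_loop = asyncio.get_event_loop
db = client["my_db"]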

Is it possible to extend Celery, so results would be stored to several MongoDB collections?

I've started a new project, and I want to make Celery save results to several MongoDB collections instead of one. Is there a way to do that through configs, or do I need to extend Celery and Kombu to achieve it?
You don't need to modify Celery; you can extend it. That's exactly what I did for one internal project. I didn't want to touch the standard results backend (Redis in my case), but wanted to also store the tasks' state and results in MongoDB for good, while enhancing the state/results at the same time.
I ended up creating a little library with a class called TaskTracker that uses Celery's signals machinery to achieve the goal. The key parts of the implementation look like this:
import datetime

from celery import signals, states
from celery.exceptions import ImproperlyConfigured
from pymongo import MongoClient, ReturnDocument


class TaskTracker(object):
    """Track task processing and store the state in MongoDB."""

    def __init__(self, app):
        self.config = app.conf.get('task_tracker')
        if not self.config:
            raise ImproperlyConfigured('Task tracker configuration missing')

        self.tasks = set()
        self._mongo = None
        self._connect_signals()

    @property
    def mongo(self):
        # create client on first use to avoid 'MongoClient opened before fork.'
        # warning
        if not self._mongo:
            self._mongo = self._connect_to_mongodb()
        return self._mongo

    def _connect_to_mongodb(self):
        client = MongoClient(self.config['mongodb']['uri'])
        # check connection / error handling
        # ...
        return client

    def _connect_signals(self):
        signals.task_received.connect(self._on_task_received)
        signals.task_prerun.connect(self._on_task_prerun)
        signals.task_retry.connect(self._on_task_retry)
        signals.task_revoked.connect(self._on_task_revoked)
        signals.task_success.connect(self._on_task_success)
        signals.task_failure.connect(self._on_task_failure)

    def _on_task_received(self, sender, request, **other_kwargs):
        if request.name not in self.tasks:
            return

        collection = self.mongo \
            .get_database(self.config['mongodb']['database']) \
            .get_collection(self.config['mongodb']['collection'])
        collection.find_one_and_update(
            {'_id': request.id},
            {
                '$setOnInsert': {
                    'name': request.name,
                    'args': request.args,
                    'kwargs': request.kwargs,
                    'date_received': datetime.datetime.utcnow(),
                    'job_id': request.message.headers.get('job_id')
                },
                '$set': {
                    'status': states.RECEIVED,
                    'root_id': request.root_id,
                    'parent_id': request.parent_id
                },
                '$push': {
                    'status_history': {
                        'date': datetime.datetime.utcnow(),
                        'status': states.RECEIVED
                    }
                }
            },
            upsert=True,
            return_document=ReturnDocument.AFTER)

    # similarly for other signals...
    def _on_task_prerun(self, sender, task_id, task, args, kwargs,
                        **other_kwargs):
        ...

    def _on_task_retry(self, sender, request, reason, einfo, **other_kwargs):
        ...

    # ...

    def track(self, task):
        """Set up tracking for given task."""
        # accept either task name or task instance (for use as a decorator)
        if isinstance(task, str):
            self.tasks.add(task)
        else:
            self.tasks.add(task.name)
        return task
Then you need to provide the configuration for MongoDB. I use a YAML configuration file for Celery, so it looks like this:
# standard Celery settings...
# ...

task_tracker:
  # MongoDB database for storing task state and results
  mongodb:
    uri: "\
      mongodb://myuser:mypassword@\
      mymongo.mydomain.com:27017/?\
      replicaSet=myreplica&tls=true&connectTimeoutMS=5000&\
      w=1&wtimeoutMS=3000&readPreference=primaryPreferred&maxStalenessSeconds=-1&\
      authSource=mydatabase&authMechanism=SCRAM-SHA-1"
    database: 'mydatabase'
    collection: 'tasks'
In your tasks module, you just create the class instance, providing your Celery app, and decorate your tasks:
import os

from celery import Celery
import yaml

from celery_common.tracking import TaskTracker  # my custom utils library

config_file = os.environ.get('CONFIG_FILE', default='/srv/celery/config.yaml')

with open(config_file) as f:
    config = yaml.safe_load(f) or {}

app = Celery(__name__)
app.conf.update(config)
tracker = TaskTracker(app)


@tracker.track
@app.task(name='mytask')
def mytask(myparam1, myparam2, *args, **kwargs):
    pass
Now your tasks' state and results are tracked in MongoDB, separate from the standard results backend. If you need to store them in multiple collections, you can adjust it a bit: create multiple TaskTracker instances and apply multiple decorators to your tasks, as sketched below.
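As a rough sketch (the audit_tracker config section and the config_key parameter are my own additions for illustration, not part of the code above), the multi-collection variant could look like this:

# Hypothetical tweak: let TaskTracker read its settings from any config key
class TaskTracker(object):
    def __init__(self, app, config_key='task_tracker'):
        self.config = app.conf.get(config_key)
        if not self.config:
            raise ImproperlyConfigured('Task tracker configuration missing')
        # ... rest unchanged ...

# Two instances, each writing to its own collection (configured under
# separate keys, e.g. 'task_tracker' and 'audit_tracker'):
tracker = TaskTracker(app)
audit_tracker = TaskTracker(app, config_key='audit_tracker')

@tracker.track
@audit_tracker.track
@app.task(name='mytask')
def mytask(myparam1, myparam2, *args, **kwargs):
    pass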
Celery is licensed under The BSD License. The source code is on https://github.com/celery/celery
Is it possible to extend Celery?
Yes, of course. This is part of the freedom granted by open-source licenses.
I was wondering whether I can make Celery save results to several MongoDB collections instead of one?
So you download the source code and take the necessary time and effort to study and modify it.
Read about forking in software development. Consider proposing your code improvements upstream (on GitHub, with a pull request).

Discord Module never used?

I'm relatively confused here, and upon trying to research an answer, I'm not finding anything that makes sense to me. I have created a Discord bot with 5 cogs, and in each one I import discord, os, and from discord.ext import commands. In various other cogs I import other modules such as random, but those are the three common ones.
The problem is that in every module, import discord is grayed out (in the PyCharm IDE), suggesting it is never used. Despite this, my bot runs perfectly. I don't seem to be able to use things like wait_for(); I presume that's because it is in the discord module? Am I not setting things up correctly to use it?
I will post the initial startup module and a small snippet of another module, rather than listing every module. If you need more information, let me know.
initial startup:
import discord
import os
from discord.ext import commands

token = open("token.txt", "r").read()
client = commands.Bot(command_prefix='!')

@client.command()
async def load(ctx, extension):
    client.load_extension("cogs." + extension)

@client.command()
async def unload(ctx, extension):
    client.unload_extension("cogs." + extension)

for filename in os.listdir("./cogs"):
    if filename.endswith('.py'):
        client.load_extension("cogs." + filename[:-3])

client.run(token)
another module:
import discord
from discord.ext import commands
import os
import json
from pathlib import Path

class Sheet(commands.Cog):
    def __init__(self, client):
        self.client = client

    @commands.command()
    @commands.dm_only()
    async def viewchar(self, ctx):
        # Snipped code here to make it shorter.
        pass

    @viewchar.error
    async def stats_error(self, ctx, error):
        if isinstance(error, commands.PrivateMessageOnly):
            await ctx.send("You're an idiot, now everyone knows. Why would you want to display your character sheet "
                           "in a public room? PM me with the command.")
        else:
            raise error

def setup(client):
    client.add_cog(Sheet(client))
That just means that your code doesn't directly reference the discord module anywhere. You're getting everything through the commands module.
You can remove the import discord from your code without breaking anything, because the code that relies on it will still import and use it behind the scenes.
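As for wait_for(), it's a method on the Bot/Client instance rather than something you call from the discord module, so a cog can reach it through self.client without referencing discord directly. A rough sketch (the command, prompt, and timeout are invented for illustration):

import asyncio

from discord.ext import commands

class Example(commands.Cog):
    def __init__(self, client):
        self.client = client

    @commands.command()
    async def ask(self, ctx):
        await ctx.send("What's your character's name?")

        # Only accept a reply from the same author in the same channel.
        def check(message):
            return message.author == ctx.author and message.channel == ctx.channel

        try:
            reply = await self.client.wait_for('message', check=check, timeout=30.0)
        except asyncio.TimeoutError:
            await ctx.send("Too slow!")
        else:
            await ctx.send(f"Hello, {reply.content}!")

def setup(client):
    client.add_cog(Example(client))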

Pylons with Mongodb

According to the pymongo documentation,
PyMongo is thread-safe and even provides built-in connection pooling for threaded applications.
I normally initiate my mongodb connection like this:
import pymongo
db = pymongo.Connection()['mydb']
and then I can just use it like db.users.find({'name':..})...
Does this mean that I can actually place those two lines in lib/apps_global.py like this:
class Globals(object):
    def __init__(self, config):
        self.cache = CacheManager(**parse_cache_config_options(config))
        import pymongo
        self.db_conn = pymongo.Connection()  # note the capital C; pymongo.connection() is not callable
        self.db = self.db_conn['simplesite']
and then in my base controller:
class BaseController(WSGIController):
    def __call__(self, environ, start_response):
        """Invoke the Controller"""
        # WSGIController.__call__ dispatches to the Controller method
        # the request is routed to. This routing information is
        # available in environ['pylons.routes_dict']
        ret = WSGIController.__call__(self, environ, start_response)
        # Don't forget to release the thread for mongodb
        app_globals.db_conn.end_request()
        return ret
And start calling app_global's db variable throughout my controllers?
I hope it is really that easy.
Ben Bangert, an author of Pylons, has written his blog engine with MongoDB.
You can browse its source code online.