MongoDB with FastAPI

I am playing around with FastAPI a bit and wanted to connect it to a MongoDB database. However, I am confused about which ODM to choose between Motor, which is async, and MongoEngine. Also, in the NoSQL example here, they create a new bucket and also call the code to connect to the DB every time it is used. However, both Motor and MongoEngine seem to prefer a global connection. So what would be a good way to connect to MongoDB?

I believe you already got your answers in the issue forums of the FastAPI project on GitHub: Issue 452 (closed). But I'll recap the solutions here for future reference:
In short, you can use either Motor or MongoEngine; FastAPI supports both, and you can reuse a global client object that is started and stopped with your app process.
Some context details to (hopefully) clarify these technologies and their relationships:
The official MongoDB driver for Python is pymongo. Under the hood, both MongoEngine and Motor use pymongo. pymongo implements a direct client for the MongoDB server (the mongod daemon) and offers a Python API to make requests.
If you wanted to, you could use pymongo with FastAPI directly. (On the SQL side of things, this would be equivalent to using psycopg2 in Flask directly, without going through something like SQLAlchemy.)
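For illustration, a minimal sketch of what that could look like (the connection string, database, and collection names are placeholders, not anything prescribed by FastAPI):
from fastapi import FastAPI
from pymongo import MongoClient

app = FastAPI()
# one global client, reused across requests (pymongo manages its own connection pool)
client = MongoClient("mongodb://localhost:27017")
db = client["example_db"]

@app.get("/items/{item_id}")
def read_item(item_id: str):
    # pymongo calls are blocking; FastAPI runs `def` endpoints in a
    # thread pool, so this does not block the event loop
    return db["items"].find_one({"_id": item_id}, {"_id": 0})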
MongoEngine is an ODM (Object-Document Mapper). It offers a Python object-oriented API that you can use in your application to work more comfortably, and when it comes to the actual DB requests, MongoEngine will use pymongo.
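For example, a minimal MongoEngine sketch (the model, its fields, and the connection details are made up for illustration):
from mongoengine import Document, IntField, StringField, connect

# one global connection, established once at startup
connect("example_db", host="mongodb://localhost:27017")

class Tree(Document):
    name = StringField(required=True)
    discovery_year = IntField()

# you work with Python objects; MongoEngine translates this into pymongo requests
Tree(name="Sequoia", discovery_year=1833).save()
oldest = Tree.objects(discovery_year__lt=1900).first()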
Motor is a wrapper for pymongo that makes it non-blocking (allowing async/await). It uses an event loop, either through Tornado or through asyncio. If you are using FastAPI with Uvicorn, Uvicorn will implement the async functionality with uvloop. In short, using Motor with FastAPI, async should "just work". Unfortunately, Motor does not implement an ODM; in this sense it is more similar to pymongo.
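A minimal Motor sketch (database and collection names are placeholders); note that, with no ODM layer, you work with plain dicts, much as with pymongo:
import asyncio
from motor.motor_asyncio import AsyncIOMotorClient

async def main():
    client = AsyncIOMotorClient("mongodb://localhost:27017")
    collection = client["example_db"]["trees"]
    # the API mirrors pymongo, but every I/O call returns an awaitable
    await collection.insert_one({"name": "Sequoia"})
    print(await collection.find_one({"name": "Sequoia"}))

asyncio.run(main())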
FastAPI handles the requests from clients (using Starlette), but it will let you implement your own connection to MongoDB. So you are not restricted to any particular choice, but you are mostly on your own (à la Flask).
You can use the startup/shutdown hooks of your FastAPI app to start/stop your Motor/MongoEngine client. You don't need to worry about your client object not persisting due to multiprocess issues, because FastAPI is single-threaded:
#app.on_event("startup")
async def create_db_client():
# start client here and reuse in future requests
#app.on_event("shutdown")
async def shutdown_db_client():
# stop your client here
An example implementation of Motor with FastAPI can be found here.

I recently created an async MongoDB ODM well suited for FastAPI: ODMantic.
from typing import List

from fastapi import FastAPI
from odmantic import AIOEngine, Model

app = FastAPI()
engine = AIOEngine()

class Tree(Model):
    """This model can be used either as a Pydantic model or
    saved to the database"""
    name: str
    average_size: float
    discovery_year: int

@app.get("/trees/", response_model=List[Tree])
async def get_trees():
    trees = await engine.find(Tree)
    return trees

@app.put("/trees/", response_model=Tree)
async def create_tree(tree: Tree):
    await engine.save(tree)
    return tree
You can have a look at the FastAPI example for a more detailed one.

I disagree with the accepted answer saying FastAPI is single-threaded.
As far as I can tell, FastAPI is only single-threaded if you handle requests using async functions. If you don't use async, then under the hood Starlette will use multiple threads from a thread pool to handle multiple requests.
For more information on the threading model and async in FastAPI/Starlette, this is a good intro: https://fastapi.tiangolo.com/async/
https://stackoverflow.com/questions/70446584/how-does-fastapi-uvicorn-parallelize-requests
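A small sketch of the difference (the endpoints are made up): the blocking def endpoint is dispatched to the thread pool, while the async def endpoint runs on the event loop itself:
import asyncio
import time
from fastapi import FastAPI

app = FastAPI()

@app.get("/sync")
def sync_endpoint():
    # runs in a worker thread; this blocking sleep
    # does not stall other requests
    time.sleep(1)
    return {"handled_by": "thread pool"}

@app.get("/async")
async def async_endpoint():
    # runs on the event loop; a blocking call here
    # (e.g. time.sleep) would stall every request
    await asyncio.sleep(1)
    return {"handled_by": "event loop"}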

Related

Where and how to write pre/post query actions in RedwoodJS?

I'm in my first days of learning RedwoodJS, having just jumped in from Django.
As far as I know, a Redwood web app flows like this:
the web client calls the API (Apollo GraphQL),
and the API side is a Prisma client.
In Django, we can write a 'signal' that is called pre/post query,
i.e. pre-delete, post-delete, pre-add, post-add, etc.
Signals are 'attached' in models.py.
My question: is there any doc that elaborates on how and where to write such a 'signal' in RedwoodJS?
It looks like Prisma has a 'middleware' approach for this, but I don't know where and how to do it in RedwoodJS.
Sincerely
-bino-
A Redwood API directory includes Services, used by your GraphQL API or any other place in your backend code. A Service function typically imports the db object, which is the Prisma Client. From there, you can use Prisma middleware. Here are a couple of related links:
Redwood Services:
https://redwoodjs.com/docs/services
Prisma Middleware:
https://www.prisma.io/docs/concepts/components/prisma-client/middleware

Can I assume that all databases will return a promise?

I am designing a set of unit and integration tests with a friend of mine, and we had a doubt. We know the answer, or at least what is more likely to be true, but we would like to hear your thoughts.
We are designing a test for MongoDB, and we expect to receive a promise after asking to save a document. So far so good.
What if we change the database? Can we assume for sure that all databases, when queried, will return a promise?
I guess regarding the _id it depends from database to database; we are using the _id from MongoDB for testing reasons.
We are using the following mock in Jest:
// this method does exist on the service; however, at the moment of testing it is empty, just a placeholder
create: jest.fn().mockImplementation((cat: CreateCatDto) =>
  Promise.resolve({ _id: 'a uuid', ...cat })
),
The idea is to design backends that do not depend on the database, but for testing and development reasons we are using MongoDB and PostgreSQL.
Keep in mind the term promise doesn't exist as a concept for all databases, so it is not possible to give a conclusive answer to your question for all databases.
That being said, if by promise you mean the primary key or identity (in general database terms) returned after inserting new data, then the answer is no, there is no guarantee, not even in PostgreSQL, that you'll be able to get that. Tables, even in PostgreSQL, can exist without those constraints.
Otherwise, if by promise you mean specifically the concept in a procedural or functional language like JavaScript (as the example code in your update indicates), then yes, you should always receive a promise object if your application code is using asynchronous calls appropriately.
But that holds regardless of what the asynchronous call was to, whether a database directly (and regardless of which database system), an API endpoint, or another piece of application code. Also, in that case, your question (or any follow-up questions) would be better suited for StackOverflow.com.
Can I assume that all databases return a promise?
No. Most if not all database wire protocols are synchronous, meaning the client blocks until it gets a response. Even the databases that expose some sort of RESTful API are synchronous, because HTTP is.
Some client-side drivers may wrap this synchronous logic and exhibit asynchronous behaviour by returning something like JavaScript Promises or Java Futures, but it is entirely up to the driver implementation you choose to use.
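As an illustration in Python (database and collection names are placeholders): pymongo exposes the blocking protocol directly, while Motor wraps the same operation and returns an awaitable instead.
import asyncio
from motor.motor_asyncio import AsyncIOMotorClient
from pymongo import MongoClient

# pymongo: the call blocks until the server responds
doc = MongoClient()["example_db"]["items"].find_one({})

# Motor: the same query, but the driver hands back an awaitable
async def fetch():
    return await AsyncIOMotorClient()["example_db"]["items"].find_one({})

doc_async = asyncio.run(fetch())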

Vert.x to MongoDB connections

I'm working on a Java/Vert.x project where the backend is MongoDB (I worked with Elixir/Erlang for some time, and I'm quite new to Vert.x, but I believe it's the best fit). Basically, I have an HTTP API handled by some HttpServerVerticles which need to store data to (or retrieve data from) the Mongo DB and to send the appropriate reply to the API caller. I'm looking for the right pattern to implement the queries and the handling of the replies.
From the official guide and some tutorials, I see that for a relational JDBC database, it is necessary to define a dedicated verticle that will handle queries asynchronously. This was my first try with the Mongo client, but it introduces a lot of boilerplate.
On the other hand, the Mongo client documentation says that it is "completely non-blocking" and that it has its own connection pool. Does that mean that we can safely (from the Vert.x event loop point of view) define and use the Mongo client directly in the HTTP verticle?
Is there any alternative pattern?
Versions: Vert.x 3.5.4 / MongoDB 4.0.3
It's like this: the Mongo connection pool, exactly like an SQL DB pool, is synchronous and blocking in its nature, but it is wrapped with a non-blocking Vert.x API.
So, instead of the normal blocking way of
JsonObject obj = mongo.get( someQuery )
you get a non-blocking call out of the box:
mongo.findOne( 'collectionName', someQuery ){ AsyncResult<JsonObject> res ->
    JsonObject obj = res.result()
    doStuff( obj )
}
That means that you can safely use it directly on the event loop in any type of verticle, without reinventing the asynchronous wheel over and over again.
At our client we use mongodb-driver-rx. Vert.x has support for Rx (vertx-rx-java), and it fits pretty well with mongodb-driver-rx.
For more information see:
https://mongodb.github.io/mongo-java-driver-rx/
https://vertx.io/docs/vertx-rx/java/
https://github.com/vert-x3/vertx-examples/blob/master/rxjava-2-examples/src/main/java/io/vertx/example/reactivex/database/mongo/Client.java

How to specify read preference in Meteor's mongo queries

In Meteor's Mongo, how do you set the readPref to primary|secondary in a Meteor Mongo query?
I hope the following provides a better understanding of the relationship between Meteor and Mongo.
Meteor collections for more comfort
Meteor provides you with the full Mongo functionality. However, for comfort, it provides a wrapped API of a Mongo collection that integrates best with the Meteor environment. So if you import Mongo via
import { Mongo } from 'meteor/mongo'
you primarily import the wrapped Mongo collection, where operations are executed in a Meteor fiber. The cursors returned by queries on these wrapped collections are also not the "natural" cursors but wrapped cursors, optimized for Meteor.
If you try to access a native feature on these instances that is not implemented, you will receive an error. In your case:
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
import { Random } from 'meteor/random';

const ExampleCollection = new Mongo.Collection('examples')

Meteor.startup(() => {
  // code to run on server at startup
  ExampleCollection.insert({ value: Random.id() })
  const docsCursor = ExampleCollection.find();
  docsCursor.readPref('primary')
});
Leads to
TypeError: docsCursor.readPref is not a function
Accessing the node mongo driver collections
The good news is, you can access the layer underneath via Collection.rawCollection(), where you have full access to the node Mongo driver. This is because, under the hood, Meteor's Mongo.Collection and its Cursor make use of this native driver.
Now you will find two other issues:
readPref is named cursor.setReadPreference on a node-mongo cursor (3.1 API).
Cursor.fetch does not exist but is named cursor.toArray, which (as many native operations do) returns a Promise.
So, to finally answer your question, you can do the following:
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
import { Random } from 'meteor/random';

const ExampleCollection = new Mongo.Collection('examples')

Meteor.startup(() => {
  // code to run on server at startup
  ExampleCollection.insert({ value: Random.id() })
  const docsCursor = ExampleCollection.rawCollection().find();
  docsCursor.setReadPreference('primary')
  docsCursor.toArray().then((docs) => {
    console.log(docs)
  }).catch((err) => console.error(err))
});
Summary
By using collection.rawCollection() you have access to the full spectrum of the node Mongo driver API.
You are on your own to integrate the operations, cursors, and results (Promises) into your environment. Good helpers are Meteor.bindEnvironment and Meteor.wrapAsync.
Beware of API changes in the node-mongo driver: on the one hand, the Mongo version that is supported by the driver; on the other hand, the driver version that is supported by Meteor.
Note that it is easier to "mess up" things with the native API, but it also gives you a lot of new options. Use with care.

Better solution than TimerTask with Scala in Play Framework

I need to regularly check the database for updated records. I currently use TimerTask, which works fine. However, I've found its efficiency is not good, and it consumes a lot of server resources. Is there a solution that fulfills my requirement but is better?
def checknewmessages() = Action { request =>
  TimerTask(5000) {
    // code to check database
  }
}
I can think of two solutions:
You can use the ReactiveMongo driver for Play, which is completely non-blocking and async, together with a capped collection in MongoDB.
Please see these for an example:
https://github.com/sgodbillon/reactivemongo-tailablecursor-demo
How to listen for changes to a MongoDB collection?
If you are using a database that doesn't support a push mechanism, you can implement that with an Actor that schedules messages to itself at regular intervals.
If your logic is in your database (stored procedures, etc.), you could simply create a cron job.
You could also create a command-line script that encapsulates the logic, and schedule it (cron again).
If you have your logic in your web application, you could again create a cron job that simply makes an API call to your app.