How to create an async Postgres mock in Python with asyncpg?

We have a basic fastapi server with some http and websocket endpoints.
We're using postgres with asyncpg to do some basic CRUD operations.
One example is that there's a post endpoint that creates an item in the DB and notifies the websocket listeners:
async def notify_todo_listeners():
    todos = await db_handler.fetch_todos()
    notify_all(todos)

@app.post("/todo")
async def post_todo(request):
    todo = await db_handler.insert(request.json())
    asyncio.create_task(notify_todo_listeners())
    return todo
And we want to test that endpoint in pytest, so we create a temporary Postgres DB using Docker and patch the db_handler to use a mock connection that we create in the test environment.
That connection is set up so that its transaction is rolled back once the test is finished:
@pytest_asyncio.fixture(scope="function")
async def session(monkeypatch):
    connection = await asyncpg.connect(CONNECTION_STRING)
    transaction = connection.transaction()
    await transaction.start()

    async def mock_get_connection():
        return connection

    monkeypatch.setattr(database_handler, "get_connection", mock_get_connection)
    yield connection
    await transaction.rollback()
async def test_post_todo(session):
    async with AsyncClient(app=app, base_url="http://test") as client:
        response = await client.post("/todo", json=some_todo_object)
        assert response.status_code == 200
        # some other assertions ...
The problem is that when we test that endpoint, the part where we create a new task to notify subscribers and fetch the todo list from the DB raises this error:
exception=InterfaceError('cannot perform operation: another operation is in progress')
My understanding is that a single async connection cannot be used concurrently from different coroutines; otherwise we get this error.
The question is: how can we properly mock this database while rolling back all changes made during each test run, while accounting for the possibility of multiple coroutine tasks running?
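For illustration, one possible direction is to serialize access to the shared test connection with an asyncio.Lock, so concurrent tasks never issue overlapping queries on the same asyncpg connection. This is only a sketch: it reuses CONNECTION_STRING, database_handler and get_connection from the fixture above, and it assumes db_handler only calls fetch and execute on the connection it is given.
import asyncio
import asyncpg
import pytest_asyncio

@pytest_asyncio.fixture(scope="function")
async def session(monkeypatch):
    connection = await asyncpg.connect(CONNECTION_STRING)
    transaction = connection.transaction()
    await transaction.start()
    lock = asyncio.Lock()

    class LockedConnection:
        # Proxy that takes the lock around every query issued on the shared
        # connection, so concurrent tasks are serialized instead of clashing.
        def __init__(self, conn):
            self._conn = conn

        async def fetch(self, *args, **kwargs):
            async with lock:
                return await self._conn.fetch(*args, **kwargs)

        async def execute(self, *args, **kwargs):
            async with lock:
                return await self._conn.execute(*args, **kwargs)

    locked = LockedConnection(connection)

    async def mock_get_connection():
        return locked

    monkeypatch.setattr(database_handler, "get_connection", mock_get_connection)
    yield connection
    await transaction.rollback()
    await connection.close()
Serializing removes the concurrency, which is usually acceptable in a test; for real concurrency the handler would need a connection pool rather than a single shared connection.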

Related

Total open connections reached the connection limit

I'm running Python Flask with Waitress. I'm starting the server using the following code:
from flask import Flask, render_template, request
from waitress import serve

app = Flask(__name__)
app.static_folder = 'static'

@app.route("/get")
def first_method():
    ...

@app.route("/second")
def second_method():
    ...

serve(app, host="ip_address", port=8080)
I'm calling the server from a Webpage and also from Unity. From the webpage, I'm using the following example get request in jQuery:
$.get("/get", { variable1: data1, variable2: data2 }).done(function (data) {
...
}
In Unity I'm using the following call:
http://ip_address/get?msg=data1?data2
Unfortunately, after some time I get the error on the server: total open connections reached the connection limit, no longer accepting new connections. This happens especially with Unity. I assume that a new channel/connection is established for each GET request.
How can this be fixed, i.e. how can channels/connections be reused?
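For reference, Waitress exposes its connection handling as arguments to serve(); the sketch below only illustrates those knobs (the values are placeholders, and whether tuning them alone resolves the Unity client's behaviour is an assumption).
from flask import Flask
from waitress import serve

app = Flask(__name__)

@app.route("/get")
def first_method():
    return "ok"

if __name__ == "__main__":
    # connection_limit raises the cap that triggers the "no longer accepting
    # new connections" warning; channel_timeout closes idle channels sooner,
    # so clients that never reuse connections free their slots faster.
    serve(app, host="ip_address", port=8080,
          connection_limit=500, channel_timeout=30)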

FastAPI: Permanently running background task that listens to Postgres notifications and sends data to websocket

Minimal reproducible example:
import asyncio
import aiopg
from fastapi import FastAPI, WebSocket

dsn = "dbname=aiopg user=aiopg password=passwd host=127.0.0.1"
app = FastAPI()


class ConnectionManager:
    def __init__(self):
        self.count_connections = 0
        # other class functions and variables are taken from the FastAPI docs
        ...


manager = ConnectionManager()


async def send_and_receive_data(websocket: WebSocket):
    data = await websocket.receive_json()
    await websocket.send_text('Thanks for the message')
    # then process received data


# taken from the official aiopg documentation:
# the function listens to PostgreSQL notifications
async def listen(conn):
    async with conn.cursor() as cur:
        await cur.execute("LISTEN channel")
        while True:
            msg = await conn.notifies.get()


async def postgres_listen():
    async with aiopg.connect(dsn) as listenConn:
        listener = listen(listenConn)
        await listener


@app.get("/")
def read_root():
    return {"Hello": "World"}


@app.websocket("/")
async def websocket_endpoint(websocket: WebSocket):
    await manager.connect(websocket)
    manager.count_connections += 1
    if manager.count_connections == 1:
        await asyncio.gather(
            send_and_receive_data(websocket),
            postgres_listen()
        )
    else:
        await send_and_receive_data(websocket)
Description of the problem:
I am building an app with Vue.js, FastAPI and PostgreSQL. In this example I attempt to use Postgres listen/notify and integrate it into the websocket. I also use a lot of regular HTTP endpoints along with the websocket endpoint.
I want to run a permanent background asynchronous function at the start of the FastAPI app that will then send messages to all websocket clients/connections. So, when I use uvicorn main:app, it should not only run the FastAPI app but also my background function postgres_listen(), which notifies all websocket users when a new row is added to the table in the database.
I know that I can use asyncio.create_task() and place it in the on_* event, or even place it after the manager = ConnectionManager() line, but that will not work in my case, because after any HTTP request (for instance, to the read_root() function) I get the same error described below.
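For reference, the "place it in the on_* event" option mentioned above would look roughly like this (a sketch only; the question states that this still leads to the same error in this setup):
@app.on_event("startup")
async def start_postgres_listener():
    # Schedule the listener as a background task when the app starts.
    asyncio.create_task(postgres_listen())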
You see that I use a strange way to run my postgres_listen() function inside my websocket_endpoint() function, only when the first client connects to the websocket. Any subsequent client connection does not run/trigger this function again. And everything works fine... until the first client/user disconnects (for example, closes the browser tab). When that happens, I immediately get a GeneratorExit error caused by psycopg2.OperationalError:
Future exception was never retrieved
future: <Future finished exception=OperationalError('Connection closed')>
psycopg2.OperationalError: Connection closed
Task was destroyed but it is pending!
task: <Task pending name='Task-18' coro=<Queue.get() done, defined at
/home/user/anaconda3/lib/python3.8/asyncio/queues.py:154> wait_for=<Future cancelled>>
The error comes from the listen() function. After this error, I will not get any notifications from the database, as the asyncio Task is cancelled. There is nothing wrong with psycopg2, aiopg or asyncio. The problem is that I don't understand where to put the postgres_listen() function so that it will not be cancelled after the first client disconnects. From my understanding, I could easily write a Python script that connects to the websocket (so I am the first websocket client) and then runs forever, so I never get the psycopg2.OperationalError exception again, but that does not seem like the right way to do it.
My question is: where should I put the postgres_listen() function so that the first websocket connection can disconnect without consequences?
P.S. asyncio.shield() also does not work
I have answered this on Github as well, so I am reposting it here.
A working example can be found here:
https://github.com/JarroVGIT/fastapi-github-issues/tree/master/5015
# app.py
import queue
from typing import Any
from fastapi import FastAPI, WebSocket, WebSocketDisconnect
from asyncio import Queue, Task
import asyncio
import uvicorn
import websockets


class Listener:
    def __init__(self):
        # Every incoming websocket connection adds its own Queue to this list
        # called subscribers.
        self.subscribers: list[Queue] = []
        # This will hold an asyncio task which receives messages and broadcasts
        # them to all subscribers.
        self.listener_task: Task

    async def subscribe(self, q: Queue):
        # Every incoming websocket connection must create a Queue and subscribe
        # itself to this class instance.
        self.subscribers.append(q)

    async def start_listening(self):
        # Method that must be called on startup of the application to start the
        # listening process for external messages.
        self.listener_task = asyncio.create_task(self._listener())

    async def _listener(self) -> None:
        # The method with the infinite listener. In this example it listens to a
        # websocket, as that was the fastest way for me to mimic the 'infinite
        # generator' in issue 5015, but this can be anything. It is started
        # (via start_listening()) on startup of the app.
        async with websockets.connect("ws://localhost:8001") as websocket:
            async for message in websocket:
                for q in self.subscribers:
                    # Important here: every websocket connection has its own Queue
                    # added to the list of subscribers. Here, we actually broadcast
                    # incoming messages to all open websocket connections.
                    await q.put(message)

    async def stop_listening(self):
        # Closing off the asyncio task when stopping the app. This method is
        # called on app shutdown.
        if self.listener_task.done():
            self.listener_task.result()
        else:
            self.listener_task.cancel()

    async def receive_and_publish_message(self, msg: Any):
        # This was a method that was called when someone made a request to the
        # /add_item endpoint, as part of an earlier solution, to see if the msg
        # would be broadcast to all open websocket connections (it does).
        for q in self.subscribers:
            try:
                q.put_nowait(str(msg))
            except Exception as e:
                raise e

# Note: missing here is any disconnect logic (e.g. removing the queue from the
# list of subscribers when a websocket connection is ended or closed).


global_listener = Listener()

app = FastAPI()


@app.on_event("startup")
async def startup_event():
    await global_listener.start_listening()
    return


@app.on_event("shutdown")
async def shutdown_event():
    await global_listener.stop_listening()
    return


@app.get('/add_item/{item}')
async def add_item(item: str):
    # This was a test endpoint, to see if new items were actually broadcast to
    # all open websocket connections.
    await global_listener.receive_and_publish_message(item)
    return {"published_message:": item}


@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    q: asyncio.Queue = asyncio.Queue()
    await global_listener.subscribe(q=q)
    try:
        while True:
            data = await q.get()
            await websocket.send_text(data)
    except WebSocketDisconnect:
        return


if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
As I didn't have access to a stream of messages I could subscribe to, I created a quick script that exposes a websocket, so that the app.py above can listen to it (indefinitely) to mimic your use case.
# generator.py
from fastapi import FastAPI, WebSocket, WebSocketDisconnect
import asyncio
import uvicorn

app = FastAPI()


@app.websocket("/")
async def ws(websocket: WebSocket):
    await websocket.accept()
    i = 0
    while True:
        try:
            await websocket.send_text(f"Hello - {i}")
            await asyncio.sleep(2)
            i += 1
        except WebSocketDisconnect:
            pass


if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8001)
app.py listens to a websocket and publishes all incoming messages to every open websocket connection in app.py.
generator.py is a simple FastAPI app with a websocket (the one our example app.py above listens to) that emits a message every 2 seconds to every connection it gets.
To try this out:
Start generator.py (e.g. python3 generator.py on your command line when in your working folder)
Start app.py (either debug mode in VScode or same as above)
Listen to http://localhost:8000/ws (= the endpoint in app.py) with several clients; you will see that they all join the same message stream.
NOTE: lots of this logic was inspired by Broadcaster (a python module)
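For the original Postgres use case, the websocket-based _listener() above could be swapped for an aiopg LISTEN loop along these lines (a sketch, reusing the dsn and channel name from the question; not part of the linked repository):
import aiopg

async def _listener(self) -> None:
    # Connect once, LISTEN on the channel, and broadcast every notification
    # payload to all subscriber queues.
    async with aiopg.connect(dsn) as conn:
        async with conn.cursor() as cur:
            await cur.execute("LISTEN channel")
            while True:
                msg = await conn.notifies.get()
                for q in self.subscribers:
                    await q.put(msg.payload)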

How can I extend the default timeout for a Gherkin test step?

I'm new to Flutter's test driver. I'm trying to run the test and it fails with the errors shown in the console logs below. I think I need to extend the default timeout for some specific test steps, but I'm not sure how to do that; the documentation doesn't say much about it.
I have one more question: when my app launches, it takes a long time to render, so I want the test to wait for the app to finish launching before executing. How can I do that?
These are the console logs when my app has just launched:
This is the test failure message:
This is my own waitFor():
static Future<void> waitFor(
    FlutterDriver driver, SerializableFinder finder) async {
  try {
    await driver.waitFor(finder, timeout: timeout);
    await FlutterDriverUtils.waitForFlutter(driver, timeout: timeout);
  } catch (error) {
    throw ('Element does not exists => $error');
  }
}

NestJS Mongoose connection dies on load testing

When multiple devs use my API, multiple concurrent requests are being sent to Mongoose.
When the concurrency is high, the connection just "dies" and refuses to fulfil any new request, no matter how long I wait (hours!).
To be clear, everything works fine under regular use; it is heavy use that causes the connection to crash.
My MongooseModule initialization:
MongooseModule.forRoot(DatabasesService.MONGO_FULL_URL, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  useFindAndModify: false,
  autoEncryption: {
    keyVaultNamespace: DatabasesService.keyVaultNamespace,
    kmsProviders: DatabasesService.kmsProviders,
    extraOptions: {
      mongocryptdSpawnArgs: ['--pidfilepath', '/tmp/mongocryptd.pid']
    }
  } as any
})
Module that imports the feature:
@Module({
  imports: [MongooseModule.forFeature([{ name: 'modelName', schema: ModelNameSchema }])],
  providers: [ModelNameService],
  controllers: [...],
  exports: [...]
})
Service:
@Injectable()
export class ModelNameService {
  constructor(
    @InjectModel('modelName') private modelName: Model<IModelName>
  ) {}

  async findAll(): Promise<IModelName[]> {
    const result: IModelName[] = await this.modelName.find().exec();
    if (!result) throw new BadRequestException(`No result was found.`);
    return result;
  }
}
I've tried load testing using different utilities; the easiest was:
ab -c 200 -n 300 -H "Authorization: Bearer $TOKEN" -m GET -b 0 https://example.com/getModelName
Any new request after the connection hangs gets stuck at the first line of ModelNameService.findAll() (the request to Mongo).
In the MongoDB logs, with verbosity "-vvvvv", I can see a few suspicious lines:
User Assertion: Unauthorized: command endSessions requires authentication src/mongo/db/commands.cpp
Cancelling outstanding I/O operations on connection to 127.0.0.1:33134
And I've also found that it doesn't exceed 12 open connections at the same time. It always waits to close one before opening a new one.
Other key points:
Mongoose doesn't return any value or notify about any error. It just hangs without reporting anything.
The Terminus health check is able to ping the DB and returns a healthy status.
The NestJS API still works - I'm able to send new requests and receive responses. Only requests that involve the faulty connection hang.
When I inject the connection and check its readyState, it returns connected.
Restarting the API fixes it immediately.
MongoDB itself keeps working as normal.
Increasing the Mongoose poolSize allows more requests to be handled at the same time, but the connection still crashes under a larger number of requests.
My main question here is: how do I handle this case? Currently I've added another health check that sends a query over the problematic connection every half a minute, and k8s restarts the pod if it detects a failure. This works, but it is not optimal.

Mocking MongoDB for Testing REST API designed in Flask

I have a Flask application where the REST APIs are built using flask_restful with a MongoDB backend. I want to write functional tests using pytest and mongomock, mocking MongoDB to test the APIs, but I am not able to configure it. Can anyone guide me with an example of how to achieve this?
Here is the fixture I am using in the conftest.py file:
@pytest.fixture(scope='module')
def test_client():
    # flask_app = Flask(__name__)
    # Flask provides a way to test your application by exposing the Werkzeug test Client
    # and handling the context locals for you.
    testing_client = app.test_client()

    # Establish an application context before running the tests.
    ctx = app.app_context()
    ctx.push()

    yield testing_client  # this is where the testing happens!

    ctx.pop()


@pytest.fixture(autouse=True)
def patch_mongo():
    db = connect('testdb', host='mongomock://localhost')
    yield db
    db.drop_database('testdb')
    disconnect()
    db.close()
And here is the test function for testing a POST request that creates a user:
def test_mongo(test_client, patch_mongo):
    headers = {
        "Content-Type": "application/json",
        "Authorization": "token"
    }
    response = test_client.post('/users', headers=headers, data=json.dumps(data))
    print(response.get_json())
    assert response.status_code == 200
The issue is that instead of using testdb, pytest is creating the user in the production database. Is there something missing in the configuration?
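For what it's worth, if the app connects through mongoengine at import time (which the connect/disconnect calls above suggest), then by the time patch_mongo runs the document classes are already bound to the production connection. A sketch of a fixture that drops that connection first, so the mongomock connection becomes the default alias (this assumes mongoengine and that only the default alias is used):
import pytest
from mongoengine import connect, disconnect

@pytest.fixture(autouse=True)
def patch_mongo():
    # Drop whatever connection the app registered on import, then register
    # the in-memory mongomock connection under the default alias.
    disconnect()
    db = connect('testdb', host='mongomock://localhost')
    yield db
    db.drop_database('testdb')
    disconnect()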