Total open connections reached the connection limit - unity3d

I'm running Python Flask with Waitress. I'm starting the server using the following code:
from flask import Flask, render_template, request
from waitress import serve

app = Flask(__name__)
app.static_folder = 'static'

@app.route("/get")
def first_method():
    ...

@app.route("/second")
def second_method():
    ...

serve(app, host="ip_address", port=8080)
I'm calling the server from a webpage and also from Unity. From the webpage, I'm using the following example GET request in jQuery:
$.get("/get", { variable1: data1, variable2: data2 }).done(function (data) {
    ...
});
In Unity I'm using the following call:
http://ip_address/get?msg=data1?data2
Unfortunately, after some time I'm getting the error on the server total open connections reached the connection limit, no longer accepting new connections. This happens especially with Unity. I assume that each GET request establishes a new channel/connection.
How can this be fixed, i.e. how can channels/connections be reused?
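On the server side, Waitress exposes two knobs relevant to this situation: connection_limit (how many open sockets it accepts, default 100) and channel_timeout (how long an idle connection may stay open, default 120 seconds). A hedged sketch, since the right values depend on your traffic, and the host, port and route below are placeholders:

```python
from flask import Flask
from waitress import serve

app = Flask(__name__)

@app.route("/get")
def first_method():
    return "ok"

if __name__ == "__main__":
    # Accept more simultaneous connections and drop idle ones sooner,
    # so clients that never reuse their connection release slots faster.
    serve(app, host="0.0.0.0", port=8080,
          connection_limit=500, channel_timeout=30)
```

This only raises the ceiling and recycles idle channels faster; the cleaner fix is still to make the Unity client reuse its HTTP connection (keep-alive) instead of opening a new one per request.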

Related

Apache Spark in Azure Synapse Analytics - HTTP request in notebook

I use a Notebook in Synapse to run my Python code.
I would like to make an API request from this Notebook to Microsoft Purview to send the entities.
I added the pyapacheatlas library to the Spark pool.
On my local computer, this code works fine in Visual Studio.
I need to sign in with Microsoft, so I created the Purview client connection using a service principal. Here is the code that I am running:
from pyapacheatlas.auth import ServicePrincipalAuthentication
from pyapacheatlas.core import PurviewClient
from pyapacheatlas.core import AtlasEntity

auth = ServicePrincipalAuthentication(
    tenant_id="...",
    client_id="...",
    client_secret="..."
)

# Create a client to connect to your service.
client = PurviewClient(
    account_name="...",
    authentication=auth
)

# Get all type definitions
all_type_defs = client.get_all_typedefs()
I am getting an error after running for a long time:
"ConnectionError: HTTPSConnectionPool(host='login.microsoftonline.com', port=443): Max retries exceeded with url: /.../oauth2/token (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f00effc5250>: Failed to establish a new connection: [Errno 110] Connection timed out'))"
It turned out that I can't make any HTTP requests from the Notebook at all.
Please advise: is this simply not supported, or is there a way to solve it?
Thank you.
Even an ordinary GET like this:
import json
import requests
r = requests.get("http://echo.jsontest.com/insert-key-here/insert-value-here/key/value")
df = sqlContext.createDataFrame([json.loads(line) for line in r.iter_lines()])
As a result I get a ConnectionError:
"HTTPConnectionPool(host='echo.jsontest.com', port=80): Max retries exceeded with url: /insert-key-here/insert-value-here/key/value (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f90b0111940>: Failed to establish a new connection: [Errno 110] Connection timed out'))"
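The Errno 110 timeouts suggest that outbound traffic from the notebook is being blocked (in a Synapse workspace with a managed virtual network, outbound calls to arbitrary public endpoints are typically restricted), rather than a problem in the code itself. As a hedged sketch, configuring explicit timeouts and retries at least makes the failure surface quickly instead of hanging; the URL below is illustrative:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# Retry transient failures a few times with exponential backoff.
retry = Retry(total=3, backoff_factor=1,
              status_forcelist=[429, 500, 502, 503, 504])
adapter = HTTPAdapter(max_retries=retry)
session.mount("http://", adapter)
session.mount("https://", adapter)

# timeout=(connect, read): fail fast instead of waiting for Errno 110.
# r = session.get("https://login.microsoftonline.com", timeout=(5, 10))
```

If even this times out, the fix is on the networking side (e.g. a managed private endpoint or allowed outbound rule), not in Python.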

FastAPI: Permanently running background task that listens to Postgres notifications and sends data to websocket

Minimal reproducible example:
import asyncio
import aiopg
from fastapi import FastAPI, WebSocket

dsn = "dbname=aiopg user=aiopg password=passwd host=127.0.0.1"
app = FastAPI()

class ConnectionManager:
    def __init__(self):
        self.count_connections = 0
    # other class functions and variables are taken from the FastAPI docs
    ...

manager = ConnectionManager()

async def send_and_receive_data(websocket: WebSocket):
    data = await websocket.receive_json()
    await websocket.send_text('Thanks for the message')
    # then process the received data

# taken from the official aiopg documentation:
# the function listens for PostgreSQL notifications
async def listen(conn):
    async with conn.cursor() as cur:
        await cur.execute("LISTEN channel")
        while True:
            msg = await conn.notifies.get()

async def postgres_listen():
    async with aiopg.connect(dsn) as listenConn:
        listener = listen(listenConn)
        await listener

@app.get("/")
def read_root():
    return {"Hello": "World"}

@app.websocket("/")
async def websocket_endpoint(websocket: WebSocket):
    await manager.connect(websocket)
    manager.count_connections += 1
    if manager.count_connections == 1:
        await asyncio.gather(
            send_and_receive_data(websocket),
            postgres_listen()
        )
    else:
        await send_and_receive_data(websocket)
Description of the problem:
I am building an app with Vue.js, FastAPI and PostgreSQL. In this example I attempt to use listen/notify from Postgres and implement it in the websocket. I also use a lot of usual http endpoints along with the websocket endpoint.
I want to run a permanent background asynchronous function at the start of the FastAPI app that will then send messages to all websocket clients/connections. So, when I use uvicorn main:app it should not only run the FastAPI app but also my background function postgres_listen(), which notifies all websocket users, when a new row is added to the table in the database.
I know that I can use asyncio.create_task() and place it in an on_* startup event, or even right after the manager = ConnectionManager() line, but that will not work in my case: after any HTTP request (for instance, the read_root() function), I get the same error described below.
You see that I use a strange way to run my postgres_listen() function in my websocket_endpoint() function only when the first client connects to the websocket. Any subsequent client connection does not run/trigger this function again. And everything works fine... until the first client/user disconnects (for example, closes browser tab). When it happens, I immediately get the GeneratorExit error caused by psycopg2.OperationalError:
Future exception was never retrieved
future: <Future finished exception=OperationalError('Connection closed')>
psycopg2.OperationalError: Connection closed
Task was destroyed but it is pending!
task: <Task pending name='Task-18' coro=<Queue.get() done, defined at
/home/user/anaconda3/lib/python3.8/asyncio/queues.py:154> wait_for=<Future cancelled>>
The error comes from the listen() function. After this error, I will not get any notification from the database as the asyncio's Task is cancelled. There is nothing wrong with the psycopg2, aiopg or asyncio. The problem is that I don't understand where to put the postgres_listen() function so it will not be cancelled after the first client disconnects. From my understanding, I can easily write a python script that will connect to the websocket (so I will be the first client of the websocket) and then run forever so I will not get the psycopg2.OperationalError exception again, but it does not seem right to do so.
My question is: where should I put postgres_listen() function, so the first connection to websocket may be disconnected with no consequences?
P.S. asyncio.shield() also does not work
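The pattern of starting the listener once as an independent task, rather than inside a client's handler, can be sketched without FastAPI or Postgres; the queue below is a stand-in for real NOTIFY events, and the names are illustrative:

```python
import asyncio

received = []

async def postgres_listen(queue):
    # Stand-in for the LISTEN loop: consume notifications forever.
    while True:
        msg = await queue.get()
        received.append(msg)

async def main():
    queue = asyncio.Queue()
    # Start the listener once, independent of any client connection,
    # and keep a reference so it is neither garbage-collected nor
    # cancelled when a client disconnects.
    listener = asyncio.create_task(postgres_listen(queue))
    await queue.put("row inserted")
    await asyncio.sleep(0.01)  # let the listener run
    listener.cancel()          # clean shutdown
    return received

result = asyncio.run(main())
```

Because the listener's lifetime is tied to the application, not to the coroutine of whichever client happened to connect first, a disconnect cannot cancel it.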
I have answered this on Github as well, so I am reposting it here.
A working example can be found here:
https://github.com/JarroVGIT/fastapi-github-issues/tree/master/5015
# app.py
import queue
from typing import Any
from fastapi import FastAPI, WebSocket, WebSocketDisconnect
from asyncio import Queue, Task
import asyncio
import uvicorn
import websockets

class Listener:
    def __init__(self):
        # Every incoming websocket connection adds its own Queue to this
        # list, called subscribers.
        self.subscribers: list[Queue] = []
        # This will hold an asyncio task which receives messages and
        # broadcasts them to all subscribers.
        self.listener_task: Task

    async def subscribe(self, q: Queue):
        # Every incoming websocket connection must create a Queue and
        # subscribe itself to this class instance.
        self.subscribers.append(q)

    async def start_listening(self):
        # Must be called on startup of the application to start listening
        # for external messages.
        self.listener_task = asyncio.create_task(self._listener())

    async def _listener(self) -> None:
        # The method with the infinite listener. In this example it listens
        # to a websocket, as that was the fastest way for me to mimic the
        # 'infinite generator' in issue 5015, but this can be anything.
        # It is started (via start_listening()) on startup of the app.
        async with websockets.connect("ws://localhost:8001") as websocket:
            async for message in websocket:
                for q in self.subscribers:
                    # Important here: every websocket connection has its own
                    # Queue added to the list of subscribers. Here, we
                    # actually broadcast incoming messages to all open
                    # websocket connections.
                    await q.put(message)

    async def stop_listening(self):
        # Close off the asyncio task when stopping the app. This method is
        # called on app shutdown.
        if self.listener_task.done():
            self.listener_task.result()
        else:
            self.listener_task.cancel()

    async def receive_and_publish_message(self, msg: Any):
        # This method is called when someone makes a request to the
        # /add_item endpoint (part of an earlier solution), to check whether
        # the msg is broadcast to all open websocket connections (it is).
        for q in self.subscribers:
            try:
                q.put_nowait(str(msg))
            except Exception as e:
                raise e

# Note: missing here is any disconnect logic (e.g. removing the queue from
# the list of subscribers when a websocket connection is ended or closed).

global_listener = Listener()
app = FastAPI()

@app.on_event("startup")
async def startup_event():
    await global_listener.start_listening()
    return

@app.on_event("shutdown")
async def shutdown_event():
    await global_listener.stop_listening()
    return

@app.get('/add_item/{item}')
async def add_item(item: str):
    # This was a test endpoint, to see whether new items were actually
    # broadcast to all open websocket connections.
    await global_listener.receive_and_publish_message(item)
    return {"published_message:": item}

@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    q: asyncio.Queue = asyncio.Queue()
    await global_listener.subscribe(q=q)
    try:
        while True:
            data = await q.get()
            await websocket.send_text(data)
    except WebSocketDisconnect:
        return

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
As I didn't have access to a stream of messages I could subscribe to, I created a quick script that produces a websocket, so that the app.py above could listen to it (indefinitely) to mimic your use case.
# generator.py
from fastapi import FastAPI, WebSocket, WebSocketDisconnect
import asyncio
import uvicorn

app = FastAPI()

@app.websocket("/")
async def ws(websocket: WebSocket):
    await websocket.accept()
    i = 0
    while True:
        try:
            await websocket.send_text(f"Hello - {i}")
            await asyncio.sleep(2)
            i += 1
        except WebSocketDisconnect:
            pass

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8001)
app.py listens to that websocket and publishes every incoming message to all websocket connections made to app.py.
generator.py is a simple FastAPI app with a websocket endpoint (the one our example app.py listens to) that emits a message every 2 seconds to each connection it gets.
To try this out:
Start generator.py (e.g. python3 generator.py on your command line in your working folder)
Start app.py (either in debug mode in VS Code, or the same way as above)
Connect to ws://localhost:8000/ws (= the endpoint in app.py) with several clients; you will see that they all receive the same message stream.
NOTE: lots of this logic was inspired by Broadcaster (a python module)
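Stripped of FastAPI and websockets, the per-subscriber Queue fan-out that app.py relies on can be sketched with plain asyncio (the names here are illustrative):

```python
import asyncio

async def broadcast(subscribers, messages):
    # Publish each message to every subscriber's private queue,
    # exactly as Listener._listener() does for incoming websocket messages.
    for msg in messages:
        for q in subscribers:
            await q.put(msg)

async def main():
    subscribers = [asyncio.Queue() for _ in range(3)]
    await broadcast(subscribers, ["hello", "world"])
    # Each subscriber drains its own queue and sees the full stream.
    return [[await q.get(), await q.get()] for q in subscribers]

results = asyncio.run(main())
print(results)
```

Because each connection owns its queue, a slow or disconnected client only affects its own queue, never the listener or the other subscribers.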

NestJS Mongoose connection dies on load testing

When multiple devs use my API, multiple concurrent requests are sent to Mongoose.
When the concurrency is high, the connection just "dies" and refuses to fulfil any new request, no matter how long I wait (hours!).
I want to state that everything works fine under regular use; heavy use is what causes the connection to crash.
My MongooseModule initialization:
MongooseModule.forRoot(DatabasesService.MONGO_FULL_URL, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  useFindAndModify: false,
  autoEncryption: {
    keyVaultNamespace: DatabasesService.keyVaultNamespace,
    kmsProviders: DatabasesService.kmsProviders,
    extraOptions: {
      mongocryptdSpawnArgs: ['--pidfilepath', '/tmp/mongocryptd.pid']
    }
  } as any
})
Module that imports the feature:
@Module({
  imports: [MongooseModule.forFeature([{ name: 'modelName', schema: ModelNameSchema }])],
  providers: [ModelNameService],
  controllers: [...],
  exports: [...]
})
Service:
@Injectable()
export class ModelNameService {
  constructor(
    @InjectModel('modelName') private modelName: Model<IModelName>
  ) {}

  async findAll(): Promise<IModelName[]> {
    const result: IModelName[] = await this.modelName.find().exec();
    if (!result) throw new BadRequestException(`No result was found.`);
    return result;
  }
}
I've tried loadtesting using different utils, the easiest was:
ab -c 200 -n 300 -H "Authorization: Bearer $TOKEN" -m GET -b 0 https://example.com/getModelName
Any new request after the connection hangs gets stuck at ModelNameService.findAll() first line (the request to mongo).
On mongodb logs with verbosity of "-vvvvv" I can see few suspicious lines:
User Assertion: Unauthorized: command endSessions requires authentication src/mongo/db/commands.cpp
Cancelling outstanding I/O operations on connection to 127.0.0.1:33134
And I've also found that it doesn't exceed 12 open connections at the same time. It always waits to close one before opening a new one.
Other key points:
Mongoose doesn't return any value or raise any error; it just hangs without reporting anything.
The Terminus health check is able to ping the DB and returns a healthy status.
The NestJS API itself still works - I'm able to send new requests and receive responses. Only requests that touch the faulty connection hang.
When I inject the connection and check its readyState, it returns connected.
Restarting the API fixes it immediately.
MongoDB itself keeps working as normal.
Increasing the Mongoose poolSize lets it handle more requests at the same time, but it still crashes under a larger number of requests.
My main question here is how do I handle this case? Currently, I've added another health check to try and send a query to the problematic connection every half a minute, and k8s restarts the pod if it determines a failure. This works but it is not optimal.

I am trying to create simple flask_sockets client-server, but getting 404

I'm trying to make a socket connection between two Python files for testing. My server should upload some data to listening clients, and I'm testing it by creating a dummy client. After the client connects, I'm getting
websocket._exceptions.WebSocketBadStatusException: Handshake status 404 NOT FOUND
Unfortunately I couldn't find any solution online for this error.
import time
from threading import Thread

from flask import Flask
from flask_socketio import SocketIO
from flask_sockets import Sockets
from websocket import create_connection

app = Flask(__name__)
socketio = SocketIO(app)
sockets = Sockets(app)

@sockets.route('/socket_test')
def update_time(ws):
    while not ws.closed:
        ws.send('hello world')
        time.sleep(1)

class Client(Thread):
    def __init__(self):
        super().__init__()

    def run(self):
        time.sleep(0.5)
        ws = create_connection('ws://localhost:5000/socket_test')
        while True:
            ws.recv()

if __name__ == '__main__':
    k = Client()
    k.start()
    socketio.run(app)
I would like the client to receive the hello world messages from the server.

Modifying the Request parameters throws Internal Server Error

When using Python Eve database hooks, I am trying to modify the request parameters on a POST call, and I am getting the error below:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>
Internal Server Error
</h1>
<p>The server encountered an internal error and was unable to complete your
request.Either the server is overloaded or there is an error in the
application.
</p>
When I remove the code fragment that modifies the request parameters, the resource is created successfully.
Here is the code snippet:
__author__ = 'sappal'

from eve import Eve
import time

def insert_people(items):
    # retrieve request parameter, if present
    print(items['userid'])
    print(items['email'])
    items['userid'] = "Tushar_Sappal" + str(int(time.time()))
    items['email'] = "sappal.tushar" + str(int(time.time())) + "@gmail.com"
    print(items)

# Creating the instance of the Eve application
app = Eve()
app.on_insert_people += insert_people

if __name__ == '__main__':
    app.run(host='0.0.0.0')
items is a list so you should update your code like this:
def insert_people(items):
    for item in items:
        item['userid'] = "Tushar_Sappal" + str(int(time.time()))
        item['email'] = "sappal.tushar" + str(int(time.time())) + "@gmail.com"
While in development, you usually want to run your application in debug mode, so you can get a full stack-trace with the error:
app.run(debug=True)
Just make sure to disable the debug mode when running in production.
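The fix works because Eve passes insert hooks the whole list of documents, and mutating each dict in place is enough: the caller sees the same objects. A framework-free sketch of that in-place mutation (the docs list below is illustrative):

```python
import time

def insert_people(items):
    # Eve-style insert hook: items is a list of document dicts,
    # so iterate and mutate each document in place.
    for item in items:
        item['userid'] = "Tushar_Sappal" + str(int(time.time()))
        item['email'] = "sappal.tushar" + str(int(time.time())) + "@gmail.com"

docs = [{'userid': 'a', 'email': 'a'}, {'userid': 'b', 'email': 'b'}]
insert_people(docs)
# The caller's list now holds the updated documents.
```

Indexing the list itself with a string key, as in the original items['userid'], raises a TypeError, which Flask surfaces as the 500 Internal Server Error.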