I am writing a unit test that checks whether an object can be found after being inserted into MongoDB. My unit test looks like this:
from dotenv import dotenv_values
from fastapi import FastAPI
from fastapi.testclient import TestClient
from pymongo import MongoClient
# review_router is imported from the application under test

class TestReviewCRUD:
    app = FastAPI()
    config = dotenv_values("../.env")
    app.include_router(review_router, tags=["reviews"], prefix="/review")

    def setup_method(self):
        self.app.db_client = MongoClient(f'mongodb://{self.config["DB_USER"]}:{self.config["DB_PASSWORD"]}@localhost:27017/')
        self.app.db = self.app.db_client[self.config['TEST_DB_NAME']]

    def teardown_method(self):
        self.app.db_client.close()

    def test_get_review(self):
        with TestClient(self.app) as client:
            response = self.given_a_new_review(client)
            assert response.status_code == 201  # <- this works
            new_review = client.get(f'/review/{response.json().get("_id")}')
            assert new_review.status_code == 200  # <- this doesn't work
The element seems to be added to the database (per the 201 HTTP code), and if I go into the Docker container I can see it in the Mongo database, but that GET keeps failing. I'm not that versed in Python, so maybe I am missing something? My GET handler is structured as:
@router.get("/{id}", response_description="Get a single review by id", response_model=Review)
def find_review(id: str, request: Request):
    review = request.app.db["my_db"].find_one({"_id": ObjectId(id)})
    if review is not None:
        return review
    raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"Review with ID {id} not found")
If I look for an existing ID it works; it fails only when I insert a new object and immediately look it up.
Could someone shed some light, please?
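One way to narrow this down is to check whether the freshly inserted document's _id is stored as a string or as an ObjectId, since find_review always queries with ObjectId(id); a mismatch there would produce exactly this 404-on-fresh-inserts symptom. A purely diagnostic sketch (the collection name "my_db" comes from find_review above; nothing else here is from the original code):

from bson import ObjectId

inserted_id = response.json().get("_id")
collection = self.app.db["my_db"]

# Matches if the create endpoint stored the id as a plain string
print(collection.find_one({"_id": inserted_id}))
# Matches if the id was stored as a real ObjectId
print(collection.find_one({"_id": ObjectId(inserted_id)}))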
I will start by mentioning that I am very new to Scala, but I now have to maintain legacy code into which some new features are being introduced.
I have the following code, where a list comes in as a parameter and a new output needs to be produced from it. However, it seems the code is not waiting for the response from the external service while processing:
def historyBet(jackpotListUser: List[JackpotBetHistory])(implicit MC: AppMarkerContext): List[LegacyJackpotHistoryResponse] =
  for {
    bet <- jackpotListUser
    prize = jackpotIntegratorService.findJackpotByJackpotHumanId(bet.jackpotHumanId) match {
      case Some(jackpot: JackpotResponse) =>
        // ... extra code extracting the prize from jackpot: JackpotResponse ...
    }
    // ... extra code generating result with prize ...
  } yield result
How can I make the call to jackpotIntegratorService.findJackpotByJackpotHumanId execute at that point, instead of returning an F[Option[...]]?
def findJackpotByJackpotHumanId(
    jackpotHumanId: JackpotHumanId
)(implicit MC: AppMarkerContext): F[Option[JackpotResponse]] =
  jackpotIntegratorRepo.findJackpotByJackpotHumanId(jackpotHumanId)
where it is finally implemented as:
override def findJackpotByJackpotHumanId(
    jackpotHumanId: JackpotHumanId
)(implicit mc: AppMarkerContext): IO[Option[JackpotResponse]] =
  // ... code calling an API which returns the IO ...
Thanks!
I thought I could do IO.await somewhere... but I am not sure where or how, because in the historyBet function I get an F[] where it was an IO. So what is the syntax to wait for the response and then continue?
Extra comment:
The real issue we noticed is that the method call starts (the logs show part of it), but the caller inside the map continues as well:
prize = jackpotIntegratorService.findJackpotByJackpotHumanId
This part of the code continues even though we want prize to be the final JackpotResponse object, not the IO or F.
So, if your method needs to call an IO, then it must return an IO, unless you unsafeRunSync it... but, as the name suggests, you should not do that.
So the return type is now IO[List[LegacyJackpotHistoryResponse]], and it can be implemented like this:
import cats.implicits._ // brings the .traverse syntax into scope

def historyBet(jackpotListUser: List[JackpotBetHistory])(implicit MC: AppMarkerContext): IO[List[LegacyJackpotHistoryResponse]] =
  jackpotListUser.traverse { bet =>
    jackpotIntegratorService.findJackpotByJackpotHumanId(bet.jackpotHumanId).map {
      case Some(jackpot) =>
        // ...
      case None =>
        // ...
    }
  }
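The IO returned by historyBet still has to be run somewhere, but only at the very edge of the application. A minimal sketch of the two usual options, assuming cats-effect 3 (the import and runtime wiring differ on cats-effect 2, so treat the names here as illustrative), and assuming an implicit AppMarkerContext is in scope:

import cats.effect.IO
import cats.effect.unsafe.implicits.global // provides the default IORuntime

val program: IO[List[LegacyJackpotHistoryResponse]] = historyBet(jackpotListUser)

// Option 1: hand the effect to an async framework as a Future (preferred at an HTTP boundary)
val asFuture: scala.concurrent.Future[List[LegacyJackpotHistoryResponse]] = program.unsafeToFuture()

// Option 2: block the current thread until the result is ready (a last resort)
val result: List[LegacyJackpotHistoryResponse] = program.unsafeRunSync()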
I've been struggling to find a solution to my problem; I hope I've come to the right place.
I have a Django REST framework API which connects to a PostgreSQL DB, and I run bots against my own API in order to do stuff. Here is my code:
import asyncio

def get_or_create_eventloop():
    """Get the event loop only once (create it if it does not exist)."""
    try:
        return asyncio.get_event_loop()
    except RuntimeError:
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        return asyncio.get_event_loop()
My DB class, which uses asyncpg to connect and create a pool:
import asyncpg

class DB:
    def __init__(self, loop):
        self.pool = loop.run_until_complete(self.connect_to_db())

    async def connect_to_db(self):
        return await asyncpg.create_pool(host="host",
                                         database="database",
                                         user="username",
                                         password="pwd",
                                         port=5432)
My API class:
import nest_asyncio
from rest_framework.views import APIView

class Api(APIView):
    # create an event loop since this is not the main thread
    loop = get_or_create_eventloop()
    nest_asyncio.apply()  # to avoid the "loop already running" problem
    # init my DB pool directly so I won't have to connect each time
    db_object = DB(loop)

    def post(self, request):
        ...  # I want to be able to call "do_something()"

    async def do_something(self):
        ...
I have my bots running and sending POST/GET requests to my Django API via aiohttp.
The problem I'm facing is:
How do I implement the post function in my API so it can handle multiple requests, knowing that each request runs in a new thread and therefore a new event loop is created, AND that the asyncpg pool is linked to the event loop it was created on? In other words, I can't just create a new event loop; I need to keep working on the one created at the beginning so I can access my DB later (via pool.acquire, etc.).
This is what I have tried so far, without success:
def post(self, request):
    self.loop.run_until_complete(self.do_something())
This creates:
RuntimeError: Non-thread-safe operation invoked on an event loop other than the current one
which I understand: we are presumably trying to use the event loop from another thread.
I also tried to use async_to_sync from Django:
@async_to_sync
async def post(..):
    resp = await self.do_something()
The problem here is that async_to_sync CREATES a new event loop for the thread, so I won't be able to access my DB pool.
Edit: cf. https://github.com/MagicStack/asyncpg/issues/293 for that (I would love to implement something like that but can't find a way).
Here is a quick example of one of my bots (basic stuff):
import asyncio
from aiohttp import ClientSession

async def send_req(url, session):
    async with session.post(url=url) as resp:
        return await resp.text()

async def run(r):
    url = "http://localhost:8080/"
    tasks = []
    async with ClientSession() as session:
        for i in range(r):
            task = asyncio.create_task(send_req(url, session))
            tasks.append(task)
        responses = await asyncio.gather(*tasks)
        print(responses)

if __name__ == '__main__':
    asyncio.run(run(10))  # e.g. fire 10 concurrent requests
Thank you in advance
After days of looking for an answer, I found the solution to my problem: I just used the psycopg 3 package instead of asyncpg (now I can put @async_to_sync on my post function and it works).
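For anyone landing here later, a rough sketch of what that can look like (the connection string, query, and response shape are placeholders and not from the original project; this also opens a connection per request instead of reusing the asyncpg pool from the question):

import psycopg
from asgiref.sync import async_to_sync
from rest_framework.response import Response
from rest_framework.views import APIView

class Api(APIView):
    @async_to_sync
    async def post(self, request):
        # Placeholder conninfo and query -- adapt to your own settings.
        async with await psycopg.AsyncConnection.connect(
            "host=host dbname=database user=username password=pwd port=5432"
        ) as conn:
            async with conn.cursor() as cur:
                await cur.execute("SELECT 1")
                rows = await cur.fetchall()
        return Response({"rows": rows})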
From the documentation I can see that I should be able to use WriteResult.ok, WriteResult.code and WriteResult.n to understand errors and the number of updated documents, but this isn't working. Here is a sample of what I'm doing (using ReactiveMongo / the Play JSON collection plugin):
def updateOne(collName: String, id: BSONObjectID, q: Option[String] = None) = Action.async(parse.json) { implicit request: Request[JsValue] =>
  val doc = request.body.as[JsObject]
  val idQueryJso = Json.obj("_id" -> id)
  val query = q match {
    case Some(_) => idQueryJso.deepMerge(Json.parse(q.get).as[JsObject])
    case None => idQueryJso
  }
  mongoRepo.update(collName)(query, doc, manyBool = false).map(result => writeResultStatus(result))
}
def writeResultStatus(writeResult: WriteResult): Result = {
  // NOT WORKING
  if (writeResult.ok) {
    if (writeResult.n > 0) Accepted else NotModified
  } else BadRequest
}
Can I give an alternative approach here? You said:
"in order to understand errors and the number of updated documents but this isn't working"
Why don't you use the logging functionality that Play provides? The general idea is:
You set the logging level (e.g., only warnings and errors, or errors only, etc.).
You use the log to output a message in either case, whether something is OK or not.
Play saves the logs of your application while it is running.
You, the maintainer/developer, can look into the logs to check whether there are any errors.
This approach opens up a great possibility for the future: you could ship the logs to a third-party service and put monitoring on top of them.
Now if you look at the documentation here, you can see the different log levels and how to use the logger.
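As a rough illustration of that idea (a sketch only: the logger name and messages are made up, and the WriteResult fields are the same ones used in the question, whose availability depends on your ReactiveMongo version):

import play.api.Logger
import play.api.mvc.Result
import play.api.mvc.Results._
import reactivemongo.api.commands.WriteResult

val logger: Logger = Logger("mongo-updates")

def writeResultStatus(writeResult: WriteResult): Result =
  if (writeResult.ok) {
    if (writeResult.n > 0) {
      logger.info(s"update acknowledged, ${writeResult.n} document(s) affected")
      Accepted
    } else {
      logger.warn("update acknowledged but no document was affected")
      NotModified
    }
  } else {
    logger.error(s"update failed: ${writeResult.writeErrors.mkString(", ")}")
    BadRequest
  }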
I have a MongoDB collection where I store User documents like this:
{
  "_id": ObjectId("52d14842ed0000ed0017cceb"),
  "email": "joe@gmail.com",
  "firstName": "Joe"
  ...
}
Users must be unique by email address, so I added an index for the email field:
collection.indexesManager.ensure(
  Index(List("email" -> IndexType.Ascending), unique = true)
)
And here is how I insert a new document:
def insert(user: User): Future[User] = {
  val json = user.asJson.transform(generateId andThen copyKey(publicIdPath, privateIdPath) andThen publicIdPath.json.prune).get
  collection.insert(json).map { lastError =>
    User(json.transform(copyKey(privateIdPath, publicIdPath) andThen privateIdPath.json.prune).get).get
  }.recover {
    throw new IllegalArgumentException(s"an user with email ${user.email} already exists")
  }
}
In case of error, the code above throws an IllegalArgumentException and the caller is able to handle it accordingly. BUT if I modify the recover section like this...
def insert(user: User): Future[User] = {
  val json = user.asJson.transform(generateId andThen copyKey(publicIdPath, privateIdPath) andThen publicIdPath.json.prune).get
  collection.insert(json).map { lastError =>
    User(json.transform(copyKey(privateIdPath, publicIdPath) andThen privateIdPath.json.prune).get).get
  }.recover {
    case e: Throwable => throw new IllegalArgumentException(s"an user with email ${user.email} already exists")
  }
}
... I no longer get an IllegalArgumentException, but I get something like this:
play.api.Application$$anon$1: Execution exception[[IllegalArgumentException: DatabaseException['E11000 duplicate key error index: gokillo.users.$email_1 dup key: { : "giuseppe.greco#agamura.com" }' (code = 11000)]]]
... and the caller is no longer able to handle the exception as it should. Now the real questions are:
How do I handle the diverse error types (i.e. the ones provided by LastError) in the recover section?
How do I ensure the caller gets the expected exceptions (e.g. IllegalArgumentException)?
Finally I was able to manage things correctly. Below is how to insert a user and handle possible exceptions with ReactiveMongo:
val idPath = __ \ 'id
val oidPath = __ \ '_id

/**
 * Generates a BSON object id.
 */
protected val generateId = __.json.update(
  (oidPath \ '$oid).json.put(JsString(BSONObjectID.generate.stringify))
)

/**
 * Converts the current JSON into an internal representation to be used
 * to interact with Mongo.
 */
protected val toInternal = (__.json.update((oidPath \ '$oid).json.copyFrom(idPath.json.pick))
  andThen idPath.json.prune
)

/**
 * Converts the current JSON into an external representation to be used
 * to interact with the rest of the world.
 */
protected val toExternal = (__.json.update(idPath.json.copyFrom((oidPath \ '$oid).json.pick))
  andThen oidPath.json.prune
)

...

def insert(user: User): Future[User] = {
  val json = user.asJson.transform(idPath.json.prune andThen generateId).get
  collection.insert(json).transform(
    success => User(json.transform(toExternal).get).get,
    failure => DaoServiceException(failure.getMessage)
  )
}
The user parameter is a POJO-like instance with an internal representation in JSON – User instances always contain valid JSON, since it is generated and validated in the constructor, so I no longer need to check whether user.asJson.transform fails.
The first transform ensures there is no id already in the user and then generates a brand new Mongo ObjectID. Then the new object is inserted into the database, and finally the result is converted back to the external representation (i.e. _id => id). In case of failure, I just create a custom exception with the current error message. I hope that helps.
My experience is more with the pure Java driver, so I can only comment on your strategy for working with Mongo in general.
It seems to me that all you're accomplishing by doing the query beforehand is duplicating Mongo's uniqueness check. Even with that, you still have to percolate an exception upwards because of possible failure. Not only is this slower, but it's vulnerable to a race condition, because the combination of your query + insert is not atomic. In that case you'd have:
request 1: try to insert. email exists? false - Proceed with insert
request 2: try to insert. email exists? false - Proceed with insert
request 1: succeed
request 2: Mongo will throw the database exception.
Wouldn't it be simpler to just let Mongo throw the DB exception and throw your own IllegalArgumentException if that happens?
Also, I'm pretty sure the id will be generated for you if you omit it, and there's a simpler query for doing your uniqueness check, if that's still the way you want to code it. At least in the Java driver you can just do:
collection.findOne(new BasicDBObject("email",someemailAddress))
Take a look at the upsert mode of the update method (section "Insert a New Document if No Match Exists (Upsert)"): http://docs.mongodb.org/manual/reference/method/db.collection.update/#insert-a-new-document-if-no-match-exists-upsert
I asked a similar question a while back on ReactiveMongo's Google group. You can have another case inside the recovery block that matches a LastError object, queries its error code, and handles the error appropriately. Here's the original question:
https://groups.google.com/forum/#!searchin/reactivemongo/alvaro$20naranjo/reactivemongo/FYUm9x8AMVo/LKyK01e9VEMJ
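Applied to the insert from the question, that looks roughly like this (a sketch only: the LastError import path and its fields vary between ReactiveMongo versions, so adjust to the one you are on):

import reactivemongo.core.commands.LastError

collection.insert(json).map { _ =>
  User(json.transform(toExternal).get).get
}.recover {
  // 11000 is MongoDB's duplicate key error code
  case e: LastError if e.code.exists(_ == 11000) =>
    throw new IllegalArgumentException(s"a user with email ${user.email} already exists")
}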
When trying to update a Couchbase document that was previously deleted, the update call fails with a CASResponse.EXISTS code rather than CASResponse.NOT_FOUND. However, a call to get the previously deleted key immediately after the delete is complete returns null as expected:
def main(args: Array[String]) {
  val uris = List(URI.create("http://localhost:8091/pools"))
  val client = new CouchbaseClient(uris, "test", "password")

  client.add("asdf", "qwer").get
  client.delete("asdf").get

  val result = client.asyncGets("asdf").get
  assert(result == null)

  val response = client.asyncCAS("asdf", 1, "bar").get
  response match {
    case CASResponse.NOT_FOUND => println("Document Not Found")
    case CASResponse.EXISTS => println("Document exists")
  }
}
Adding Thread.sleep(1000) before the update call fixes the problem, so my question is: is this the expected behaviour of Couchbase?
Couchbase version is 2.2.0 enterprise edition (build-821), using the Java client version 1.2.1 in Scala 2.10.2.
Thanks
I think you probably want to do:
client.delete(key, PersistTo.MASTER).get();
You can change the PersistTo enum to other options such as ONE, TWO, or THREE (i.e., the master plus additional numbers of nodes). I tested your example with only one node and the above worked for me.
Have you tried using client.gets() instead of client.asyncGets().get(), and client.cas() instead of client.asyncCAS().get()? I think it could solve your problem.
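Roughly, that suggestion applied to the snippet from the question would look like this (a sketch only: gets/cas are the synchronous counterparts in the 1.2.x Java client, and the key/values are the ones used above):

// gets returns a CASValue holding the current CAS id; cas performs the compare-and-swap with it
val casValue = client.gets("asdf")
if (casValue != null) {
  val response = client.cas("asdf", casValue.getCas, "bar")
  response match {
    case CASResponse.OK => println("Updated")
    case CASResponse.EXISTS => println("Document exists (CAS mismatch)")
    case CASResponse.NOT_FOUND => println("Document Not Found")
    case other => println("Unexpected response: " + other)
  }
} else {
  println("Document Not Found")
}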