Multiple simultaneous requests producing strange results - MongoDB

I have a confusing problem with several Spring Boot endpoints that fetch data from MongoDB (version 3.6.8, three-node cluster).
I have tried requesting each endpoint on its own, and the response is normal, like this:
myhost:123/getId
{"id":"123"}
myhost:123/getName
{"name":"myname"}
When I make lots of simultaneous requests, the queries themselves run normally, but sometimes the responses get swapped, like this:
myhost:123/getId
{"name":"myname"}
or
myhost:123/getName
{"id":"123"}
That throws an exception on the client, because the expected key is not found. I don't know where the problem is, because nothing returns an error; the endpoints just return each other's results.
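For what it's worth, the classic way to produce exactly this symptom is shared mutable state in a singleton bean. The controller below is a hypothetical minimal reproduction, not code from the question:

import java.util.Map;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class LookupController {

    // BUG: Spring controllers are singletons, so this field is shared
    // by every concurrent request.
    private Map<String, String> result;

    @GetMapping("/getId")
    public Map<String, String> getId() {
        result = Map.of("id", "123");      // request B may overwrite this...
        return result;                     // ...before request A returns it
    }

    @GetMapping("/getName")
    public Map<String, String> getName() {
        result = Map.of("name", "myname");
        return result;
    }
}

// Fix: drop the field and keep per-request state in local variables,
// e.g. simply: return Map.of("id", "123");

If your controllers and services only use local variables, the same race can still hide in a hand-rolled cache or any other field that is written per request.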

Related

RESTful response for data corruption in a single entity when getting multiple entities

I'm currently facing a dilemma in choosing the most appropriate response for a REST API that GETs multiple entities when one of the entities has a data corruption error. Say I have a REST API like the following:
GET /employees?department=&manager=
that returns a list of employees, perhaps with some filtering applied.
When getting the data from upstream (a DB, or another web service, etc.), I discover that the data for one of the employees that match the condition is corrupted. For example, the data cannot be parsed or does not meet some precondition that is necessary for that data entity.
What would be the most appropriate RESTful response to this, from an API point of view? Should I continue processing all the other employees and simply omit the corrupted one from the response, error out with 500 Internal Server Error, or include the error in a separate field of the response while returning the other "good" employees?
I know this is somewhat opinion-based, but some advice would be greatly appreciated.
If you want to return an error, this (to me) is a server error, and I think that 500 is indeed the most appropriate status.
Whether you should return an error or an incomplete list with warnings depends on what your application requires.
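If you go with the incomplete list, one illustrative shape (all names here are made up, not prescribed by REST) is a wrapper that carries both the good entities and a warning per corrupted record:

import java.util.List;

// Illustrative response shape: the employees that could be read,
// plus a description of each record that failed.
public record EmployeeListResponse(List<Employee> employees,
                                   List<EntityWarning> warnings) {

    public record Employee(String id, String name) {}

    // e.g. { "entityId": "emp-42", "message": "record failed validation" }
    public record EntityWarning(String entityId, String message) {}
}

Serialized by Jackson, this still lets you return 200 for the partial list, while clients that care about completeness can check whether warnings is empty.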

Play Framework Database Related endpoints hang after some up time

I have a situation that is almost identical to the one described here: Play framework resource starvation after a few days
My application is simple: Play 2.6 + PostgreSQL + Slick 3.
The DB retrieval operations are Slick-only and simple as well.
The usage scenario is that data comes in through one endpoint, gets stored in the DB (some actors store data asynchronously, which can fail with the default strategy), and is served through REST endpoints.
So far so good.
After a few days, every endpoint that touches the database stops responding. The application is served from a single t3.medium instance connected to an RDS instance. The connection count to RDS is always the same and stable, mostly idling.
What I have also noticed is that the database actually gets called and the query gets executed, but the request never completes and no data comes back.
The simplest endpoint (a POST) is for posting feedback - basically a one-liner:
feedbackService.storeFeedback(feedback.deviceId, feedback.message).map(_ => Success)
This Success is a wrapper around Ok("something"), so no magic there.
The feedback service stores one record in the DB in the Slick-preferred way; nothing crazy there either.
Once the feedback POST is called, I can see in the psql client that the INSERT query has been executed and the data really ends up in the database, but the HTTP request never completes and no success response is returned. In parallel, calls to non-DB endpoints that return values, such as the status endpoint, go through without problems.
Production logs don't show anything, and restarting helps for a day or two.
I suppose some kind of resource starvation is happening, but which kind, and where, is currently beyond me.
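One concrete place to look for this kind of starvation is the Slick/HikariCP pool configuration. Below is a sketch of the knobs involved, assuming play-slick's standard configuration keys; the values are illustrative, not a diagnosis:

# application.conf - illustrative values only
slick.dbs.default {
  profile = "slick.jdbc.PostgresProfile$"
  db {
    # Slick 3.2+ defaults to maxConnections = numThreads; keeping them
    # equal avoids a class of pool deadlocks under load.
    numThreads = 20
    maxConnections = 20
    queueSize = 1000
    # Passed through to HikariCP: log any connection held longer than
    # this many ms, which helps spot leaks that slowly drain the pool.
    leakDetectionThreshold = 60000
  }
}

With leak detection on, a leaked session shows up in the logs long before the pool is fully drained, which would distinguish a connection leak from genuine thread-pool starvation.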

Apache geode failing with OOM exception

I am trying to create a REST application on top of Apache Geode. The application works well with limited data, but when I need to get the complete data set (~0.8M), it fails with an OOM exception on the server.
Exception :
HTTP GET Error: 500
REST OQL Response: {"cause":"Handler dispatch failed; nested exception is java.lang.OutOfMemoryError: Java heap space"}
I tried the same approach with a cache client and it works seamlessly, but we need to use REST to integrate with our application.
Any ideas for working around this?
I am thinking of the following approaches:
Can we break up the data on the server side and use something like "Range" requests with Apache Geode? I tried this quickly, but it did not work well.
Can we fetch the data into smaller buffers on the client side and read it buffer by buffer?
Is it possible to get data out from Geode as a data-stream?
Thanks,
Abhay
Would it be possible for you to share the stack trace for the OOM you are getting? Are you saying the results are 0.8 megabytes? That doesn't seem like it should cause an OOME.
You can get ranges of data using OQL queries, but if your data set is really that small, it seems like something else strange is going on with the REST API.
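To illustrate the OQL-range idea: Geode's REST API accepts ad hoc OQL through its query resource, and OQL supports LIMIT, so paging can be approximated by combining LIMIT with a filter on a key field. The region name /orders and the id field below are made up:

GET /gemfire-api/v1/queries/adhoc?q=SELECT * FROM /orders o WHERE o.id > 1000 AND o.id <= 2000 LIMIT 1000

In a real request the query string would need to be URL-encoded, and the client would advance the id window on each call until a page comes back short.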

How to design a RESTful api for slow-generated resources or job status?

I am trying to design a RESTful API for a service that accepts a bunch of parameters and generates a large result. This is my first RESTful project. One tricky part is that the server needs some time (up to a few minutes) to generate the result. My current thought is to use POST to send in all the parameters; the server response can be a job id.
I can then retrieve the result using GET /result/{job_id}. The problem is that the result is not available for the first few minutes. Maybe I can return "resource unavailable" at the beginning and the result once it is available, but this feels odd and adds some odd logic to the client.
An alternative is to retrieve the job status with GET /job_status/{job_id}, where the status might be running/error/done, similar to an HTTP status code, and where the done status also comes with a result_id. Then I can retrieve the result with GET /result/{result_id}.
Either approach has some problems with respect to what I have read about GET. In both cases, the GET result is not fixed and not cacheable at the beginning, while the job is still running. On the other hand, I have read that it is OK to do things like GET /currentWeather or GET /currentTime, which are similar to at least my second approach. So my questions are:
Which one is better? Why?
Should I use GET for such situation?
Or neither one is OK? What would you do?
Thank you very much.
Should I use GET?
For long-running operations, one approach is to set Expires or max-age headers on your response appropriately. Here is an example: Best practice for implementing long-running searches with REST
But I recommend the RESTy Long-op Protocol for your case.
Your solution will be more robust and more client-friendly.
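To make the recommendation concrete, here is a sketch of the exchange under the RESTy Long-op Protocol (the paths and the id 42 are illustrative):

POST /jobs
-> 202 Accepted
   Location: /jobs/42

GET /jobs/42                 (while the job is still running)
-> 200 OK
   {"status": "running"}

GET /jobs/42                 (once the job has finished)
-> 303 See Other
   Location: /result/42

GET /result/42
-> 200 OK
   (the generated result)

This also addresses the cacheability concern: only the transient status resource changes over time, while /result/42, once it exists, is stable and safely cacheable.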

Neo4j: Cypher over REST get summary of operations

Is there any way, when using the REST API, to get a summary of the operations that completed, without returning the nodes?
When using the web admin console, after doing an operation I get a summary like:
1 node inserted
2 relationships inserted
1 node deleted.
In the examples here, I notice there is no example of summary information being sent back to the client. I would have to return the inserted nodes to know the insert had occurred.
When making a request over the network, it is often a good idea to minimize the response size, and a quick summary would help with that. Is it possible to get one from the REST endpoint?
I'm pretty sure this is not possible. It would be a nice addition, though. Have you filed a feature request?