BulkDocs API used to save a CouchDB document is taking more time compared to the PUT method?

The Chrome network tab shows the request and response timing for both API calls.
Analyzing those timings, the BulkDocs API takes about 2x as long to save a document in CouchDB; sometimes this grows to 3x or 4x, depending on how long the request waits for the server response.
At the same time, the PUT method takes about a quarter of that time to save the data, and this PUT request is issued from another API. So saving records with PUT requests appears to be faster than using the BulkDocs API.
Below are the requests, responses, and screenshots for reference.
BulkDocs Request:
{"docs":[{"_id":"pfm718215_2_BE1A8AC4-EB53-4C8E-B3F7-5D4FB4329963","data":{"pfm718093_1595329":null,"pfm_718215_id":null,"createdby":52803,"createdon":1665575674775,"lookupname":null,"lookupmail":null,"lastmodifiedby":52803,"lastmodifiedon":1665575674775,"guid":"Xj0JpEofDDy37Z2","name":"test","pfm_718093_1595327_id":null,"display_name":"pfm718215","couch_id":null,"couch_rev_id":null,"pfm718093_1595325":null,"pfm_718093_1595325_id":null,"pfm718093_1595327":null,"pfm_718093_1595329_id":null,"type":"pfm718215","sync_flag":"C","org_id":3}}],"new_edits":true}
BulkDocs Response:
[{
"ok": true,
"id": "pfm718215_2_BE1A8AC4-EB53-4C8E-B3F7-5D4FB4329963",
"rev": "1-05f3e8e3e96844cb51a8143891b81d16"
}]
BulkDocs Timings Screenshot:
(screenshots of the Chrome DevTools Headers, Request, and Timing tabs)
PUT Request:
{"webserviceInput":{"processInfo":{"orgId":3,"userId":52803},"dataParams":{"data":{"pfm718093_1595329":null,"pfm_718215_id":null,"createdby":52803,"createdon":1665569303482,"lookupname":null,"lookupmail":null,"lastmodifiedby":52803,"lastmodifiedon":1665569303482,"guid":"DRtlY2FlKAwVHBq","name":"test","pfm_718093_1595327_id":null,"display_name":"pfm718215","couch_id":null,"couch_rev_id":null,"pfm718093_1595325":null,"pfm_718093_1595325_id":null,"pfm718093_1595327":null,"pfm_718093_1595329_id":null,"type":"pfm718215","sync_flag":"C","org_id":3}},"sessionType":"NODEJS"}}
PUT Response:
{
"ok": true,
"id": "5f1eee08c843d01257c8b698d923fb02",
"rev": "1-0f67c9b8c2acf7aead7e991e344b04df"
}
PUT Timings Screenshot:
(screenshots of the Chrome DevTools Headers, Request, and Timing tabs)
CouchDB Version Details:
CouchDB 3.2.0 with {"erlang_version":"20.3.8.26","javascript_engine":{"name":"spidermonkey","version":"1.8.5"}}
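For anyone reproducing the comparison outside the browser, the two call shapes can be sketched as below. The host and database name are assumptions, and the custom PUT wrapper endpoint from the question is server-specific, so only a plain CouchDB document PUT is modelled here:

```python
import json

COUCH_URL = "http://localhost:5984/mydb"  # assumed host and database name

def build_bulk_docs_request(docs, new_edits=True):
    """Build method, URL, and JSON body for a _bulk_docs save."""
    body = {"docs": docs, "new_edits": new_edits}
    return "POST", f"{COUCH_URL}/_bulk_docs", json.dumps(body)

def build_put_request(doc_id, doc):
    """Build method, URL, and JSON body for a single-document PUT."""
    return "PUT", f"{COUCH_URL}/{doc_id}", json.dumps(doc)
```

When timing the two, it is fairest to send identical payloads, and to also try _bulk_docs with batches larger than one document: _bulk_docs is designed for multi-document saves, so its per-request overhead is only amortised over larger batches.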

Related

How do I track down slow queries in Cloudant?

I have some queries running against my Cloudant service. Some of them return quickly but a small minority are slower than expected. How can I see which queries are running slowly?
IBM Cloud activity logs can be sent to LogDNA Activity Tracker; each log item has latency measurements, allowing you to identify which queries run slower than others. For example, a typical log entry looks like this:
{
"ts": "2021-11-30T22:39:58.620Z",
"accountName": "xxxxx-yyyy-zzz-bluemix",
"httpMethod": "POST",
"httpRequest": "/yourdb/_find",
"responseSizeBytes": 823,
"clientIp": "169.76.71.72",
"clientPort": 31393,
"statusCode": 200,
"terminationState": "----",
"dbName": "yourdb",
"dbRequest": "_find",
"userAgent": "nodejs-cloudant/4.5.1 (Node.js v14.17.5)",
"sslVersion": "TLSv1.2",
"cipherSuite": "ECDHE-RSA-CHACHA20-POLY1305",
"requestClass": "query",
"parsedQueryString": null,
"rawQueryString": null,
"timings": {
"connect": 0,
"request": 1,
"response": 2610,
"transfer": 0
},
"meta": {},
"logSourceCRN": "crn:v1:bluemix:public:cloudantnosqldb:us-south:a/abc12345:afdfsdff-dfdf34-6789-87yh-abcr45566::",
"saveServiceCopy": false
}
The timings object contains various measurements, including the response time for the query.
For compliance reasons, the actual queries are not written to the logs, so to match queries to log entries you could put a unique identifier in the query string of the request, which would appear in the rawQueryString parameter of the log entry.
For more information on logging see this blog post.
Another option is to simply measure HTTP round-trip latency.
Once you have found your slow queries, have a look at this post for ideas on how to optimise queries.
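The tagging-and-timing ideas above can be sketched like this (the database URL is an assumption and the `tag` query-parameter name is arbitrary; any unique value that shows up in `rawQueryString` will do):

```python
import time
import uuid
from urllib.parse import urlencode

def tag_query_url(base_url, params=None):
    """Append a unique tag to the query string so the request can later
    be matched against the rawQueryString field of the log entry."""
    tag = uuid.uuid4().hex
    params = dict(params or {}, tag=tag)
    return f"{base_url}?{urlencode(params)}", tag

def timed(request_fn, *args, **kwargs):
    """Measure HTTP round-trip latency around any request function."""
    start = time.monotonic()
    result = request_fn(*args, **kwargs)
    return result, time.monotonic() - start
```

After sending the tagged request, search the activity logs for the returned `tag` value to find the matching entry and its `timings` object.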

How to analyze requests (with queries) on swagger and send different response body?

Is it possible to analyze the request (based on query parameters) on SwaggerHub with OpenAPI 3.0?
For example, I need to reproduce the following.
For the request getUser?id=1, swagger has to send this response to the client:
{
"user_id": "1",
"user_name": "Alex"
}
For the request getUser?id=2, swagger has to send this response to the client:
{
"user_id": "2",
"user_name": "Bob"
}
If that is possible, could you help me with this, please?
I guess your question is about the SwaggerHub mock server. According to the documentation, this is not supported:
Note that the mock does not support business logic, that is, it cannot send specific responses based on the input.

Proper response to client for a RESTful PUT endpoint for updating multiple entities in a single batch?

For a standard REST PUT request to update a single entity, for example a document, using an endpoint that looks something like this:
[Route("documents/{id}")]
public void Put(int id, [FromBody]Document document)
there is a well-defined way to use HTTP status codes to communicate with the client: an HTTP 200 status for a successful update, an HTTP 404 if the document with the specified Id was not found, an HTTP 500 if there was a problem updating the record, etc.
My issue is that we have a RESTful API with potentially extremely high usage. For performance reasons, we would like to create an endpoint that accepts multiple document entities to update in a single PUT operation:
[Route("documents")]
public void Put([FromBody]IEnumerable<Document> documents)
with input such as this:
[
{"Id":1,"Name":"doc one","Author":"Fred"},
{"Id":2,"Name":"doc two","Author":"John"},
{"Id":3,"Name":"doc three","Author":"Mary"}
]
If a user submits 10 documents and I am only able to successfully update 9 of them, with the remaining one failing due to some issue, I would like to commit the 9 successfully updated documents and then communicate to the user which updates succeeded and which failed.
One approach I could take is that if any of the submitted documents successfully update, return an HTTP 200. In the response object that I return to the client, I can include a list of those documents that succeeded and a list of documents that failed. For each of those that failed, I can include the reason why, along with maybe an HTTP status code for each failed document.
But should I be returning an HTTP 200 if some of the requests failed? This approach counts on the client to inspect the list of failed documents to see if there are problems. My fear is that the user will see the HTTP 200 and assume everything is fine.
The other option is that if the client submits 10 documents, and 9 of them update successfully while one fails, I return the HTTP status code for the one that failed. For example, if it failed because the specified Id could not be found, return an HTTP 404; if it failed because the DB was unavailable, return an HTTP 500, etc.
This approach also has problems. For example, if two documents fail for different reasons, which HTTP status code should be returned? And does it make sense to return, for example, an HTTP 500 status for a request that successfully updated some of the items?
Do the REST guidelines give any suggestions for this issue of batch updates? Are there any recommended approaches for this issue?
HTTP status 207 Multi-Status can be used to handle batch processing.
When processing more than one entity, your API can return a 207 status response containing a list of responses:
Each entity and response share a key, allowing the consumer to know which response corresponds to which provided entity. In the provided use case, the document's Id could be used as the key.
Each response contains the same data you would have received when processing the corresponding entity alone (including the HTTP status).
The RFC states that the message is in XML, but you can use JSON with your own structure.
You can take a look at the Jive API, which handles batch processing, to see an example.
Given the input
[
{"Id":1,"Name":"doc one","Author":"Fred"},
{"Id":2,"Name":"doc two","Author":"John"},
{"Id":3,"Name":"doc three","Author":"Mary"}
]
A full success would return a 207 HTTP status, with the response containing three 200 HTTP statuses:
[
{
"Id": 1,
"status": 200,
"data" : { data returned for a single processing }
},
{
"Id": 2,
"status": 200,
"data" : { data returned for a single processing }
},
{
"Id": 3,
"status": 200,
"data" : { data returned for a single processing }
}
]
If there's a problem with the entity with Id 3, such as a missing author:
[
{"Id":1,"Name":"doc one","Author":"Fred"},
{"Id":2,"Name":"doc two","Author":"John"},
{"Id":3,"Name":"doc three"}
]
The response will still be a 207, but will contain two 200 HTTP statuses for Ids 1 and 2, and a 400 status for Id 3.
[
{
"Id": 1,
"status": 200,
"data" : { data returned for a single processing }
},
{
"Id": 2,
"status": 200,
"data" : { data returned for a single processing }
},
{
"Id": 3,
"status": 400,
"data" : { data returned for a single processing 400 error }
}
]
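The per-entity pattern above can be sketched as follows (a framework-agnostic Python sketch; the "missing Author means 400" rule and the response shape are taken from the example, everything else is an assumption):

```python
def process_document(doc):
    """Process one document and return its individual status entry.
    A missing Author is treated as a 400, as in the example above."""
    if "Author" not in doc:
        return {"Id": doc["Id"], "status": 400,
                "data": {"error": "Author is required"}}
    # ... persist the document here ...
    return {"Id": doc["Id"], "status": 200, "data": {"updated": True}}

def put_documents(docs):
    """Batch endpoint: always 207, with one entry per input document,
    keyed by Id so the client can match entries to its inputs."""
    return 207, [process_document(d) for d in docs]
```

Each successful document is committed independently, so a partial failure leaves the 9 good updates in place while the 207 body tells the client exactly which Id failed and why.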

Bing Maps REST API response does not return any coordinates even response is "OK" and X-MS-BM-WS-INFO is "1"

We have a .NET application that retrieves a geolocation based on an unstructured query, e.g.:
http://dev.virtualearth.net/REST/v1/Locations?q=Australia%20Homebush%20Bay&key=
In our network the response is "OK" and the coordinates are returned in JSON format, but in another network (e.g., a customer network) the response is "OK" while "coordinates" does not return any items and estimatedTotal is "0".
When we used Fiddler, the X-MS-BM-WS-INFO header was "1".
Please advise.
This is documented here: http://msdn.microsoft.com/en-us/library/ff701703.aspx under Error Handling. When X-MS-BM-WS-INFO is "1", it indicates that the request has been rate limited. Rate limiting occurs on trial and basic Bing Maps accounts when the frequency of requests is such that your account would exceed the free usage allowance.
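A client can guard against mistaking this rate-limited "OK" for a genuine empty result by checking the header first. A minimal sketch (the resourceSets/resources/point path follows the Locations response documentation, but treat the exact shape as an assumption for your response version):

```python
def is_rate_limited(headers):
    """Bing Maps signals rate limiting with the X-MS-BM-WS-INFO: 1 header
    while still returning an HTTP 200 / "OK" status."""
    return headers.get("X-MS-BM-WS-INFO") == "1"

def extract_coordinates(headers, body):
    """Return coordinates, refusing to treat a rate-limited empty
    resultset as a genuine 'no results' answer."""
    if is_rate_limited(headers):
        raise RuntimeError("request was rate limited; retry later")
    resources = body["resourceSets"][0]["resources"]
    return [r["point"]["coordinates"] for r in resources]
```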

Youtube REST API v3 - include statistics for video in search query result

I want to perform search queries using Youtube API v3.
What I need is to retrieve video ids and statistics for each video.
From the docs I can see that statistics is not returned for video items. If I try to ask for statistics using this query:
https://www.googleapis.com/youtube/v3/search?type=video&part=snippet,statistics&q=kittens&key={MY_KEY}
I receive an error:
{
"error": {
"errors": [
{
"domain": "youtube.part",
"reason": "unknownPart",
"message": "statistics",
"locationType": "parameter",
"location": "part"
}
],
"code": 400,
"message": "statistics"
}
}
So I guess that I need to make two requests:
Perform the actual search and retrieve the list of video ids.
Make an API request to https://developers.google.com/youtube/v3/docs/videos/list to retrieve statistics for each video.
Or maybe I'm missing something and there's a way to get statistics for videos within one search query?
In the guide, they specify that "the part names that you can include in the parameter value are id and snippet" when using https://www.googleapis.com/youtube/v3/search (statistics is not an accepted value).
So I think you have to make two requests, as you say; at least, that is what I'm doing. I couldn't find any other solution. I would be interested to know if there is a workaround...
To avoid returning redundant data and wasting bandwidth on extra data, "video search data" and "video statistics" are decoupled in the API.
You are right about the two calls.
In general, to get a faster response, request only the "part"s that you will actually use in your application.
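The two-step flow can be sketched like this. Only the request URLs are built; the API key and the network call itself are left out, and the parameter names follow the search and videos.list endpoints discussed above:

```python
from urllib.parse import urlencode

SEARCH_URL = "https://www.googleapis.com/youtube/v3/search"
VIDEOS_URL = "https://www.googleapis.com/youtube/v3/videos"

def build_search_url(query, key):
    # Step 1: search; only id/snippet parts are allowed here,
    # and part=id is enough since we only need the video ids.
    params = {"type": "video", "part": "id", "q": query, "key": key}
    return f"{SEARCH_URL}?{urlencode(params)}"

def build_stats_url(video_ids, key):
    # Step 2: videos.list accepts the statistics part and a
    # comma-separated list of ids taken from the search results.
    params = {"part": "statistics", "id": ",".join(video_ids), "key": key}
    return f"{VIDEOS_URL}?{urlencode(params)}"
```

Since videos.list accepts many ids per call, one search page can usually be resolved to statistics with a single second request rather than one request per video.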