Why does MongoDB send error 500 when there is a duplicate key? - mongodb

When I receive the answer from MongoDB, I know that my error is a duplicate key, but why is status=500? It should be 4xx.
I'm using Node.js (Sails/Express.js):
{ "error": {
"error": "E_UNKNOWN",
"status": 500,
"summary": "Encountered an unexpected error",
"raw": {
"name": "MongoError",
"code": 11000,
"err": "E11000 duplicate key error index: eReporterDB.users.$name_1 dup key: { : \"codin\" }"
} } }

The answer for Node.js is here:
Operational errors vs. programmer errors
It's helpful to divide all errors into two broad categories:
Operational errors represent run-time problems experienced by correctly-written programs. These are not bugs in the program. In fact, these are usually problems with something else: the system itself (e.g., out of memory or too many open files), the system's configuration (e.g., no route to a remote host), the network (e.g., socket hang-up), or a remote service (e.g., a 500 error, failure to connect, or the like). Examples include:
failed to connect to server
failed to resolve hostname
**invalid user input**
request timeout
server returned a 500 response
socket hang-up
system is out of memory
Programmer errors are bugs in the program. These are things that can always be avoided by changing the code. They can never be handled properly (since by definition the code in question is broken).
tried to read property of "undefined"
called an asynchronous function without a callback
passed a "string" where an object was expected
passed an object where an IP address string was expected
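Applied to the question above, here is a minimal sketch of translating the E11000 error into a 4xx response yourself (assuming a plain Express route and the official MongoDB Node.js driver rather than Sails' Waterline layer; the connection string, database, collection and route are placeholders):

import express from "express";
import { MongoClient } from "mongodb";

const app = express();
app.use(express.json());

// Placeholder connection details for illustration only.
const client = new MongoClient("mongodb://localhost:27017");
const users = client.db("eReporterDB").collection("users");

app.post("/users", async (req, res) => {
  try {
    await users.insertOne(req.body);
    res.status(201).json(req.body);
  } catch (err: any) {
    if (err.code === 11000) {
      // E11000 duplicate key: the unique index (here on "name") rejected the insert.
      // This is invalid user input, i.e. an operational error, so answer with a
      // client error instead of letting it surface as a generic 500.
      res.status(409).json({ error: "duplicate key", details: err.message });
    } else {
      res.status(500).json({ error: "unexpected error" });
    }
  }
});

client.connect().then(() => app.listen(3000));

In Sails itself you would do the equivalent check (err.code === 11000) wherever the create is performed; unless the error is handled, the framework falls back to the generic E_UNKNOWN / 500 response shown in the question.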

Related

Is it possible to run more than one command at the same time on Azure Data Explorer without increasing the Instance Count?

I have a scenario in which I need to export multiple CSVs from Azure Data Explorer, for which I am using the .export command. When I try to run this request multiple times I get the following error:
TooManyRequests (429-TooManyRequests): {
  "error": {
    "code": "Too many requests",
    "message": "Request is denied due to throttling.",
    "#type": "Kusto.DataNode.Exceptions.ControlCommandThrottledException",
    "#message": "The control command was aborted due to throttling. Retrying after some backoff might succeed. CommandType: 'DataExportToFile'
Is there a way I can handle this without increasing the instance count?
You can alter the capacity policy: https://learn.microsoft.com/en-us/azure/data-explorer/kusto/management/capacitypolicy
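The error text itself says that retrying after some backoff might succeed, so besides raising the capacity you can also have the client back off and retry the .export command. A rough TypeScript sketch (issueExportCommand and the retry parameters are placeholders, not part of any Kusto SDK):

// Retry a throttled control command with exponential backoff.
// issueExportCommand is a placeholder for however you submit the .export
// command (e.g. via your Kusto client of choice); it should reject on HTTP 429.
async function exportWithBackoff(
  issueExportCommand: () => Promise<void>,
  maxAttempts = 5,
  baseDelayMs = 2000
): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await issueExportCommand();
      return;
    } catch (err: any) {
      const throttled = err?.status === 429 || err?.code === "Too many requests";
      if (!throttled || attempt === maxAttempts) throw err;
      const delayMs = baseDelayMs * 2 ** (attempt - 1);
      console.warn(`Export throttled (attempt ${attempt}), retrying in ${delayMs} ms`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

Serializing or throttling the exports on the client side keeps you under the cluster's export capacity without touching the instance count.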

Is it possible to configure the logs with WARN, ERROR, INFO logging keys to ease the burden of monitoring the system?

Is there a way to set up the logging system to print the message level and type, so it is easy to understand when there is a problem with Mongo?
I have seen that it sometimes prints information worth attention, but not labeled in a way that makes it easy to recognize, for example as warning, error, info, etc.:
2019-03-18T14:57:06.683+0100 I REPL_HB [replexec-0] Error in heartbeat (requestId: 3) to 10.100.xxx.xxx:27117, response status: NetworkInterfaceExceededTimeLimit: Couldn't get a connection within the time limit
2019-03-18T14:57:12.683+0100 I ASIO [Replication] Connecting to 10.100.60.138:27117
2019-03-18T14:57:14.852+0100 I NETWORK [listener] connection accepted from 10.100.xxx.xxx:53844 #15 (11 connections now open)
2019-03-18T14:57:14.852+0100 I NETWORK [conn15] received client metadata from 10.100.xxx.xxx:53844 conn15: { driver: { name: "MongoDB Internal Client", version: "4.0.4" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
The common format for Mongo log lines is:
<timestamp> <severity> <component> [<context>] <message>
What you mean by warning, error, info, etc. is already there by default: it is the severity field.
The severity codes are:
Level  Description
F      Fatal - The database error has caused the database to no longer be accessible.
E      Error - Database errors which will stop DB execution.
W      Warning - Database messages which explain potentially harmful behaviour of the DB.
I      Informational - Messages just for information purposes, like 'A new connection accepted'.
D      Debug - Mostly useful for debugging DB errors.
You can read more about decoding the log lines here or here.
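If you want to pick those severities out of the log programmatically, here is a small sketch (the regular expression is only an approximation of the <timestamp> <severity> <component> [<context>] <message> format shown above, not an official parser):

// Approximate parser for the plain-text log format shown above:
// <timestamp> <severity> <component> [<context>] <message>
interface MongoLogLine {
  timestamp: string;
  severity: string;   // F, E, W, I or D
  component: string;  // e.g. REPL_HB, NETWORK, ASIO
  context: string;    // e.g. replexec-0, listener, conn15
  message: string;
}

function parseMongoLogLine(line: string): MongoLogLine | null {
  const match = line.match(/^(\S+)\s+([FEWID])\s+(\S+)\s+\[([^\]]+)\]\s+(.*)$/);
  if (!match) return null;
  const [, timestamp, severity, component, context, message] = match;
  return { timestamp, severity, component, context, message };
}

// Example: flag anything at warning level or worse.
const parsed = parseMongoLogLine(
  "2019-03-18T14:57:06.683+0100 I REPL_HB [replexec-0] Error in heartbeat (requestId: 3) ..."
);
if (parsed && ["F", "E", "W"].includes(parsed.severity)) {
  console.log(`${parsed.severity} from ${parsed.component}: ${parsed.message}`);
}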

Why is Swift Kitura Server Not Terminating Some Threads?

I am having a somewhat reproducible problem on a Swift server I'm running. This is a multi-threaded server, using Kitura. The basics are: After the server has been running for a period of time, download requests start needing retries from the client (usually three retries). The attempts from the client result in the server thread not terminating. On the server, the download problem shows up like this in the log:
[INFO] REQUEST /DownloadFile: ABOUT TO END ...
And then the request never terminates.
The relevant fragment code in my server looks like this:
// <snip>
Log.info(message: "REQUEST \(request.urlURL.path): ABOUT TO END ...")
do {
    try self.response.end()
    Log.info(message: "REQUEST \(request.urlURL.path): STATUS CODE: \(response.statusCode)")
} catch (let error) {
    Log.error(message: "Failed on `end` in failWithError: \(error.localizedDescription); HTTP status code: \(response.statusCode)")
}
Log.info(message: "REQUEST \(request.urlURL.path): COMPLETED")
// <snip>
That is, the server clearly seems to hang on the call to end (a Kitura method). See also https://github.com/crspybits/SyncServerII/blob/master/Sources/Server/Setup/RequestHandler.swift#L105
Immediately before this issue came up last time, I observed the following in my server log:
[2017-07-12T15:31:23.302Z] [ERROR] [HTTPServer.swift:194 listen(listenSocket:socketManager:)] Error accepting client connection: Error code: 5(0x5), ERROR: SSL_accept, code: 5, reason: DH lib
[2017-07-12T15:31:23.604Z] [ERROR] [HTTPServer.swift:194 listen(listenSocket:socketManager:)] Error accepting client connection: Error code: 1(0x1), ERROR: SSL_accept, code: 1, reason: Could not determine error reason.
[2017-07-12T15:31:23.995Z] [ERROR] [HTTPServer.swift:194 listen(listenSocket:socketManager:)] Error accepting client connection: Error code: 1(0x1), ERROR: SSL_accept, code: 1, reason: Could not determine error reason.
[2017-07-12T15:40:32.941Z] [ERROR] [HTTPServer.swift:194 listen(listenSocket:socketManager:)] Error accepting client connection: Error code: 1(0x1), ERROR: SSL_accept, code: 1, reason: Could not determine error reason.
[2017-07-12T15:42:43.000Z] [VERBOSE] [HTTPServerRequest.swift:215 parsingCompleted()] HTTP request from=139.162.78.135; proto=https;
[INFO] REQUEST RECEIVED: /
[2017-07-12T16:32:38.479Z] [ERROR] [HTTPServer.swift:194 listen(listenSocket:socketManager:)] Error accepting client connection: Error code: 1(0x1), ERROR: SSL_accept, code: 1, reason: Could not determine error reason.
I am not sure where this is coming from, in the sense that I'm not sure whether one of my clients is generating it. I do not explicitly make requests to my server with "/". (I do occasionally see requests made to my server from clients that are not mine; it is possible this is one of those.) Note that all except one of these log messages come from Kitura, not directly from my code. My log message is [INFO] REQUEST RECEIVED: /.
If I were a betting man, I'd say the above errors put my server into a state where, afterwards, I see this download/retry behavior.
My only solution at this point is to restart the server, after which the issue doesn't immediately recur.
Thoughts?
I'm not sure if this addresses the root problem or if it's just a workaround, but it appears to be working. With the server as stated in the question, I had been using Kitura's built-in SSL support. I have now switched to using NGINX as a front end and no longer use Kitura's built-in SSL support; NGINX takes care of all the HTTPS/SSL details. Since doing this (about a month ago), with the server running for all of the intervening time, I have not experienced the non-terminating issue reported in this question. See also https://github.com/crspybits/SyncServerII/issues/28
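For reference, the front-end arrangement described above looks roughly like this in NGINX (server name, port and certificate paths are placeholders; Kitura is assumed to listen on plain HTTP on a local port):

# Sketch of an NGINX TLS-terminating reverse proxy in front of Kitura.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        # Kitura now serves plain HTTP locally; NGINX handles all HTTPS/SSL details.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}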

OrientDB - REST API - You have reached maximum pool size for given partition

I used Apache JMeter to call the OrientDB REST API to test the server's workload.
I tested with 50 concurrent users and saw that roughly 30%-45% of requests failed with the response message below:
{
  "errors": [
    {
      "code": 400,
      "reason": "Bad request",
      "content": "Error on parsing parameters from request body\u000d\u000a\u0009DB name=\"data_dev\"\u000aYou have reached maximum pool size for given partition"
    }
  ]
}
I have checked and found no errors occurring on the server.
I have tried changing script.pool.maxSize to 200 and db.pool.max to 200, but the issue still occurs.
Any suggestions?
UPDATE
This issue has already been reported on GitHub here.
Thanks.
Try to increase the maximum number of instances in the pool of script engines:
script.pool.maxSize
Ref: OrientDB documentation.
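For example (a sketch; the exact layout can differ between OrientDB versions), the setting can be raised in the properties section of orientdb-server-config.xml, which is where global configuration values like this one are typically set:

<!-- orientdb-server-config.xml (sketch): raise the script-engine pool size -->
<properties>
    <entry name="script.pool.maxSize" value="200"/>
</properties>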

Cannot restore backup to target instance - replicated setup, target instance non-replicated setup

When trying to restore a backup to a new Cloud SQL instance, I get the following message when using curl:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "invalidOperation",
        "message": "This operation isn\"t valid for this instance."
      }
    ],
    "code": 400,
    "message": "This operation isn\"t valid for this instance."
  }
}
When trying via the Google Cloud console, nothing happens after clicking 'OK' in the 'Restore instance from backup' menu.
I'll answer even though this is a very old question; it may be useful for someone else (it would have been for me).
I just had the exact same error. My problem was that the storage capacity of the target instance was different from that of the source instance. My source instance had been accidentally deleted, so this was a bit troublesome to figure out. This checklist helped me: https://cloud.google.com/sql/docs/postgres/backup-recovery/restore#tips-restore-different-instance
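One quick way to check for that mismatch (a sketch using gcloud; the instance names are placeholders) is to compare the configured disk size of the source and target instances before restoring:

# Compare the storage capacity of the source and target instances.
gcloud sql instances describe source-instance --format="value(settings.dataDiskSizeGb)"
gcloud sql instances describe target-instance --format="value(settings.dataDiskSizeGb)"

The values should match; the linked checklist covers the other settings worth verifying as well.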