Is it possible to run more than one command at the same time on Azure Data Explorer without increasing the Instance Count? - export-to-csv

I have a scenario in which I need to export multiple CSVs from Azure Data Explorer; for that I am using the .export command. When I try to run this request multiple times, I get the following error:
TooManyRequests (429-TooManyRequests): {
  "error": {
    "code": "Too many requests",
    "message": "Request is denied due to throttling.",
    "#type": "Kusto.DataNode.Exceptions.ControlCommandThrottledException",
    "#message": "The control command was aborted due to throttling. Retrying after some backoff might succeed. CommandType: 'DataExportToFile'"
Is there a way I can handle this without increasing the instance count?

You can alter the capacity policy: https://learn.microsoft.com/en-us/azure/data-explorer/kusto/management/capacitypolicy
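Separately, the error text itself notes that "Retrying after some backoff might succeed", so the client side can also serialize or retry the exports. A minimal sketch in TypeScript, assuming a hypothetical runExport callback that issues the .export command through whatever client you already use (attempt count and delay are arbitrary assumptions):

```typescript
// Minimal sketch: retry a throttled call with exponential backoff.
// `runExport` is a placeholder for whatever issues the .export command;
// the attempt count and base delay below are arbitrary assumptions.
async function withBackoff<T>(
  runExport: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 2000,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await runExport();
    } catch (err: any) {
      const throttled =
        err?.code === "Too many requests" || /429|throttl/i.test(err?.message ?? "");
      if (!throttled || attempt >= maxAttempts) throw err;
      // Back off exponentially before retrying the export.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```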

Related

Pulumi times out importing GCP record set

I have some DNS records I'm trying to import with Pulumi, and they fail somewhat bluntly with this error:
Diagnostics:
gcp:dns:RecordSet (root/my.domain./NS):
error: Preview failed: refreshing urn:pulumi:root::root::gcp:dns/recordSet:RecordSet::root/my.domain./NS: Error when reading or editing DNS Record Set "my.domain.": Get "https://www.googleapis.com/dns/v1beta2/projects/root-280012/managedZones/root/rrsets?alt=json&name=my.domain.&prettyPrint=false&type=NS": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
pulumi:pulumi:Stack (root-root):
error: preview failed
I'm just getting started with Pulumi, so I have no real sense of whether this is a GCP-specific problem or something more general with Pulumi; apologies if this is in the wrong place.
Is this just a case of increasing a timeout limit? Is this a problem with the CLI? Why would this particular request time out? (It times out on every attempt.)
Appreciate any advice!
I solved this by using customTimeouts: https://www.pulumi.com/docs/intro/concepts/programming-model/#customtimeouts
When creating your resource, you can pass an options object as the last parameter. In that, you can add your customTimeouts configuration, e.g. (for TypeScript):
{
  customTimeouts: {
    create: '5m',
    delete: '5m',
    update: '5m'
  }
}
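For context, here is roughly how that looks on a full resource. This is only a sketch: the managedZone, name, ttl, and rrdatas values are placeholders rather than anything from the question.

```typescript
import * as gcp from "@pulumi/gcp";

// Sketch: pass customTimeouts in the resource options (the last argument).
// The managedZone/name/ttl/rrdatas values here are placeholders.
const ns = new gcp.dns.RecordSet("root-ns", {
    managedZone: "root",
    name: "my.domain.",
    type: "NS",
    ttl: 300,
    rrdatas: ["ns-cloud-a1.googledomains.com."],
}, {
    customTimeouts: {
        create: "5m",
        update: "5m",
        delete: "5m",
    },
});
```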

How to create/start a cluster from a Databricks Web Activity by invoking the Databricks REST API

I have 2 requirements:
1: I have a cluster ID. I need to start the cluster from a Web Activity in ADF. The activity parameters look like this:
url:https://XXXX..azuredatabricks.net/api/2.0/clusters/start
body: {"cluster_id":"0311-004310-cars577"}
Authentication: Azure Key Vault Client Certificate
Upon running this activity, I encounter the error below:
"errorCode": "2108",
"message": "Error calling the endpoint
'https://xxxxx.azuredatabricks.net/api/2.0/clusters/start'. Response status code: ''. More
details:Exception message: 'Cannot find the requested object.\r\n'.\r\nNo response from the
endpoint. Possible causes: network connectivity, DNS failure, server certificate validation or
timeout.",
"failureType": "UserError",
"target": "GetADBToken",
"GetADBToken" is my activity name.
The above security mechanism works for other Databricks-related activities, such as running a jar that is already installed on my Databricks cluster.
2: I want to create a new cluster with the below settings:
url:https://XXXX..azuredatabricks.net/api/2.0/clusters/create
body: {
  "cluster_name": "my-cluster",
  "spark_version": "5.3.x-scala2.11",
  "node_type_id": "i3.xlarge",
  "spark_conf": {
    "spark.speculation": true
  },
  "num_workers": 2
}
Upon calling this API, if the cluster creation is successful, I would like to capture the cluster ID in the next activity.
So what would be the output of the above activity, and how can I access it in the immediately following ADF activity?
For #2: can you please check whether changing the version
"spark_version": "5.3.x-scala2.11"
to
"spark_version": "6.4.x-scala2.11"
helps?
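On capturing the cluster ID: the clusters/create call returns a JSON body that includes cluster_id, so that is what the Web Activity's output will contain (and what a subsequent activity would typically read from that output). As a rough illustration of the same call outside ADF, here is a TypeScript sketch; the workspace URL, token, and Node 18+ global fetch are assumptions:

```typescript
// Sketch: call the Databricks Clusters API directly and read cluster_id
// from the response. Workspace URL and token below are placeholders;
// assumes Node 18+ so that fetch is available globally.
const workspaceUrl = "https://XXXX.azuredatabricks.net";
const token = process.env.DATABRICKS_TOKEN;

async function createCluster(): Promise<string> {
  const res = await fetch(`${workspaceUrl}/api/2.0/clusters/create`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      cluster_name: "my-cluster",
      spark_version: "6.4.x-scala2.11",
      node_type_id: "i3.xlarge",
      spark_conf: { "spark.speculation": true },
      num_workers: 2,
    }),
  });
  const data = await res.json();
  // The create response contains the new cluster's id.
  return data.cluster_id;
}
```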

OrientDB - REST API - You have reached maximum pool size for given partition

I used Apache JMeter to call the OrientDB REST API to load-test the server.
I tested with 50 concurrent users and saw that roughly 30%-45% of requests failed with the response message below:
{
  "errors": [
    {
      "code": 400,
      "reason": "Bad request",
      "content": "Error on parsing parameters from request body\u000d\u000a\u0009DB name=\"data_dev\"\u000aYou have reached maximum pool size for given partition"
    }
  ]
}
I checked and found no errors on the server.
I tried changing
script.pool.maxSize to 200 and db.pool.max to 200
but the issue still occurred.
Any suggestions?
UPDATED
This issue has already been reported on GitHub here.
Thanks.
Try increasing the maximum number of instances in the pool of script engines:
script.pool.maxSize
Ref: OrientDB documentation.

Cannot restore backup to target instance - replicated setup, target instance non-replicated setup

When trying to restore a backup to a new Cloud SQL instance, I get the following message when using curl:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "invalidOperation",
        "message": "This operation isn't valid for this instance."
      }
    ],
    "code": 400,
    "message": "This operation isn't valid for this instance."
  }
}
When trying via the Google Cloud console, nothing happens after clicking 'OK' in the 'restore instance from backup' menu.
I'll answer even though this is a very old question; it may be useful for someone else (it would have been for me).
I just had the exact same error. My problem was that the storage capacity of the target instance was different from that of the source instance. My source instance had been accidentally deleted, so this was a bit troublesome to figure out. This checklist helped me: https://cloud.google.com/sql/docs/postgres/backup-recovery/restore#tips-restore-different-instance

Why does MongoDB send error 500 on a duplicate key?

When I receive the answer from MongoDB, I know that my error is a duplicate key, but why is the status 500? It should be 4xx.
I'm using Node.js (Sails/Express.js).
{ "error": {
"error": "E_UNKNOWN",
"status": 500,
"summary": "Encountered an unexpected error",
"raw": {
"name": "MongoError",
"code": 11000,
"err": "E11000 duplicate key error index: eReporterDB.users.$name_1 dup key: { : \"codin\" }"
} } }
The answer for Node.js is here:
Operational errors vs. programmer errors
It's helpful to divide all errors into two broad categories:
Operational errors represent run-time problems experienced by correctly-written programs. These are not bugs in the program. In fact, these are usually problems with something else: the system itself (e.g., out of memory or too many open files), the system's configuration (e.g., no route to a remote host), the network (e.g., socket hang-up), or a remote service (e.g., a 500 error, failure to connect, or the like). Examples include:
failed to connect to server
failed to resolve hostname
**invalid user input**
request timeout
server returned a 500 response
socket hang-up
system is out of memory
Programmer errors are bugs in the program. These are things that can always be avoided by changing the code. They can never be handled properly (since by definition the code in question is broken). Examples include:
tried to read property of "undefined"
called an asynchronous function without a callback
passed a "string" where an object was expected
passed an object where an IP address string was expected
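By that classification, a duplicate key caused by client input is an operational error, so the application layer can map the driver's duplicate-key code (11000) to a 4xx response instead of letting it surface as a 500. Below is a minimal sketch using plain Express and the official MongoDB driver rather than Sails/Waterline; the route, database name, and port are assumptions:

```typescript
import express from "express";
import { MongoClient } from "mongodb";

const app = express();
app.use(express.json());

// Placeholder connection string, database, and collection names.
const client = new MongoClient("mongodb://localhost:27017");
const users = client.db("eReporterDB").collection("users");

app.post("/users", async (req, res) => {
  try {
    await users.insertOne(req.body);
    res.status(201).json({ ok: true });
  } catch (err: any) {
    if (err.code === 11000) {
      // Duplicate key on a unique index: an operational error caused by the
      // client's input, so answer with a 4xx instead of a generic 500.
      return res.status(409).json({ error: "duplicate key", detail: err.message });
    }
    // Anything unexpected still surfaces as a 500.
    res.status(500).json({ error: "unexpected error" });
  }
});

client.connect().then(() => app.listen(3000));
```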