I want to get some statistics about the job I'm running on my pool, and for that I am trying to use the JobStatistics class, but job.Statistics has been null in most of my runs, except for a few where the result was magically not null. I read in the documentation (https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.batch.cloudjob.statistics?view=azurebatch-6.1.0#Microsoft_Azure_Batch_CloudJob_Statistics) that for the statistics not to be null, I need to use an expand clause with DetailLevel, but each time I do, I get the error: "Operation returned an invalid status code 'BadRequest'". This is what I have for that:
ODATADetailLevel detailExJob = new ODATADetailLevel();
detailExJob.SelectClause = "id,executionInfo,stats";
detailExJob.ExpandClause = "id,executionInfo,stats";
await job.RefreshAsync(detailExJob);
What am I missing here? How can I get job.Statistics not to be null?
Thanks!
I'll try to answer your question, but it looks like you have two separate issues.
Job lifetime statistics may not be immediately available. The Batch service
performs periodic roll-up of statistics. I believe the typical delay is about 30 minutes, but this is not documented.
The expand clause currently only supports stats. If you change your detailExJob.ExpandClause assignment to just "stats", your job query should work. Moreover, you can simplify your detail level object and omit the expand clause altogether, since you already specified stats in the select clause.
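For reference, a minimal sketch of the corrected detail level, assuming the same job object and Batch .NET SDK as in the question (WallClockTime is just one JobStatistics property used here for illustration):
// Minimal sketch: "stats" is the only value the expand clause accepts here,
// and per the above it can be omitted entirely because "stats" is already selected.
ODATADetailLevel detailExJob = new ODATADetailLevel();
detailExJob.SelectClause = "id,executionInfo,stats";
detailExJob.ExpandClause = "stats";
await job.RefreshAsync(detailExJob);

// Statistics can still be null until the service's periodic roll-up has run.
if (job.Statistics != null)
{
    Console.WriteLine(job.Statistics.WallClockTime);
}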
I use DataGrip as a client to connect to Redshift, and I encountered a strange issue which exhausted my whole day.
When I run my SQL query, DataGrip complains:
[XX000] ERROR: invalid string enlargement request size 1073741823
There doesn't seem to be a place where I can check a more detailed error log. When I googled this error, I found very few similar questions, and they suggested it may be due to a field that is too long, exceeding the maximum length Redshift can accept. But that is not the case for me: I don't have any long fields. So I commented out all of my SQL statements and re-added them incrementally to locate the problematic statement.
Finally, I found the statement that triggers the error message, shown below:
(
case when trunc(request_date_skip_weekend_tmp) = to_date('2022-03-21', 'YYYY-MM-DD')
then dateadd(day, 1, trunc(request_date_skip_weekend_tmp))
else request_date_skip_weekend_tmp end
)
request_date_skip_weekend,
After I changed it to:
dateadd(day, 1, trunc(request_date_skip_weekend_tmp)) request_date_skip_weekend,
the error disappears. It is very hard for me to accept the relationship between the error message and this SQL change; I don't know why the former statement triggers the error.
I would appreciate it if you could spot why the former expression errors, or share some knowledge about where I can fetch a more detailed error message to find out what happened.
Your code snippet deals with dates and timestamps, but the error is about strings. So it is likely you have identified a "trigger" and not the root cause. Also, since you report that the SQL is very long, you could be dealing with compiler optimization changes that move the failure around. Removing a CASE can cause the compiler/optimizer to choose different structures for the query.
One experiment to try is to change the to_date() to an explicit cast to timestamp, so there are no implicit casts ('2022-03-21'::timestamp). This is unlikely to be the cause, but it may help.
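For example, a standalone sketch of that rewrite (the table name is a placeholder):
-- Sketch: replace to_date() with an explicit timestamp literal so no implicit cast is needed.
select
    case when trunc(request_date_skip_weekend_tmp) = '2022-03-21'::timestamp
         then dateadd(day, 1, trunc(request_date_skip_weekend_tmp))
         else request_date_skip_weekend_tmp
    end as request_date_skip_weekend
from my_request_table;   -- placeholder table name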
I expect you will need to post the query to get more help. How large is it? This error could be related to building a large string in the query OR could be related to the text of the query OR creating the output. This isn't a standard "string too long" message so this is something more implicit. You could post to a google doc or some other file sharing service. Just link in the question.
I primarily use CFQUERYPARAM to prevent SQL injection. Since Query-of-Queries (QoQ) does not touch the database, is there any logical reason to use CFQUERYPARAM in them? I know that values that do not match the cfsqltype and maxlength will throw an exception, but these values should already have been validated before that point so that friendly messages can be displayed (from a UX viewpoint).
Since Query-of-Queries (QoQ) does not touch the database, is there any logical reason to use CFQUERYPARAM in them? Actually, it does touch the database, the database that you currently have stored in memory. The data in that database could still theoretically be tampered with via some sort of injection from the user. Does that affect your physical database - no. Does that affect the use of the data within your application - yes.
You did not give any specific details, but I would err on the side of caution. If ANY of the data you are using to build your query comes from the client, then use cfqueryparam. If you can guarantee that none of the elements in your query come from the client, then I think it would be okay to skip cfqueryparam.
As an aside, using cfqueryparam also helps the database optimize the query, although I'm not sure if that is true for query of queries. It also escapes characters, such as apostrophes, for you.
Here is a situation where it's simpler, in my opinion.
<cfquery name="NoVisit" dbtype="query">
select chart_no, patient_name, treatment_date, pr, BillingCompareField
from BillingData
where BillingCompareField not in
(<cfqueryparam cfsqltype="cf_sql_varchar"
value="#ValueList(FinalData.FinalCompareField)#" list="yes">)
</cfquery>
The alternative would be to use QuotedValueList. However, if anything in that value list contains an apostrophe, cfqueryparam will escape it; otherwise I would have to do it myself.
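For comparison, a sketch of what the QuotedValueList version might look like (the query name NoVisitAlt is made up, and PreserveSingleQuotes is used here so ColdFusion does not re-escape the quotes the function adds); embedded apostrophes would still need manual escaping:
<cfquery name="NoVisitAlt" dbtype="query">
    <!--- Sketch only: builds the IN list as quoted text instead of using cfqueryparam --->
    select chart_no, patient_name, treatment_date, pr, BillingCompareField
    from BillingData
    where BillingCompareField not in
        (#PreserveSingleQuotes(QuotedValueList(FinalData.FinalCompareField))#)
</cfquery>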
Edit starts here
Here is another example where not using query parameters causes an error.
<cfscript>
// Build a small in-memory query with a single date column to demonstrate the issue.
x = QueryNew("dt", "date");
QueryAddRow(x, 2);
QuerySetCell(x, "dt", CreateDate(2001,1,1), 1);
QuerySetCell(x, "dt", CreateDate(2001,1,11), 2);
</cfscript>
<cfquery name="y" dbtype="query">
select * from x
<!---
where dt in (<cfqueryparam cfsqltype="cf_sql_date" value="#ValueList(x.dt)#" list="yes">)
--->
where dt in (#ValueList(x.dt)#)
</cfquery>
The code as written throws this error:
Query Of Queries runtime error.
Comparison exception while executing IN.
Unsupported Type Comparison Exception:
The IN operator does not support comparison between the following types:
Left hand side expression type = "DATE".
Right hand side expression type = "LONG".
With the query parameter, commented out above, the code executes successfully.
My application is not alerting me to a failed insert when adding a record to a MongoDB collection with a unique index...
$dm->flush()
... does not complain. I'm trying to figure out what the array parameter to flush should look like to see if that helps, but I'm getting nowhere. flush does not return anything on success or failure.
Any ideas on how I can verify, in my PHP/Symfony2 application, whether the insert worked without needing to query the db immediately after inserting?
Got it. Per this link, you must provide array("safe" => true) as a parameter to the write operation.
$dm->flush(array('safe'=>true));
So when using the code above, an exception will be thrown if the insert violates the unique index.
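For example, a rough sketch of catching that failure at flush time ($document is just whatever you persisted, and the exact exception class depends on the Mongo driver/ODM version in use):
<?php
// Sketch: with safe writes, a unique-index violation on flush() surfaces as an
// exception instead of failing silently, so no follow-up query is needed.
try {
    $dm->persist($document);
    $dm->flush(array('safe' => true));
} catch (\MongoCursorException $e) {
    // Duplicate key (E11000) or other write error reported by the server.
    // (The exception class may differ depending on driver/ODM version.)
    echo 'Insert failed: ' . $e->getMessage();
}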
I get the following error when running an SQR report on DB2:
SQL0100W - No row was found for FETCH, UPDATE or DELETE; or the result of a query is an empty table. SQLSTATE=02000
The sql in question runs correctly when I paste it into RapidSQL, replacing the parameters. The sql in question is an insert-select. No rows are returned by the select, and this is fine... I expect the report to be blank for my parameters.
Any idea how I can get around this?
DB2 always returns an SQL0100 warning (this is a warning, not an error; errors have negative values) when no rows are returned. That's the way it is.
I don't know PeopleSoft at all, so I can't give you any pointers there. Back when I was programming for DB2, we ignored those SQL0100 warnings.
If SQR can't gracefully handle a NOT_FOUND (SQL0100) return, then code a preliminary query that returns a count of the rows satisfying the conditions of the actual query. Check that count in an if-then block in SQR, and run the actual query only if the row count is not zero.
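A rough SQR-style sketch of that approach (the procedure, table, column, and bind names are placeholders, and I have not run this against a real SQR environment):
! Sketch: count the matching rows first, then run the real insert-select
! only when the count is non-zero.
begin-procedure Check-Then-Insert
let #row_count = 0

begin-select
count(*)  &cnt
  let #row_count = &cnt
from PS_SOME_TABLE
where SOME_COLUMN = $run_control_param
end-select

if #row_count > 0
  do Run-Actual-Insert-Select   ! the original insert-select paragraph goes here
end-if
end-procedure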
Turns out to be an environment setup issue. Got resolved with no change from me after a couple of builds....
Strange :-/
If you delete more than one record using a logical operation, e.g. delete from tabname where columnname = value1 and columnname = value2, then this type of error can be shown.
Here’s the scenario:
You load a page, which renders based on the data in the database (call these the “assumptions”)
Someone changes the assumptions
You submit your page
ERROR!
The general pattern to solve this problem is this (right?):
In your save proc, inside a begin/commit transaction, you validate your assumptions first. If any of them have changed, you should return a graceful error message, something like an XML list of the IDs you had problems with, which you handle in the page rather than letting it be handled by the default error-handling infrastructure.
So my question is what is the best way to do that?
Return the XML list and an error flag in out parameters, which are unset and 0 if it completes correctly?
Use an out parameter to return an error status, and have the result of the proc be either the error list or its valid results?
Something else? (I should note that RAISERROR would cause the proc to be in error, and get picked up by the default error-handling code.)
Update: I would like to return a manipulable list of IDs that failed (I plan to highlight those cells in the application). I could return them in CSV format in RAISERROR, but that seems dirty.
I agree - I like RAISERROR:
-- Validate @whatever
IF @whatever >= '5'
BEGIN
    RAISERROR ('Invalid value for @whatever - expected a value less than 5, but received %s.', 10, 1, @whatever)
    RETURN 50000
END;
Use the RAISERROR function with the appropriate severity and/or wait level. If you use a low severity, this does not necessarily cause an exception, as you contend, and with .NET at least it's pretty simple to retrieve these messages. The only downside is that with the StoredProcedure command type in .NET, messages are only pumped in groups of 50.
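For example, a sketch of picking those messages up in .NET with System.Data.SqlClient (the connection string and procedure name are placeholders):
// Sketch: RAISERROR at severity 10 or lower is delivered through the connection's
// InfoMessage event rather than thrown as a SqlException.
using System;
using System.Data;
using System.Data.SqlClient;

class InfoMessageExample
{
    static void Main()
    {
        var connectionString = "...";   // placeholder connection string
        using (var conn = new SqlConnection(connectionString))
        {
            conn.InfoMessage += (sender, e) => Console.WriteLine("Proc message: " + e.Message);
            conn.Open();

            using (var cmd = new SqlCommand("dbo.MySaveProc", conn))   // placeholder proc name
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.ExecuteNonQuery();
            }
        }
    }
}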
Stored procedures can return multiple result sets. Based on your update, you could also insert errored ids into a temporary table, and then at the end of your procedure select the records from that table as an additional result set you can look at.
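A rough sketch of that pattern, with made-up table and column names (the RowVersion comparison stands in for whatever "assumption changed" test applies in your schema):
-- Sketch: validate assumptions inside the transaction; if any changed, roll back
-- and return the offending ids as an extra result set the page can highlight.
CREATE PROCEDURE dbo.SaveWithAssumptionCheck
AS
BEGIN
    SET NOCOUNT ON;

    CREATE TABLE #FailedIds (Id int NOT NULL);

    BEGIN TRANSACTION;

    INSERT INTO #FailedIds (Id)
    SELECT a.Id
    FROM dbo.Assumptions AS a                      -- placeholder: current state
    JOIN dbo.SubmittedRows AS s ON s.Id = a.Id     -- placeholder: what the page posted
    WHERE a.RowVersion <> s.RowVersion;

    IF EXISTS (SELECT 1 FROM #FailedIds)
    BEGIN
        ROLLBACK TRANSACTION;
        SELECT Id FROM #FailedIds;                 -- the additional result set
        RETURN 1;
    END

    -- ... perform the real save here ...

    COMMIT TRANSACTION;
    RETURN 0;
END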
I would use an output parameter with a message; the return value will already be non-zero if there is an error.
Also be careful with doomed transactions and check with XACT_STATE(); see Use XACT_ABORT to roll back non trapable error transactions.