JCL error in IBM Master the Mainframe contest 2020 JCL1 problem

//setvar secret=(NEW,CATLG, DELETE)
This line is giving me the error "unexpected left punctuation used".
What should I change?

Part of the point of the Master the Mainframe contest is for you to learn how to find these answers for yourself; having someone just tell you the answer doesn't really teach you how to debug your JCL. It is way beyond the scope of an answer on this site to explain the syntax of JCL - IBM's JCL Reference will give you the syntax. That said, your statement is syntactically invalid; at a guess, it should be:
//SETVAR SET SECRET=(NEW,CATLG,DELETE)
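If the exercise's intent is to carry a disposition value in a symbol (an assumption on my part, the thread doesn't say), note that SET values containing special characters such as parentheses usually need enclosing apostrophes. A rough sketch, with the step and data set name invented for illustration:
//* Assign the disposition to a symbol (apostrophes needed for the parens)
//SETVAR  SET SECRET='(NEW,CATLG,DELETE)'
//* Use the symbol on a DD statement; &SECRET is substituted at run time
//STEP1   EXEC PGM=IEFBR14
//NEWFILE DD DSN=&SYSUID..CONTEST.DATA,DISP=&SECRET,
//           SPACE=(TRK,(1,1)),UNIT=SYSDA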
The thing to do is look up the error message you received. IBM error messages start with a code such as IEF452I followed by a message. Often these messages will tell you exactly what went wrong, but not why, which can be frustrating. In this example, the documentation tells you there was an error, but not what the error is. That's likely in another message.
Again, one point of the Master The Mainframe contest is to give you practice in looking these things up so you can solve problems on your own.

Related

Catching errors in a Perl REST API

I am coding a REST API using Perl/Mojolicious.
Sometimes when I want to throw an error, for example "Invalid token", I store the error in a variable called "Object->lastError" and then I render the JSON response with the error message/code.
However, it gets tedious to do it that way after a while. I was wondering if there is a better way to do this; I was considering just dying and catching the die error with $SIG{__DIE__}.
Any suggestions?
Also, I am not using any logger yet, but I would like to log those errors.
On the question of logging, see: http://search.cpan.org/~garu/MojoX-Log-Log4perl-0.10/lib/MojoX/Log/Log4perl.pm Log4perl is pretty much best practice in the wider Perl world.
Without knowing a lot of deep detail about the application, I far prefer your 'tedious' method, which will [hopefully] provide some information to the receiving side of the API, rather than a crash and burn with $SIG{__DIE__}.
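That said, the tedium can be reduced without resorting to $SIG{__DIE__}. A minimal sketch using a Mojolicious helper (the helper name, route, and JSON shape are my assumptions, not part of the original question):
use Mojolicious::Lite;

# Wrap "log it, then render the JSON error" in one reusable helper.
helper 'reply.error' => sub {
    my ($c, $status, $message) = @_;
    $c->app->log->error("$status: $message");   # swap in MojoX::Log::Log4perl here
    $c->render(json => { error => $message }, status => $status);
};

get '/secure' => sub {
    my $c = shift;
    return $c->reply->error(401, 'Invalid token')
        unless defined $c->param('token');
    $c->render(json => { ok => \1 });           # \1 encodes as JSON true
};

app->start;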
Hope that helps, a bit, anyway!

Laravel Mail Queue Infinite Loop on Exception

Hello fellow programmers, I wish everyone a good morning.
The Situation
Laravel is great. Laravel mail queues and the beanstalkd integration are great. It took me almost no time to get everything working. The sun is shining and it's not raining. It's awesome.
Except when an exception is thrown while sending an email. Then this mail is processed again and again and again, and the exception is also thrown again and again and again.
Infinite loop.
I think I wouldn't even have noticed this if I hadn't seeded the database with invalid data. Validation usually would have taken care of that, so that emails like 361FlorindaMatthäi#gmail.com don't end up with the following exception:
[Swift_RfcComplianceException]
Address in mailbox given [361FlorindaMatthäi#gmail.com] does not
comply with RFC 2822, 3.6.2.
But what validation wouldn't have taken care of is, for example, my Mandrill account reaching its limits or my server losing its internet connection - whatever. Any exception sends it into an infinite loop.
In the world where the sun is shining and everything is great, the job has to be marked as buried or suspended and the next email should be processed. An infinite loop with an invalid email address is not great.
Basically, your application doesn't send out any emails anymore. This guy has roughly the same issue.
How can I fix this? Has anyone else encountered this error?
Any help is much appreciated.
You just need to tell Laravel how many times to try a specific job before deciding it has failed:
php artisan queue:daemon --tries=3
This way, it will stop processing that specific job after 3 tries.
The hard part of any queue-based system is dealing with the errors. I've run tens of millions of jobs through BeanstalkD and many more through other systems like SQS.
With this Swift_RfcComplianceException exception it's clear that the job will never be able to succeed, so trying it again would be futile.
Some other problems might be recoverable, but in either event you have to wrap the code in a try/catch block and do what you can.
Since there is no way to 'fix' this particular issue, I would record what happened (the name of the exception, any message, and the data) to a log to check on, and then delete or bury the job. If you store the job-id in the log when the job is buried, you can go back and delete or kick that particular job again later - after changing what happens to the job (rather than having it fail again).
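As a rough sketch of that advice (the job class, view name, and log fields are my assumptions; adjust to your queue driver and Laravel version):
class SendEmailJob {

    public function fire($job, $data)
    {
        try {
            Mail::send('emails.welcome', $data, function ($message) use ($data) {
                $message->to($data['email']);
            });
            $job->delete();                  // success: remove the job from the queue
        } catch (Swift_RfcComplianceException $e) {
            // Permanent failure: no number of retries will fix a bad address.
            Log::error('Unsendable address', array(
                'exception' => get_class($e),
                'message'   => $e->getMessage(),
                'data'      => $data,
            ));
            $job->delete();                  // or bury() on the beanstalkd driver
        } catch (Exception $e) {
            // Possibly transient (provider limits, network): retry a few times.
            if ($job->attempts() > 3) {
                Log::error('Giving up on mail job', array('message' => $e->getMessage()));
                $job->delete();
            } else {
                $job->release(60);           // put it back, retry in 60 seconds
            }
        }
    }
}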

Is there a way to include an error description string in a crash report?

Is there a way to include an error description string in a crash report? I'm talking about crash reports that are sent via iTunes Connect. It would be nice if I could log the reason for a crash.
This question asks about including console output, but I'd be happy just to be able to include an error description string, without resorting to a 3rd-party library.
EDIT:
There are errors I can detect where it's unwise or impossible to attempt to recover, because the program is probably in a corrupt state. I'd rather not catch these exceptions so that I get automatic error reporting with a stack trace. Sometimes, there is extra information that I'd like to log (that cannot be deduced from the stack trace), such as the current state of a state machine.
How about a third-party crash reporter? I've integrated QuincyKit, and even found a way to annotate the reports.
Also, when the app actively begs for the crash report to be sent to support, you get many more of them. The things you learn...
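To give a flavor of the annotation idea, here's a hedged sketch of a side-channel approach: write extra state to a file from an uncaught-exception handler so it can be attached to the next report. The file name and the CurrentStateDescription() helper are invented for illustration; QuincyKit's own hooks differ:
#import <Foundation/Foundation.h>

NSString *CurrentStateDescription(void);   // hypothetical app-specific function

static void AnnotatingExceptionHandler(NSException *exception) {
    NSString *info = [NSString stringWithFormat:@"%@: %@\nstate: %@",
                      exception.name, exception.reason, CurrentStateDescription()];
    NSString *path = [NSTemporaryDirectory()
                      stringByAppendingPathComponent:@"last_crash_info.txt"];
    [info writeToFile:path atomically:YES
             encoding:NSUTF8StringEncoding error:NULL];
    // The app still terminates normally, so the stock crash report is
    // unaffected; the saved file is picked up on the next launch.
}

// Installed once at startup, e.g. in application:didFinishLaunchingWithOptions:
// NSSetUncaughtExceptionHandler(&AnnotatingExceptionHandler);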
I've searched and searched. There seems to be no way to annotate a pending crash report. If someone comes up with an answer, I'll change the accepted answer.

Tips for finding things in your program that are broken that you don't know about?

I was working on something for a client today when I found a way to break some functionality in our program.
(The code is really legacy code, it's been in development for about 10 years and I've only been working here for about a year.)
It didn't cause an error, or cause the program to crash, but if a user was using the program and duplicated the behavior I'm pretty sure they'd be holding up their "WTF?" flag.
In our program we have named fields (textboxes) and static text (labels) that can be linked with the textboxes. When a textbox is not filled in, the label(s) linked to it disappear.
The functionality that I broke: if you rename a textbox that already has one or more labels linked to it, and save the file without re-associating those labels, the formerly-associated labels appear when the textbox is blank.
Now my thinking on the matter is that a simple observer pattern could have solved this problem in the first place, but then I didn't write the code.
I was thinking that if I could dig up more situations like this with the guys in my shop, that maybe I could talk them into considering unit testing, decoupling, applying patterns where they are called for and the like.
So, for this reason, I was wondering if anyone had any tips for finding broken (but not error-causing) functionality in any sort of app (web-based, desktop, etc.)?
For an app to fail usability, it has to have a defined set of expected behaviors.
"Is this textbox SUPPOSED to do nothing when the enter key is pressed?" Maybe it is, maybe it isn't. I've seen apps where a tester/reviewer reports something that they ASSUME should work another way, when in actuality the client specifically asked that they DON'T want the form submitted on a return key press, but only a submit button click.
So basically you have to define proper behaviour before you can determine incorrect behavior.
Hire some testers.
If it has an interface, then one of my favorite unconventional tests is putting 5-10 year old children in front of it. You'd be surprised what they can come up with (especially the younger ones). While this may sound like a joke, it isn't -- it really works, because children don't have the mindset of only going down the expected paths.
And yeah, children are the experts in "breaking things" xP.
Code inspections, i.e. reading the source code: if you had taken time to read/inspect the source code, looking for "smells" or even just looking for code whose behaviour you don't immediately understand and agree with, you might have been holding up your "WTF?" flag too.
Test, test, test.
Do unexpected things. Start doing one task and switch to another to see if anything goes haywire. Use the back button when you're not supposed to. Open it in two windows. Let it time out.
Test in all browsers, especially IE.
You can find out whether database connections/sessions aren't released by:
working out the minimum number of connections you need to do something
setting resource limits to that minimum number
ensuring one "run" of the scenario that should use exactly that number (and release it afterwards)
then run it again a few times... do you run out of connections?
I used to work in a company where programmers regularly forgot to de-allocate db connections. The standard answer was to reduce the resource to a minimum to see if there's a leak - and to try to work out where it is by restarting the system and running different scenarios repeatedly.
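A minimal sketch of that procedure in code, using SQLAlchemy's pool settings as an example stack (the answer itself is stack-agnostic, and the connection URL is a placeholder):
from sqlalchemy import create_engine, text

# Allow exactly one pooled connection and forbid overflow: any code path
# that forgets to release its connection now fails fast with a timeout.
engine = create_engine(
    "postgresql://user:pass@localhost/app",   # placeholder URL
    pool_size=1, max_overflow=0, pool_timeout=5,
)

def run_scenario():
    with engine.connect() as conn:   # the context manager releases the connection
        conn.execute(text("SELECT 1"))

for i in range(20):                  # a leak surfaces as a TimeoutError here
    run_scenario()
    print("run %d ok" % i)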
The first hour of code review, with the first reviewer, will do the most to find quality problems. But here's the thing: You don't need to convince people of quality problems. You need to convince them of the value of fixing bugs, and of rewriting only when the present quality absolutely justifies it.
I've dealt with some seriously bad code in my time. But you can't just rewrite. You need a spec before you can even tell if the rewrite is an improvement.
Sometimes, you have to infer the spec from the code and then check it against some human somewhere. But by the time you've done that, you understand the code as written and are now better prepared to repair than to rewrite -- most of the time.
Repair proceeds by a process of small behavior-preserving modifications that render the spec more clear in the code. Then, when you find something that looks wrong, you don't just change it. You ask around until you find the person responsible for that decision, and you get them to show you where in the spec it says that behavior X is correct. (This conversation can take many forms.) If you're lucky, they'll tell you that behavior X is in fact incorrect, and then you've earned your pay.
assert()
Also unit testing with coverage analysis.
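For instance (a generic C illustration of the assert() suggestion, using names from the question's textbox/label scenario rather than the actual codebase):
#include <assert.h>
#include <stddef.h>

typedef struct {
    const char *name;    /* field name the labels are linked against */
    const char *text;    /* current contents; NULL when not filled in */
} TextBox;

void render_linked_labels(const TextBox *box) {
    assert(box != NULL);         /* make "can't happen" inputs crash loudly */
    assert(box->name != NULL);   /* a renamed box must still have a name    */
    /* ... look up and draw the labels linked to box->name ... */
}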
This is particular to the Visual Studio IDE, although it probably also applies to others:
During testing, always at some point run in the debugger with "Break when an exception is thrown" turned on.
This can often help expose exceptions which are incorrectly being silently caught and which represent bugs, but otherwise may not be evident.
Code reviews should always also include reviews of the unit test code.
The problem is that with ad-hoc testing it's impossible to know how much or how well a developer has tested their code. So, you're at the mercy of different developers' definitions of the word "done".
If you include reviews of the unit test code at the same time you review the production code, you should have a good idea of whether the code is really complete, where "complete" includes "tested". Not just "Hey, I'll throw it over the wall to the testers!".

Alternative to NSSetUncaughtExceptionHandler on iPhone

I'm trying to make a general error handler for an iPhone app that brings the user to a recovery screen whenever any general error is thrown in the application, without putting a try/catch block around every single method in the application.
Using NSSetUncaughtExceptionHandler doesn't work because the application terminates after the handler is run.
Is there any way to change this behavior, or use any other handler that will catch exceptions in general and not cause the application to exit afterward?
And please, no non-answers about whether it's a good or bad idea.
The original poster has probably solved his problem by now. However, for anyone who comes across this in the future...
Matt Gallagher wrote an excellent post on catching unhandled exceptions and signals a few months after this question was posted. I find it to be much more informative than the answer referenced above by Scott.
In particular, Matt's post describes how to attempt a recovery (if appropriate) that allows your app to keep running, and even displays a UIAlertView with error information if you want (hint: it involves creating a new run loop).
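In outline, the technique looks something like this (a condensed sketch in the spirit of Matt's post, not his exact code; the alert wording and delegate class are illustrative):
#import <UIKit/UIKit.h>

@interface RecoveryAlertDelegate : NSObject <UIAlertViewDelegate>
@property (nonatomic) BOOL dismissed;
@end

@implementation RecoveryAlertDelegate
- (void)alertView:(UIAlertView *)alertView didDismissWithButtonIndex:(NSInteger)buttonIndex {
    self.dismissed = YES;
}
@end

static void RecoveryExceptionHandler(NSException *exception) {
    RecoveryAlertDelegate *delegate = [[RecoveryAlertDelegate alloc] init];
    UIAlertView *alert = [[UIAlertView alloc]
          initWithTitle:@"Unexpected Error"
                message:exception.reason
               delegate:delegate
      cancelButtonTitle:@"Continue"
      otherButtonTitles:nil];
    [alert show];

    // Spin the run loop by hand so the process survives the exception
    // long enough for the user to see the alert and the app to recover.
    while (!delegate.dismissed) {
        [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode
                                 beforeDate:[NSDate distantFuture]];
    }
}

// Installed once at startup:
// NSSetUncaughtExceptionHandler(&RecoveryExceptionHandler);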
This was answered here. You can read more about the responder chain and catching exceptions here. The write-up from 1 is really good and explains how to deal with what you are doing.