I am getting an ASRA abend while trying to read from a TSQ. Will an ASRA occur if we try to read from a TSQ that has already been deleted? What could the possible reasons be?
ASRA is sort of a catch-all error that says CICS identified a program check and ended your transaction for you. It could be anything. You can get more detail from the CICS started task and its logs, or from whatever ABEND reporting product your installation has installed.
However, if you are getting the ASRA while doing a READQ TS with the INTO(varname) option, make sure your program owns the storage for varname and that it is large enough to hold the largest possible record on the queue.
Also, if you use the LENGTH option, make sure you have it set correctly. If you ask for 32K bytes from the TS queue into a 100-byte area, you will get an ASRA.
But all of the above is only one possible reason; you really need to determine what sort of ASRA you are getting.
If the temporary storage queue has already been deleted, you shouldn't get an ASRA but a QIDERR condition, which, if not handled, gives you a different abend: AEYH.
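By way of illustration, here is a minimal COBOL sketch (the queue and data names, MYTSQ, WS-REC and so on, are made up) of reading defensively: pass the receiving area's real size in LENGTH and check RESP rather than letting the condition abend the task.

    01  WS-REC            PIC X(9999).
    01  WS-LEN            PIC S9(4) COMP VALUE +9999.
    01  WS-RESP           PIC S9(8) COMP.

        EXEC CICS READQ TS
             QUEUE('MYTSQ')
             INTO(WS-REC)
             LENGTH(WS-LEN)
             ITEM(1)
             RESP(WS-RESP)
        END-EXEC
        EVALUATE WS-RESP
            WHEN DFHRESP(NORMAL)
                CONTINUE
            WHEN DFHRESP(QIDERR)
                PERFORM HANDLE-MISSING-QUEUE
                   *> queue was deleted or never written
            WHEN DFHRESP(LENGERR)
                PERFORM HANDLE-BAD-LENGTH
                   *> record longer than WS-LEN; CICS truncated it
            WHEN OTHER
                PERFORM HANDLE-OTHER-ERROR
        END-EVALUATE

With RESP coded, a deleted queue comes back as QIDERR instead of abending with AEYH, and an oversized record raises LENGERR instead of overwriting storage you don't own.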
ASRA is just the CICS code for an S0C* abend, which I am going to presume in this case is an S0C4, a protection exception. A protection exception happens when you try to write to (or sometimes read from) storage that you don't have permission to access.
I realize this is a bad question, but I don't know where else to turn.
Can someone point me to where I can find the list of "reports failure" codes for IBM? I've tried searching for it in the IBM documentation and in a general Google search, but this particular error is unique and I've never seen it before.
I'm trying to find out what code 262148 means.
Background:
I built a DataStage job that has:
ORACLE CONNECTOR -> TRANSFORMER -> HIERARCHICAL DATA
The intent is to pull data from an Oracle table and output the result of the select statement to a JSON file. I'm using the Hierarchical Data stage to build the JSON. When I test within the stage, there are no problems; I see the JSON output.
However, when I run the job, it squawks:
reports failure code 262148
then the job aborts. There are no warnings, no signs, no errors prior to this line.
Until I know what it is, I can't troubleshoot.
If someone can point me to where the list of failure codes is, I can proceed.
Thanks!
can someone point me to where I can find the list of reports failure codes for IBM?
Here you go:
https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_73/rzahb/rzahbsrclist.htm
While this page does not list your specific error code, it does categorize many other codes and explains how the code breakdown works. It is not specifically for DataStage, but in my experience IBM conventions are generally consistent across products. In that list, every code that starts with a 2 is a disk failure, so maybe run a disk checker. That's the best I've got as far as error codes.
Without knowledge of the inner workings of the product, there is not much more you can do beyond checking system health in general (especially disk, network, and permissions in this case). Personally, I prefer to go after internal knowledge whenever external knowledge proves insufficient. I would start with a network capture, as I'm sure there's a socket involved in the connection between the layers. Compare a capture taken while the select statement runs inside the Hierarchical Data stage with one taken while it runs from the job. There may be clues in there, like reset or refused connections.
In NetLogo BehaviorSpace, if one of the runs throws an error, how do I skip that run and ask NetLogo to proceed with the next one?
Is it even possible?
From the docs,
(If you do want spreadsheet output, note that if anything interrupts the experiment, such as a runtime error, running out of memory, or a crash or power outage, no spreadsheet results will be written. For long experiments, you may want to also enable table format as a precaution so that if something happens and you get no spreadsheet output you'll at least get partial table output.)
So I'll assume this isn't possible, and the best fix is to handle the error in your own code. In particular, you can wrap the failing code in the carefully command to trap the error yourself.
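For example, a minimal sketch, assuming your per-tick code lives in a procedure (go-step is a made-up name here) and that aborted? is a global you declare yourself and test in the experiment's stop condition:

    carefully [
      go-step            ;; hypothetical procedure that may throw a runtime error
    ] [
      print (word "error, ending this run: " error-message)
      set aborted? true  ;; hypothetical flag checked by the BehaviorSpace stop condition
    ]

This way the erroring run ends cleanly on its own terms and the experiment moves on to the next run instead of halting.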
Is Watchman capable of passing to the configured command the reason why it is sending a file to that command?
For example:
a file that is new to a folder would possibly carry a FILE_CREATE flag;
a file that is deleted would send the command a FILE_DELETE flag;
a file that is modified would send a FILE_MOD flag, etc.
Perhaps even a deleted folder (and therefore the files under it) would send a FOLDER_DELETE parameter naming the folder, as well as a FILE_DELETE for each file and a FOLDER_DELETE for each subfolder under it.
Is there such a thing?
No, it can't do that. The reasons why are pretty fundamental to its design.
The TL;DR is that it is a lot more complicated than you might think for a client to correctly process those individual events, and in almost all cases you don't really want them.
Most file watching systems are abstractions that simply translate system-specific notification information into some common form. They don't deal, either very well or at all, with the notification queue overflowing, and they don't provide their clients with a way to reliably respond to that situation.
In addition to this, the filesystem can be subject to many and varied changes in a very short amount of time, and from multiple concurrent threads or processes. This makes this area extremely prone to TOCTOU issues that are difficult to manage. For example, creating and writing to a file typically results in a series of notifications about the file and its containing directory. If the file is removed immediately after this sequence (perhaps it was an intermediate file in a build step), by the time you see the notifications about the file creation there is a good chance that it has already been deleted.
Watchman takes the input stream of notifications and feeds it into its internal model of the filesystem: an ordered list of observed files. Each time a notification is received watchman treats it as a signal that it should go and look at the file that was reported as changed and then move the entry for that file to the most recent end of the ordered list.
When you ask Watchman for information about the filesystem it is possible or even likely that there may be pending notifications still due from the kernel. To minimize TOCTOU and ensure that its state is current, watchman generates a synchronization cookie and waits for that notification to be visible before it responds to your query.
The combination of the two things above means that watchman result data has two important properties:
You are guaranteed to have observed all notifications that happened before your query
You receive the most recent information for any given file only once in your query results (the change results are coalesced together)
Let's talk about the overflow case. If your system is unable to keep up with the rate at which files are changing (e.g., you have a big project, are very quickly creating and deleting files, and the system is heavily loaded), the OS can't fit all of the pending notifications in the buffer resources allocated to the watches. When that happens, it blows those buffers and sends an overflow signal. That means the client of the watching API has missed some number of events and is no longer in synchronization with the state of the filesystem. If that client maintains state about the filesystem, that state is no longer valid.
Watchman addresses this situation by re-examining the watched tree and synthetically marking all of the files as being changed. This causes the next query from the client to see everything in the tree. We call this a fresh instance result set because it is the same view you'd get when you are querying for the first time. We set a flag in the result so that the client knows that this has happened and can take appropriate steps to repair its own state. You can configure this behavior through query parameters.
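For example, the documented empty_on_fresh_instance query parameter tells watchman to return an empty file list (rather than the whole tree) when a fresh instance result would otherwise be produced. A sketch, with a placeholder root and clock value:

    watchman -j <<-EOT
    ["query", "/path/to/root", {
      "since": "c:123456789:42",
      "fields": ["name", "exists", "mtime_ms"],
      "empty_on_fresh_instance": true
    }]
    EOT

Either way, the response carries an is_fresh_instance flag, so the client can tell that it needs to rebuild its own state rather than apply an incremental update.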
In these fresh instance result sets, we don't know whether any given file really changed or not (it's possible that it changed in such a way that we can't detect via lstat) and even if we can see that its metadata changed, we don't know the cause of that change.
There can be multiple events that contribute to why a given file appears in the results delivered by watchman. We don't record them individually because we can't track them with unbounded history; imagine a file that is incrementally written to once every second, all day long. Do we keep 86400 change entries for it per day on hand and deliver those to our clients? What if there are hundreds of thousands of files like this? We'd have to truncate that data, and at that point the loss in the data reduces how well you can reason about it.
At the end of all of this, it is very rare for a client to do much more than try to read a file or look at its metadata, and generally speaking, they want to do that only when the file has stopped changing. For this use case, watchman-wait, watchman-make and trigger all have the concept of a settle period that causes the change notifications to be delayed in delivery until after the filesystem has stopped changing.
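For example, a sketch using watchman-make (the patterns and the build target are placeholders); if I remember the flag correctly, --settle is the number of seconds the tree must stay quiet before the target is invoked:

    watchman-make -p '**/*.c' '**/*.h' --settle 2 -t build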
In our web app we have lots of queries running. Most of them read data, but occasionally high-priority update queries come in. We'd like to cancel the read queries, but when using KILL, I'd like the read query to return a certain dataset or execution result upon receiving the cancel.
My intention is to mimic the behavior of signals in C programs, where a signal handler is invoked upon receiving a kill signal.
Is there any method to define an asynchronous KILL signal handler for SPs?
This is not a fully tested answer. But it is a bit more than just a comment.
One option is to do a dirty read (WITH (NOLOCK)). This part is tested; I do this all the time. To build a large, scalable app you need to resort to this and manage it. A dirty read will not block an update, and it gets you what you are after.

A lot of people think a dirty read may return corrupt data. If you are updating smith to johnson, the select is not going to get smison. The select is going to get smith, and that value will be immediately stale. But how is that worse than taking a read lock? With a read lock, the read gets smith and blocks the update; only once the read locks are cleared is the row updated. I would contend that blocking an update is also stale data.
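A sketch of what that looks like (the table and column names are made up):

    -- Read without taking shared locks; this will not block a concurrent
    -- update, but the value returned may already be stale
    SELECT LastName
    FROM dbo.Customers WITH (NOLOCK)
    WHERE CustomerId = 42;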
If you are using a reader, I think you could pass the same cancellation token to each select and then just cancel that one token.
But the reader may not process the CancellationToken until it reads a row, so it may not cancel a long-running query that has not yet returned any rows.
DbDataReader.ReadAsync Method (CancellationToken)
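A rough sketch in C# of sharing one token across the reads (the connection, query, and names are placeholders, not your code):

    using System.Data.SqlClient;
    using System.Threading;
    using System.Threading.Tasks;

    class ReaderExample
    {
        // One source shared by all read queries; call ReadCts.Cancel()
        // when a high-priority update needs to run.
        static readonly CancellationTokenSource ReadCts = new CancellationTokenSource();

        static async Task ReadRowsAsync(SqlConnection conn)
        {
            using (var cmd = new SqlCommand("SELECT Id, Name FROM dbo.BigTable", conn))
            using (var reader = await cmd.ExecuteReaderAsync(ReadCts.Token))
            {
                // The token is observed between rows, so a query that has not
                // yet returned any rows may not cancel promptly.
                while (await reader.ReadAsync(ReadCts.Token))
                {
                    // process the row here
                }
            }
        }
    }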
Or, if you are not using a reader, look at
SqlCommand.Cancel
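Roughly, assuming cmd is the command executing on another thread:

    // Asks SQL Server to cancel the in-flight command; the thread running
    // ExecuteReader/ExecuteNonQuery then gets a SqlException indicating
    // the operation was cancelled by the user.
    cmd.Cancel();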
As far as getting a cancel to return alternate data: I doubt SQL Server is going to do that.
I'm using SQL Server Express 2008 and I'm doing a bulk insert of data. I'd like to have more verbose error messages, ideally printing the data that failed to be inserted. Is that possible?
It is possible, but it can require a lot of effort to do this--I recall working on a subsystem for a few days before I got it to do everything it needed to do. I believe this is one of the (few, but still too many) places where, upon hitting an error, SQL will return two (2) error messages back-to-back; the second message is vague and indistinct, and all the error handling functions can only access info pertaining to that second lame message, not the first one where the real info is. I don't have the code in front of me, but the logic was something like:
Use the "errorfile" option on BULK INSERT to generate an error file IF the bulk insert fails
TRY/CATCH the bulk insert call, and carefully check the returned error number
If the error is the appropriate type, open and read the contents of the file to determine what went wrong where, and build your error message around that
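From memory, the shape of it was something like this sketch (the paths and table name are placeholders):

    BEGIN TRY
        BULK INSERT dbo.TargetTable
        FROM 'C:\load\input.dat'
        WITH (ERRORFILE = 'C:\load\input.err');
    END TRY
    BEGIN CATCH
        -- ERROR_MESSAGE() here is usually the second, vaguer message; the
        -- useful detail lands in input.err (the rejected rows) and
        -- input.err.Error.Txt (row/column diagnostics), so read those files
        -- and build your verbose error message from them.
        PRINT ERROR_MESSAGE();
    END CATCH;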
Awkward as anything, but ultimately it worked out pretty well. So long as the drive+path+filename you were inserting from didn't exceed 128 characters (in SQL 2005, and I just bet they didn't fix that in 2008.) I do not count Bulk Insert as one of my favorite commands.