How to resolve PROG753 error in CICS?

I executed my application in a CICS region. After a few maps, it throws a PROG753 error and only part of the map is displayed. Can anyone help me with how to resolve this error?
I think the "ITEMERR" condition is what is causing the PROG753 error. Please check and let me know.

You have junk in the map output area in your program. Check that:
You have cleared the output area before using it. Use MOVE LOW-VALUES to the area if it's COBOL, or the equivalent statement in your language, to initialize the output area to binary zeroes. Do it explicitly; do not hope that the area has been cleared for you.
The data you move to the output area does not contain unprintable characters. Check your source areas - do not rely on the declarations only, the data may be redefined. If in doubt, dump the data out (EXEC CICS ENTER can help, but start with CEDF if you can.)
Check that you haven't overwritten the attributes of your output fields.
Run your transaction under CEDF and stop before the problematic SEND MAP. Look at the FROM area. There will be junk in it (unprintable characters); find it using the hexadecimal display and work out how it got there.
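For the first point, a minimal COBOL sketch (the map, mapset, and field names here are hypothetical placeholders for your own symbolic map):

MOVE LOW-VALUES TO MAPAREAO        *> clear the whole symbolic map first
MOVE WS-CUST-NAME TO CUSTNMO       *> then fill in only the fields you need
EXEC CICS SEND MAP('MAPA')
     MAPSET('MAPSET1')
     FROM(MAPAREAO)
     ERASE
END-EXEC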

How to replace value in txt file with PowerShell from GitHub

I want to build a simple script that may be useful for others as well, but I have only very basic programming knowledge and can't do it myself without learning how to write PowerShell scripts from scratch.
What this script is supposed to do is: open an INI file (really just a txt), look for a variable with an assigned value, replace that value with the contents of a txt hosted on GitHub, save, and then run a program.
This is for qBittorrent's tracker list: that feature still hasn't been implemented, and the only other script I could find that does this is for Linux and Mac; there seems to be none for Windows.
The basic idea is this:
get-content "c:\users\[user]\appdata\roaming\qbittorrent\qbittorrent.ini"
# This is where pseudo code starts
get file from "[github-link.txt]"
save file to cache # keeping it is useless as it gets updated daily
find variable "Session\AdditionalTrackers=" in qbittorrent.ini
replace value of variable with content of cached file # this is what I struggle with most when looking for example code. Everything I could find specified the exact string that needed replacing, which in this case is quite long and may change with every update of the file.
overwrite original file
launch program qbittorrent.exe
end script
Conveniently, or more likely deliberately, most of the tracker lists on GitHub are already formatted so that they can be pasted directly into the file without having to worry about formatting. Example.
I can totally understand if nobody wants to do the work, but I would greatly appreciate it, and so would others who are looking for a stopgap for the missing feature.
If this already exists, go ahead and call me an idiot and, while you're at it, drop a link ;)
I just found a little tool called Power Automate and it pretty much does what I was looking for. It's not quite as elegant as a single-click script, but it does the job. Sadly I can't share the "flow" I built because, well, there is no option for it - thanks Microsoft. So, I'll try my best to write it out.
Not quite a "solution", but pretty close to it.
Here is the "flow":
get file from web // from GitHub, for example
read text from file // read the downloaded .txt file
read text from file // read qBittorrent.ini
crop text // crop between flags in qBittorrent.ini: use "Session\AdditionalTrackers=" as start and "Session\GlobalMaxRatio=" as end, save to cropVar2
crop text // crop before flag: use "Session\AdditionalTrackers=" as flag, save to cropVar1
crop text // crop after flag: use cropVar2 as flag, save to cropVar3
replace text // replace cropVar2 with the content of the downloaded file, save to cropVar2
write text to file // write cropVar1, cropVar2, cropVar3
end flow
Keep in mind that any change to qBittorrent.ini may change the order of the entries. This means you have to check that it's still correct after every update and after every change you make in the options. This is a massive kludge, after all...
You can build in fail-safes so that you won't break anything if the order changes.
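For reference, here is the same idea as a plain PowerShell sketch. The tracker-list URL, the qbittorrent.exe path, and the assumption that qBittorrent stores the list as a single value on the Session\AdditionalTrackers= line (entries separated by literal "\n") are all things to verify on your own setup, and qBittorrent should be closed while the file is rewritten:

$ini = "$env:APPDATA\qBittorrent\qBittorrent.ini"
$url = "https://raw.githubusercontent.com/ngosang/trackerslist/master/trackers_best.txt" # example list

# Download the tracker list and collapse it onto one line, joining the
# entries with a literal "\n" (assumption - check how the GUI stores one).
$raw = (Invoke-WebRequest -Uri $url -UseBasicParsing).Content
$trackers = (($raw -split "`r?`n") | Where-Object { $_ }) -join '\n'

# Replace the value of Session\AdditionalTrackers=, whatever it currently is,
# so the exact old string never needs to be known.
$lines = Get-Content $ini | ForEach-Object {
    if ($_ -match '^Session\\AdditionalTrackers=') {
        "Session\AdditionalTrackers=$trackers"
    } else {
        $_
    }
}
Set-Content -Path $ini -Value $lines

Start-Process "C:\Program Files\qBittorrent\qbittorrent.exe" # path is an assumption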

When trying to save pgAdmin result to a file (TXT) the result is modified

When I run my query in pgAdmin 4 v5's Query Tool, the data grid shows the values exactly as they are stored (this is also what I would like to get in my export file).
Unfortunately, this information is transformed when saving it to a .TXT file via the Query Tool's save button.
After opening the saved TXT document, a '.0' has been appended to the numbers, and my long values have been wrapped into scientific notation ('e+29') up to a certain row.
Can you please tell me how to avoid these transformations?
All,
I found out that the above problem was linked to the version of pgAdmin I was using, pgAdmin 4 v5 precisely.
After upgrading to pgAdmin 4 v6.4, the problem no longer appears.
I therefore consider this fixed, even if the cause of the problem remains unknown to me.
Thanks for your help.
Brieuc

Troubleshooting "no writeable tags set" error

I'm trying to (ultimately) modify a batch of files but getting stuck in the basics as I try to modify a single file before running a batch command.
If someone could help me troubleshoot the command I'm inputting, that would be fantastic. I'm sure it's something very simple.
Thanks a lot for any help you can provide!
Here's the image's abbreviated EXIF data:
-ExifToolVersion=10.10
-FileName=2018_11_13_1.jpeg
-Directory=.
-FileSize=2.8 MB
-FileModifyDate=2019:07:12 15:40:38-07:00
-FileAccessDate=2019:07:12 15:40:38-07:00
-FileInodeChangeDate=2019:07:23 10:38:02-07:00
-FilePermissions=rw-rw-r--
-FileType=JPEG
-FileTypeExtension=jpg
-MIMEType=image/jpeg
[...]
-ModifyDate=2018:11:13 12:00:53
[...]
-DateTimeOriginal=2018:11:13 12:00:53
-CreateDate=2018:11:13 12:00:53
My current input is: exiftool "-FileModifyDate<$filename00000" ./2018_11_13_1.jpeg
And the error message is:
Warning: No writable tags set from 2018_11_13_1.jpeg
0 image files updated
1 image files unchanged
And the exif data is, of course, unchanged.
I've confirmed that I can write a value to this tag, so there's definitely something going wrong in pulling from the filename.
( Continued from How to compensate for incomplete date/time info in filename )
The problem here is that you are trying to write from a tag named filename00000. If you check the example in the other post, you will see that there is a space after Filename. This sets it apart so that exiftool knows which is a tag name and which is other data.
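For illustration, the minimal fix is just adding that space (keeping the zeros from the question):
exiftool "-FileModifyDate<$filename 00000" ./2018_11_13_1.jpeg
That gets past the warning.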
There is possibly an additional problem here, though. Your filename has an extra number that is not the date. When exiftool tries to write the time stamp from the filename, it is going to end up with a value of "2018:11:13 10:00:00", which might become especially problematic if that last digit hits a value of 3 or more, resulting in a timestamp of "2018:11:13 30:00:00".
I would suggest using exiftool's Advanced Formatting Feature (a fancy way of saying that you can use perl code in the command) to strip the excess data. Something like
exiftool "-FileModifyDate<${filename;s/^(.*\d{4}_\d\d_\d\d).*/$1/} 000000" ./2018_11_13_1.jpeg
Though take note, if the filenames are in any other format, then it would require a different command.

Store row numbers which are causing "error"

I have to retrieve certain information from URLs. For this I have to enter text into fields of the URL; I am using a GET operation for this. I have to modify the text to replace spaces with "%20". Sometimes the text (which is taken from the database) is badly formed. I would like to know the row numbers, so I can manually fix the text for those rows in the database and run the job again. I have tried to use the logs and errors section, but with little luck. Does anybody have an idea of how to do this?
First shot: output bad URLs on the console
So far, I came up with a job design along these lines:
The trick is to catch the exceptions of the tHttpRequest component and print the necessary details on the console. For this example, I included the line number, the exception message and the URL that produced the exception.
I couldn't reproduce your "Illegal character" error when testing, so my console output showed a different one.
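The printing itself, inside the error-handling code, might look something like this (a sketch only - the row name, its columns, and the exception variable e are hypothetical, not the original code):
System.err.println("Line " + row1.LINE_NO + ": " + e.getMessage() + " - " + row1.URL);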
Second shot: Output to a file
If you really need to output the line numbers to a file, things get a little more complicated.
Instead of printing the info straight onto the console, we collect all line numbers into a context variable of type (Java) List inside the tJavaFlex. After the usual URL processing (which I have left out of the job design to keep the example small), we iterate over the Java List and save it into a tHashOutput, so that we can finally write it to a file.
We cannot write to the file directly in the tLoop section, since the Iterate flow would cause the tFileOutputDelimited to be opened several times. If "Append" were disabled, only the last bad URL line number would end up in the output file. If "Append" were enabled, you would get the full list of line numbers after the very first job run - but every subsequent run would append to it, making the list longer and longer. Workarounds would be to use a runtime-dependent file name (e.g. a timestamp), to delete the file at the beginning of the job run, or - the option I chose - to collect everything first and simply overwrite the file on every run. Feel free to choose the option which suits your use case best.
Details
The tHashOutput/tHashInput components are not visible by default, but must be enabled first to show up: https://www.talendforge.org/forum/viewtopic.php?pid=107249#p107249
The remaining settings were shown as screenshots in the original post: the context variable and its INIT value, the end code of the tJavaFlex "catch errors", the tLoop settings, the tFixedFlowInput "badURL", and the tHashOutput, which needs "Append" enabled.
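Since those screenshots are not available here, a rough sketch of the two Java pieces (the variable, row, and column names are assumptions, not the original code):

// INIT, e.g. in a tJava at the start of the job: create the collection
// behind the Object-typed context variable.
context.badLineNumbers = new java.util.ArrayList<Integer>();

// tJavaFlex "catch errors", end code: remember the failing row's number.
((java.util.List<Integer>) context.badLineNumbers).add(row1.LINE_NO);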

How can I make log4perl output easier to read?

When using log4perl, the debug log layout I use is:
log4perl.appender.D10.layout=PatternLayout
log4perl.appender.D10.layout.ConversionPattern=%d [pid=%P] %p %F{1} (%L) %M %m%n
log4perl.appender.D10.Filter = DebugAndUp
This produces very verbose debug logs, for example:
2008/11/26 11:57:28 [pid=25485] DEBUG SomeModule.pm (331) functions::SomeModule::Test Test XXX was successful
2008/11/26 11:57:29 [pid=25485] ERROR SomeOtherUnrelatedModule.pm (99999) functions::SomeModule::AnotherTest AnotherTest YYY has failed
This works great, and provides excellent debugging data.
However, each line of the debug log contains a different function name, PID length, etc. This makes each line lay out differently, which makes reading the debug logs much harder than it needs to be.
Is there a way in log4perl to pad the debugging metadata (everything up to the actual log message) with spaces/tabs, so that the actual message starts at the same column on every line?
You can pad the individual fields that make up your entries. For example, [pid=%5P] will always give you at least 5 characters for the PID.
The "Quantify Placeholders" section in the docs for Log::Log4perl::Layout gives more details.
There are a couple of ways to go with this, although you have to figure out which one works better for your situation:
Use a different appender if you are working live. Have that appender use a pattern that shows only the information you want. If you're working in a single process, for instance, your alternate appender might leave off the PID and the timestamp. You might only need the file name and line number.
Use %n to put newlines in the right place. That makes it multi-line output that is slightly harder to parse later, but you can choose another sequence for the input record separator (say, a literal "[EOL]") to make it easy to read entry-by-entry; there is a sketch of this after the list.
Log to a database instead of a file. For your reports, select just the columns you want to inspect.
Log everything, but write a filter to go through the log file ad-hoc to display just the parts that you want to see, such as only the debugging messages, the entries between certain times, only the entries involving a file, and so on.
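As a sketch of the %n approach from the second option, a pattern like this puts the metadata on one line, indents the message on the next, and ends each record with a literal "[EOL]" marker (adjust to taste):
log4perl.appender.D10.layout.ConversionPattern=%d [pid=%P] %p %F{1} (%L) %M%n    %m%n[EOL]%n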