How to read data in the PSW (Program Status Word) on an IBM mainframe

The question is:
I know 0C7 is a data error, but how can I read the PSW abend code?

The PSW is not an abend code and does not contain one; it shows the processor state at the time of the abend.
The PSW in your example has 8 bytes, so it's in ESA/390 format (in 64-bit mode the PSW is 16 bytes), so I'll focus on that case.
Usually the only thing that matters when investigating a 0C7 abend is the last 31 bits, which contain the NSI - the next sequential instruction - pointing to the machine instruction after the one that caused the exception. In your case that would be address 60009260. You'd have to investigate the instruction before that address to see what data it is using, and then investigate why that data is not in the correct format.
On the other hand, the PSW shown starts with FF - which should never be the case (see below) - so the value shown is probably corrupted and the NSI value should also be treated with a certain amount of suspicion.
To answer the question as stated, here's the complete PSW layout (all offsets and lengths are in bits):
offset 0, length 1: always 0 (1 in your example!)
offset 1, length 1: Program Event Recording flag
offset 2, length 3: always 000
offset 5, length 1: dynamic address translation flag
offset 6, length 1: IO-interruptions flag
offset 7, length 1: external-interruptions flag
offset 8, length 4: PSW-Key (compared with storage-key for storage protection)
offset 12, length 1: always 1
offset 13, length 1: machine-check-interruptions flag
offset 14, length 1: wait-state flag
offset 15, length 1: problem-state flag
offset 16, length 2: address-space control
offset 18, length 2: condition code (set e.g. by compare-instructions)
offset 20, length 4: program mask (set e.g. by arithmetic instructions on overflow)
offset 24, length 1: IBM-reserved
offset 25, length 6: always 000000
offset 31, length 1: extended-addressing-mode flag (1 means 64-bit-addressing)
offset 32, length 1: basic addressing-mode flag (1 means 31-bit addressing as opposed to 24-bit)
offset 33, length 31: next sequential instruction
Further details on the meaning of the various flags as well as the layout of the 64-bit PSW can be found in the "Control"-chapter of the "z/Architecture Principles of Operation" manual.
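To make the layout concrete, here's a minimal Python sketch that extracts a few of these fields from an 8-byte PSW. The bit offsets follow the table above; the example value is hypothetical - only the leading FF and the NSI address from the question are real:

def decode_psw(psw: bytes) -> dict:
    """Decode selected fields of an 8-byte ESA/390 PSW (bit 0 = MSB)."""
    assert len(psw) == 8, "ESA/390 PSW is 8 bytes"
    value = int.from_bytes(psw, "big")

    def bits(offset: int, length: int) -> int:
        # Extract `length` bits starting at bit `offset`, counting from the MSB.
        return (value >> (64 - offset - length)) & ((1 << length) - 1)

    return {
        "per": bits(1, 1),             # Program Event Recording flag
        "key": bits(8, 4),             # PSW key
        "wait": bits(14, 1),           # wait-state flag
        "problem_state": bits(15, 1),  # problem vs. supervisor state
        "condition_code": bits(18, 2),
        "ea": bits(31, 1),             # extended addressing (64-bit)
        "ba": bits(32, 1),             # basic addressing (31- vs. 24-bit)
        "nsi": bits(33, 31),           # next sequential instruction
    }

# Hypothetical PSW ending in the NSI from the question:
fields = decode_psw(bytes.fromhex("FF04000060009260"))
print(hex(fields["nsi"]))  # 0x60009260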


PDF417 decode and generate the same barcode using Swift

I have the following example of a PDF417 barcode:
which can be decoded with an online tool like zxing
as the following result: 5wwwwwxwww0app5p3pewi0edpeapifxe0ixiwwdfxxi0xf5e�¼ô���������¬‚C`Ìe%�æ‹�ÀsõbÿG)=‡x‚�qÀ1ß–[FzùŽûVû�É�üæ±RNI�Y[.H»Eàó¼åñüì²�tØ¿ªWp…Ã�{�Õ*
or online-qrcode-generator
as 5wwwwwxwww0app5p3pewi0edpeapifxe0ixiwwdfxxi0xf5e~|~~~~~~~~~~d~C`~e%~~~~;To~B~{~dj9v~~Z[Xm~~"HP3~~LH~~~O~"S~~,~~~~~~~k1~~~u~Iw}SQ~fqX4~mbc_ (I don't know which encoding is used to encode this)
The first part of the encoded key contained in the barcode is always known, and it is 5wwwwwxwww0app5p3pewi0edpeapifxe0ixiwwdfxxi0xf5e
The second part can be decoded from a Base64 string and always contains 88 bytes. In my case it is:
Frz0DAAAAAAAAAAArIJDYMxlJQDmiwHAc/Vi/0cpPYd4ghlxwDHflltGevmO+1b7GckT/OZ/sVJOSRpZWy5Iu0Xg87zl8fzssg502L+qV3CFwxZ/ewjVKg==
I'm using Swift on an iOS device to generate this PDF417 barcode by decoding the provided Base64 string like this:
let base64Str = "Frz0DAAAAAAAAAAArIJDYMxlJQDmiwHAc/Vi/0cpPYd4ghlxwDHflltGevmO+1b7GckT/OZ/sVJOSRpZWy5Iu0Xg87zl8fzssg502L+qV3CFwxZ/ewjVKg=="
let knownKey = "5wwwwwxwww0app5p3pewi0edpeapifxe0ixiwwdfxxi0xf5e"
let decodedData = Data(base64Encoded: base64Str.replacingOccurrences(of: "-", with: "+")
    .replacingOccurrences(of: "_", with: "/"))
var codeData = knownKey.data(using: String.Encoding.ascii)
codeData?.append(decodedData!)
let image = generatePDF417Barcode(from: codeData!)
let imageView = UIImageView(image: image!)
// The function to generate a PDF417 UIImage from the parsed Data
func generatePDF417Barcode(from codeData: Data) -> UIImage? {
    if let filter = CIFilter(name: "CIPDF417BarcodeGenerator") {
        filter.setValue(codeData, forKey: "inputMessage")
        let transform = CGAffineTransform(scaleX: 3, y: 3)
        if let output = filter.outputImage?.transformed(by: transform) {
            return UIImage(ciImage: output)
        }
    }
    return nil
}
But the barcodes I generate are always wrong; the difference can be seen visually.
Please help me correct the code to get the same result as the first barcode image.
I also have another example of a barcode:
The first part of the key is the same, but its second part is known as an Int8 byte array, and I also don't know how to generate the PDF417 barcode from it (with the key prepended) correctly.
Here's how I try:
let knownKey = "5wwwwwxwww0app5p3pewi0edpeapifxe0ixiwwdfxxi0xf5e"
let secretArray: [Int8] = [22, 124, 24, 12, 0, 0, 0, 0, 0, 0, 0, 0, 100, 127, 67, 96, -52, 101, 37, 0, -85, -123, 1, -64, 111, -28, 66, -27, 123, -25, 100, 106, 57, 118, -4, 16, 90, 91, 88, 109, -105, 126, 34, 72, 80, 51, -116, 28, 76, 72, -37, -24, -93, 79, -115, 34, 83, 18, -61, 44, -12, -13, -8, -59, -107, -9, -128, 107, 49, -50, 126, 13, -59, 50, -24, -43, 127, 81, -85, 102, 113, 88, 52, -60, 109, 98, 99, 95]
let secretUInt8 = secretArray.map { UInt8(bitPattern: $0) }
let secretData = Data(secretUInt8)
let keyArray: [UInt8] = Array(knownKey.utf8)
var keyData = Data(keyArray)
keyData.append(secretData)
let image = generatePDF417Barcode(from: keyData)
let imageView = UIImageView(image: image!)
There are a lot of things going on here. Gereon is correct that there are a lot of parameters. Choosing different parameters can lead to very different bar codes that decode identically. Your current barcode is "correct" (though a bit messy due to an Apple bug). It's just different.
I'll start with the short answer of how to make your data match the barcode you have. Then I'll walk through what you should probably actually do, and finally I'll get to the details of why.
First, here's the code you're looking for (but probably not the code you want, unless you have to match this barcode):
filter.setValue(codeData, forKey: "inputMessage")
filter.setValue(3, forKey: "inputCompactionMode") // This is good (and the big difference)
filter.setValue(5, forKey: "inputDataColumns") // This is fine, but probably unneeded
filter.setValue(0, forKey: "inputCorrectionLevel") // This is bad
PDF 417 defines several "compaction modes" to let it pack a truly impressive amount of information into a very small space while still offering excellent error detection and correction, and handling a lot of real-world scanning concerns. The default compaction mode only supports Latin text and basic punctuation. (It compacts even more if you only use uppercase Latin letters and space.) The first part of your string can be stored with text compaction, but the rest can't, so it has to switch to byte compaction.
Core Image actually does this switch shockingly badly by default (I opened FB9032718 to track). Rather than encoding in text and then switching to bytes, or just doing it all in bytes, it switches to bytes over and over again unnecessarily.
There's no way for you to configure multiple compaction methods, but you can just set it to byte, which is what value 3 is. And that's also how your source is doing it.
The second difference is the number of data columns, which drive how wide the output is. Your source is using 5, but Core Image is choosing 6 based on its default rules (which aren't fully documented).
Finally, your source has set the error correction level to 0, which is not recommended. For a message of this size, the minimum recommended error correction level is 3, which is what Core Image chooses by default.
If you just want a good barcode, and don't have to match this input, my recommendation would be to set inputCompactionMode to 3, and leave the rest as defaults. If you want a different aspect ratio, I'd use inputPreferredAspectRatio rather than modifying the number of data columns directly.
You may want to stop reading now. This was a very enjoyable puzzle to spend the morning on, so I'm going to dump a lot of details here.
If you want a deep dive into how this format works, I don't know anything currently available other than the ISO 15438 Spec, which will cost you around US$200. But there used to be some pages at GeoCities that explained a lot of this, and they're still available through the Wayback Machine.
There also aren't a lot of tools for decoding this stuff on the command line, but pdf417decode does a reasonable job. I'll use output from it to explain how I knew all the values.
The last tool you need is a way to turn jpeg output into black-and-white pbm files so that pdf417decode can read them. For that, I use the following (after installing netpbm):
cat /tmp/barcode.jpeg | jpegtopnm | ppmtopgm | pamthreshold | pamtopnm > new.pbm && ./pdf417decode -c -e new.pbm
With that, let's decode the first three rows of your existing barcode (with my commentary to the side). Everywhere you see "function output," that means this value is the output of some function that takes the other thing as the input:
0 7f54 0x02030000 (0) // Left marker
0 6a38 0x00000007 (7) // Number of rows function output
0 218c 0x00000076 (118) // Total number of non-error correcting codewords
0 0211 0x00000385 (901) // Latch to Byte Compaction mode
0 68cf 0x00000059 (89) // Data
0 18ec 0x0000021c (540)
0 02e7 0x00000330 (816)
0 753c 0x00000004 (4) // Number of columns function output
0 7e8a 0x00030001 (1) // Right marker
1 7f54 0x02030000 (0) // Left marker
1 7520 0x00010002 (2) // Security Level function output
1 704a 0x00010334 (820) // Data
1 31f2 0x000101a7 (423)
1 507b 0x000100c9 (201)
1 5e5f 0x00010319 (793)
1 6cf3 0x00010176 (374)
1 7d47 0x00010007 (7) // Number of rows function output
1 7e8a 0x00030001 (1) // Right marker
2 7f54 0x02030000 (0) // Left marker
2 6a7e 0x00020004 (4) // Number of columns function output
2 0fb2 0x0002037a (890) // Data
2 6dfa 0x000200d9 (217)
2 5b3e 0x000200bc (188)
2 3bbc 0x00020180 (384)
2 5e0b 0x00020268 (616)
2 29e0 0x00020002 (2) // Security Level function output
2 7e8a 0x00030001 (1) // Right marker
The next 3 lines will continue this pattern of function outputs. Note that the same information is encoded on the left and right, but in a different order. The system has a lot of redundancy, and can detect that it's seeing a mirror image of the barcode.
We don't care about the number of rows for this purpose, but given a current row of n and a total number of rows of N, the function is:
30 * (n/3) + ((N-1)/3)
Where / always means "integer, truncating division." Given there are 24 rows, on row 0, this is 0 + (24-1)/3 = 7.
The security level function's output is 2. Given a security level of e, the function is:
30 * (n/3) + 3*e + (N-1) % 3
=> 0 + 3*e + (23%3) = 2
=> 3*e + 2 = 2
=> 3*e = 0
=> e = 0
Finally, the number of columns can just be counted off in the output. For completeness, given a number of columns c, the function is:
30 * (n/3) + (c - 1)
=> 0 + c - 1 = 4
=> c = 5
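These three indicator functions are easy to check mechanically. A small Python sketch (the formulas are exactly the ones above) reproduces the values 7, 2, and 4 seen in the decoded rows:

N = 24  # total number of rows
e = 0   # error correction ("security") level
c = 5   # number of data columns

def rows_out(n):     return 30 * (n // 3) + (N - 1) // 3
def security_out(n): return 30 * (n // 3) + 3 * e + (N - 1) % 3
def columns_out(n):  return 30 * (n // 3) + (c - 1)

print(rows_out(0), security_out(1), columns_out(2))  # 7 2 4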
If you look at the Data lines, you'll notice that they don't match your input data at all. That's because they have a complex encoding that I won't detail here. But for Byte compaction, you can think of it as similar to Base64 encoding, but instead of 64, it's Base900. Where Base64 encodes 3 bytes of data into 4 characters, Base900 encodes 6 bytes of data into 5 codewords.
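For full 6-byte groups, the arithmetic is just a base conversion: interpret the 6 bytes as a big-endian integer and emit 5 base-900 digits, most significant first. (This Python sketch covers the happy path only; the spec handles a trailing group of fewer than 6 bytes differently.) Conveniently, feeding it the first 6 bytes of the known key reproduces the first Data codewords in the dump above:

def byte_compact_group(six_bytes: bytes) -> list[int]:
    # One full 6-byte group -> 5 base-900 codewords, MSB first.
    assert len(six_bytes) == 6
    value = int.from_bytes(six_bytes, "big")
    codewords = []
    for _ in range(5):
        value, digit = divmod(value, 900)
        codewords.append(digit)
    return codewords[::-1]

print(byte_compact_group(b"5wwwww"))  # [89, 540, 816, 820, 423]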
In the end, all these codewords get converted to symbols (actual lines and spaces). Which symbol is used depends on the line. Lines divisible by 3 use one symbol set, the lines after use a second, and the lines after that use a third. So the same codewords will look completely different on line 7 than on line 8.
Taken together, all these things make it very difficult to look at a barcode and decide how "different" it is from another barcode in terms of content. You just have to decode them and see what's going on.
CIPDF417BarcodeGenerator has a few more input parameters besides inputMessage that can have an influence on how the generated barcode looks - see the documentation. Visual inspection/comparison of two codes only makes sense when you know that all these parameters, most importantly inputCorrectionLevel were equal for both generators.
So, instead of a visual comparison, simply try decoding the barcodes using one of the many scanner apps out there, and compare the decoded bytes.
For your second example, try this:
// ...
var keyData = knownKey.data(using: .isoLatin1)!
keyData.append(secretData)
let image = generatePDF417Barcode(from: keyData)

Heart Rate Value in BLE

I am having a hard time getting a valid value out of the HR characteristic. I am clearly not handling the values properly in Dart.
Example Data:
List<int> value = [22, 56, 55, 4, 7, 3];
Flags Field:
I convert the first item in the main byte array to binary to get the flags:
22 = 10110 (binary)
This leads me to believe that it is U16 (bit[0] == 1).
HR Value:
Because it is 16 bit, I am trying to get the bytes at indexes 1 and 2. I then buffer them into a ByteData. From there I convert them to a Uint16 with the endianness set to little. This gives me a value of 14136. Clearly I am missing something fundamental about how this is supposed to work.
Any help in clearing up what I am not understanding about how to process the 16 bit BLE values would be much appreciated.
Thank you.
/*
Constructor - constructs the heart rate value from a BLE message
*/
HeartRate(List<int> values) {
  var flags = values[0];
  var s = flags.toRadixString(2);
  List<String> flagsArray = s.split("");
  int offset = 0;
  // Determine whether it is U16 or not
  if (flagsArray[0] == "0") {
    // Since it is Uint8 I will only get the first value
    var hr = values[1];
    print(hr);
  } else {
    // Since UTF 16 is two bytes I need to combine them
    // Create a buffer with the first two bytes after the flags
    var buffer = new Uint8List.fromList(values.sublist(1, 3)).buffer;
    var hrBuffer = new ByteData.view(buffer);
    var hr = hrBuffer.getUint16(0, Endian.little);
    print(hr);
  }
}
Your updated data looks much better. Here's how to decode it, and the process you'd use to figure this out yourself from scratch.
Determine the format
The Bluetooth site has been reorganized recently (~2020), and in particular they got rid of some of the document viewers, which makes things much harder to find and read IMO. All the documentation is in the Heart Rate Service (HRS) document, linked from the main GATT page, but for just parsing the format, the best source I know of is the XML for org.bluetooth.characteristic.heart_rate_measurement. (Since the reorganization, I don't know how you can find this page without searching for it. It doesn't seem to be linked anymore.)
Byte 0 - Flags: 22 (0001 0110)
Bits are numbered from LSB (0) to MSB (7).
Bit 0 - Heart Rate Value Format: 0 => UINT8 beats per minute
Bit 1-2 - Sensor Contact Status: 11 => Supported and detected
Bit 3 - Energy Expended Status: 0 => Not present
Bit 4 - RR-Interval: 1 => One or more values are present
The meaning of RR-intervals is explained in the HRS document, linked above. It sounds like you just want the heart rate value, so I won't go into them here.
Byte 1 - UINT8 BPM: 56
Since Bit 0 of flags was 0, this is the beats per minute. 56.
Bytes 2-5 - UINT16 RR Intervals: 55, 4, 7, 3
You probably don't care about these, but there are two UINT16 values here (there can be an arbitrary number of RR-Interval values). BLE is always little-endian, so [55, 4] is 1,079 (55 + 4<<8), and [7, 3] is 775 (7 + 3<<8).
I believe the docs are a little confusing on this one. The XML suggests that these values are in seconds, but the comments say "Resolution of 1/1024 second." The normal way to express this would be <BinaryExponent>-10</BinaryExponent>, and I'm certain that's what they meant. So these would be:
RR0: 1.05s (1079/1024)
RR1: 0.76s (775/1024)
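Putting the whole breakdown together: note that bit 0 is the least significant bit of the flags byte, whereas toRadixString(2).split("")[0] in the question's code inspects the most significant digit of the binary string, which is why the wrong branch was taken. Here's a minimal Python sketch (an illustration of the same bit tests, not a drop-in replacement for the Dart code) that parses the example packet:

def parse_heart_rate(data: list[int]) -> dict:
    flags = data[0]
    idx = 1
    if flags & 0x01:   # bit 0 set: UINT16 BPM, little-endian
        bpm = data[idx] | (data[idx + 1] << 8)
        idx += 2
    else:              # bit 0 clear: UINT8 BPM
        bpm = data[idx]
        idx += 1
    if flags & 0x08:   # bit 3: Energy Expended present (UINT16), skip it
        idx += 2
    rr = []
    if flags & 0x10:   # bit 4: RR-Intervals present, 1/1024 s resolution
        while idx + 1 < len(data):
            rr.append((data[idx] | (data[idx + 1] << 8)) / 1024)
            idx += 2
    return {"bpm": bpm, "rr_seconds": rr}

print(parse_heart_rate([22, 56, 55, 4, 7, 3]))
# {'bpm': 56, 'rr_seconds': [1.0537109375, 0.7568359375]}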

TSQL int/int overflow

I have a column called Odo that contains the number of meters in a trip. I would normally divide it by 1000 to display the number of km.
The line of code in question:
convert(varchar(10), startPos.Distance / 1000)
causes the following error
Msg 8115, Level 16, State 8, Procedure sp_report_select_trip_start_and_stop, Line 7 [Batch Start Line 2]
Arithmetic overflow error converting numeric to data type numeric.
Msg 232, Level 16, State 2, Procedure sp_report_select_trip_start_and_stop, Line 9 [Batch Start Line 2]
Arithmetic overflow error for type varchar, value = 931.785156.
That number is clearly longer than my varchar. How do I divide, truncate to 1 decimal place and then convert?
Edit: SQL Server 2008 R2, so format() is not available
You can use format:
format(startPos.Distance/1000, '#,###,###.#')
Or,
convert(varchar(10),cast(startPos.Distance/1000 as decimal(9,1)))
You may wish to introduce round() into these as well, e.g.:
format(round(startPos.Distance/1000,1), '#,###,###.#')
or
convert(varchar(10),cast(round(startPos.Distance/1000,1) as decimal(9,1)))

Convert 24-bit ADC serial read data from 3-byte format to signed integer (int32) in Matlab

I am receiving EEG data from a 24-bit ADC over serial. The ADC data is transmitted in 3 bytes, from MSB to LSB. The full packet is 21 bytes:
The first byte is the start byte - 0xFF (255 in decimal).
Then the packet number byte.
Then the next 3 bytes are the 24-bit ADC value, broken into MSB, LSB2, LSB1.
I can parse the data fine, but reconstructing a two's complement signed int32 number is causing issues. The values I am getting out certainly don't reflect what the ADC should be giving.
Below are the lines to read and parse the 504 samples (which give me 24 ADC values: 504 samples / 21 bytes = 24 values). I have tried uint8 instead of uchar with similar results (when I try int8 I get an 'invalid specified precision' error).
comEEGSMT = serial(com,'BaudRate',3000000);
fopen(comEEGSMT);
rawData(1:504) = fread(comEEGSMT, 504, 'uchar');
fclose(comEEGSMT);
startPackets = find(rawData == 255);
bytes = rawData([startPackets+2 startPackets+3 startPackets+4]);
I have tried the following method to reconstruct the value:
ADC_value = bytes(:,1)*256^2 + bytes(:,2)*256 + bytes(:,3);
and the following line is the formula to convert the above number to volts:
ADC_value_volts = ADC_value*(5/3)*(1/(2^32));
The values are in the range of 4000 - 8000 microvolts with large jumps in value. The values SHOULD be in the range of 200 - 600 microvolts with small changes.
I have found other questions relating to similar issues, but have had no success trying the proposed solutions such as in the link below:
https://uk.mathworks.com/matlabcentral/answers/137965-concatenate-3-bytes-array-of-real-time-serial-data-into-single-precision
Any help would be very much appreciated, as I've been stuck on this for quite a while.
Thanks, Mark
Starting with ADC_value as an int32 with value 0, then:
ADC_value |= MSB << 16;
ADC_value |= LSB2 << 8;
ADC_value |= LSB1;
Because the 24-bit value is in two's complement, the sign bit (bit 23) must then be extended into the top byte of the int32, e.g.:
if (ADC_value & 0x800000) ADC_value |= 0xFF000000;
And then, to find the corresponding value in volts, supposing your ADC has a reference voltage VREF, in volts (e.g. 5.0 V):
ADC_value_volts = (ADC_value * VREF)/2^24
since your converter is 24 bits, not 32.
Note the above expressions are in C-language equivalent, not Matlab.
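For reference, here is the same reconstruction as a minimal, self-contained Python sketch (VREF = 5.0 V is an assumed example value), including the sign extension needed for two's complement data:

def adc_to_volts(msb: int, lsb2: int, lsb1: int, vref: float = 5.0) -> float:
    """Combine three bytes of a 24-bit two's complement ADC sample."""
    raw = (msb << 16) | (lsb2 << 8) | lsb1
    if raw & 0x800000:        # sign bit set: extend into the negative range
        raw -= 1 << 24
    return raw * vref / (1 << 24)

print(adc_to_volts(0xFF, 0xFF, 0xFF))  # -1 LSB, just below 0 V
print(adc_to_volts(0x7F, 0xFF, 0xFF))  # full-scale positive, ~2.5 V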
EDIT:
The ADC data sheet tells us the PGA gain can be set to the following values:
1, 2, 4, 6, 8, 12, 24 - one value at a time for each channel.
The FSR (full scale range) of the measurement is (2*VREF)/Gain = 5/3 for Gain = 6
(eq. (5), page 23), so this must be accounted for in the expression computing the volts
values. (This can be verified if you have access to the hardware and can make some
measurements.)
The data coming from the ADC is already in two's complement binary form, 24 bits.
The odd thing is that the data sheet counts bits starting with 1, not 0, which is
why it talks about shifting by "17" instead of 16 - in code this is in fact 16
(revealed in fig. 47, page 42).
So the formula for computing ADC_value_volts should be:
ADC_value_volts = (ADC_value * FSR/(2^23))/3 (1 LSB = FSR/(2^23), pg. 37)
If there are other calculations/modifications relative to the original, these must be explained by the provider.
If the provider is not friendly, it may be worth changing provider...

T-SQL Decimal Multiplication

MSDN says this about the precision and scale of a decimal multiplication result:
The result precision and scale have an absolute maximum of 38. When a result precision is greater than 38, the corresponding scale is reduced to prevent the integral part of a result from being truncated.
So when we execute this:
DECLARE @a DECIMAL(18,9)
DECLARE @b DECIMAL(19,9)
set @a = 1.123456789
set @b = 1
SELECT @a * @b
the result is 1.123456789000000000 (9 trailing zeros) and we see that it is not truncated, because 18 + 19 + 1 = 38 (the upper limit).
When we raise the precision of @a to 27 we lose all the zeros and the result is just 1.123456789. Going further we proceed with truncating, and the result gets rounded. For example, raising the precision of @a to 28 results in 1.12345679 (8 digits).
The interesting thing is that at some point, with precision equal to 30, we have 1.123457, and this result won't change any further (it stops being truncated).
Precisions of 31, 32 and up to 38 give the same result. How can this be explained?
Decimal and numeric operation results have a minimum scale of 6 - this is specified in the table of the MSDN documentation for division, but the same behavior applies to multiplication as well in the case of scale truncation, as in your example.
This behavior is described in more detail on the sqlprogrammability blog.
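As a sketch of the rule (my reading of the documented formulas: multiplication yields precision p1 + p2 + 1 and scale s1 + s2; if the precision exceeds 38, the scale is reduced accordingly, but never below 6), this Python snippet reproduces the results observed above:

def multiply_result_type(p1, s1, p2, s2):
    """Result precision/scale for DECIMAL(p1,s1) * DECIMAL(p2,s2)."""
    precision = p1 + p2 + 1
    scale = s1 + s2
    if precision > 38:
        # Reduce scale to protect the integral part, but keep at least 6.
        scale = max(6, scale - (precision - 38))
        precision = 38
    return precision, scale

for p1 in (18, 27, 28, 30, 38):
    print(p1, multiply_result_type(p1, 9, 19, 9))
# 18 -> (38, 18)   full scale, the 9 trailing zeros survive
# 27 -> (38, 9)    exactly 1.123456789
# 28 -> (38, 8)    rounded to 1.12345679
# 30 -> (38, 6)    rounded to 1.123457
# 38 -> (38, 6)    still 6: the minimum scale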