What is the format of the ContentTypeID for Approval Forms in the xoml.wfconfig.xml files of SharePoint Designer 2010 workflows? - sharepoint-designer

I can see that the ID has 74 hex digits. The first 24 digits are the Task Content Type ID, which forms the first part of the form's Content Type ID (the first 48 hex digits); this is followed by 00 and then 24 more hex digits.
What are these last 24 hex digits?

This identifies the copy of the content type associated with the list (it inherits from the parent content type, whose ID is everything before the last instance of 00).
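To make the structure concrete, here is a minimal Perl sketch that splits such an ID at that final 00 separator (the ID below is invented for illustration; real IDs carry actual hex/GUID digits):

use strict;
use warnings;

# Invented 74-digit ID: a 48-digit parent (form) content type ID, the
# "00" separator, then the 24-digit list-specific suffix.
my $id = ('AB' x 24) . '00' . ('CD' x 12);

# The greedy match pins the 24-digit suffix to the end of the string, so
# the split lands on the last "00" that leaves exactly 24 hex digits.
my ($parent, $suffix) = $id =~ /\A([0-9A-Fa-f]+)00([0-9A-Fa-f]{24})\z/
    or die "not a list-scoped content type ID\n";

print "parent (form) content type: $parent\n";            # first 48 digits
print "task content type:          ", substr($parent, 0, 24), "\n";
print "list-local suffix:          $suffix\n";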

Related

DSNACICS stored procedure - Way to pass COMP and COMP-3 fields in COMMAREA parameter

I have a question about passing COMP and COMP-3 field data in DFHCOMMAREA in the context of the DSNACICS stored procedure.
If the field is X(3), I pass 3 characters, padded with leading spaces when the data is shorter. However, if the field is, say, S9(4) COMP, how many characters should I pass in DFHCOMMAREA if I have to send a value of 2?
Generally, and not specifically to DSNACICS, a PIC S9(4) COMP field in COBOL is a halfword binary field, meaning that it occupies 2 bytes of storage with the value held in two's-complement binary. A PIC S9(4) COMP field can store a value range from -32,768 (8000 in hex) to +32,767 (7FFF in hex).
However, be aware of the TRUNC compiler option in use. If the program passing the data is compiled with TRUNC(BIN), you can use the full range of values above in the field. If, however, you have the TRUNC(OPT) compiler option specified and you move the value 32767 into the PIC S9(4) COMP field, you are likely to end up with the value 2767 actually placed in the variable rather than 32767, i.e. not the value you expected. (That one has caught me out once or twice.)
Here is a page in the IBM documentation that might be helpful: https://www.ibm.com/docs/en/cobol-zos/6.1?topic=data-examples-numeric-internal-representation
If you wanted to move the value of 2 into a PIC S9(4) COMP field, internally it would be 2 bytes represented (in hex) as 0002.
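To illustrate those bytes outside of COBOL, here is a small Perl sketch (z/OS halfwords are big-endian, so the high byte comes first):

use strict;
use warnings;

# A PIC S9(4) COMP field is a signed, big-endian 16-bit halfword.
my $halfword = pack('s>', 2);                     # the two bytes x'0002'
printf "%02X%02X\n", unpack('C2', $halfword);     # prints: 0002

# Negative values are two's complement:
printf "%04X\n", unpack('S>', pack('s>', -1));    # prints: FFFF

So for a value of 2 you pass exactly those 2 bytes, x'00' followed by x'02', in that position of DFHCOMMAREA.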

Floating Point Number to Null-Terminated ASCII String

I'm reviewing for an exam right now and one of the review questions gives an answer that I'm not understanding.
A main memory location of a MIPS processor based computer contains the following bit pattern:
0 01111110 11100000000000000000000
a. If this is to be interpreted as a NULL-terminated string of ASCII characters, what is the string?
The answer that's given is "?p" but I'm not sure how they got that.
Thanks!
Each ASCII character occupies 8 bits (one byte). So given your main memory location, we can break it up into a few bytes:
00111111
01110000
00000000
...
Null-terminated strings are terminated with none other than... a null byte! (A byte of all zeros.) So your string contains two bytes that are ASCII characters. Byte 1 has a value of 63 and byte 2 has a value of 112. If you have a look at an ASCII chart like this one, you'll see that 63 corresponds to '?' and 112 corresponds to 'p'.
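Here is the same regrouping as a small Perl sketch, assuming big-endian byte order (the traditional MIPS layout):

use strict;
use warnings;

my $word  = 0b0_01111110_11100000000000000000000;  # the 32-bit pattern, i.e. 0x3F700000
my $bytes = pack('N', $word);                      # lay the word out big-endian in memory
my $str   = unpack('Z*', $bytes);                  # read bytes up to the first null
print "$str\n";                                    # prints: ?p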

How have my file names been encoded?

After a long time, I came back to review the contents of my HDD and noticed a weird file name.
I'm not sure what tool or program changed it this way, but from the content of the file I could work out its original name.
Anyway, I've run into a kind of encoding corruption and I want to identify it. It's not complicated, mostly a matter for those familiar with Unicode and UTF-8. Below I map the characters so you can guess what has happened.
The following table maps the characters: the second column holds the original character (as a Unicode code point) and the first column holds the corrupted character(s) it was converted to.
I need to know what happened and how to convert it back: what I have is in the first column, and what I need to get is in the second column:
Corrupted (code points)    Original (code point)
0638 2020                  0646
0639 00AF                  06AF
0637 00A7                  0627
0637 00B1                  0631
0637 00B3                  0633
0637 06BE                  062A
0020                       0020
0638 067E                  0641
063A 0152                  06CC
For more detail, consider the first row: the original character is U+0646 (stored as the two bytes 46 06 in UTF-16LE). In the file name, this character has been converted into two wide characters, 0x0638 0x2020.
I found the solution myself.
In Notepad++:
Select "Encode in ANSI" from the Encoding menu.
Paste the corrupted text.
Select "Encode in UTF-8" from the Encoding menu.
That's it. The correct text will be displayed.
Given that this works, how can I do the same with Perl?
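The same round trip can be sketched in Perl with the core Encode module. One assumption here: the code points in the table match the Windows-1256 (Arabic) code page, so that is what "ANSI" is taken to mean below; substitute your own system code page if it differs:

use strict;
use warnings;
use Encode qw(encode decode);

my $corrupted = "\x{0638}\x{2020}";            # first row of the table above
my $bytes     = encode('cp1256', $corrupted);  # back to the raw bytes D9 86
my $original  = decode('UTF-8', $bytes);       # reinterpret those bytes as UTF-8
printf "U+%04X\n", ord($original);             # prints: U+0646, the expected character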

Interpretation of ambiguous dates by Microsoft Access textbox

I've been searching around without any luck for an MSDN or any other official specification that describes how 2-digit years are interpreted in a date-format textbox. That is, when data is manually entered into a textbox on a form with the format set to Short Date. (My current locale defines dates as yyyy/MM/dd.)
A few random observations (entered value --> resulting date):
29/12/31 --> 2029/12/31
30/1/1 --> 1930/01/01
So far it makes sense: the range for 2-digit years is 1930 to 2029. Then, as we go on:
1/2/32 --> 1932/01/02 (interpreted as M/d/yy)
15/2/28 --> 2015/02/28 (interpreted as yy/M/dd)
15/2/29 --> 2029/02/15 (interpreted as M/d/yy)
2/28/16 --> 2016/02/28 (interpreted as M/dd/yy)
2/29/15 --> 2029/02/15 (interpreted as M/yy/dd)
It twists otherwise-invalid dates around until they are valid in some format, but seems to ignore the system locale setting for dates. Only entries that are invalid in every format (like 0/0/1) seem to generate an error. Is this behavior documented somewhere?
(I only want to refer the end user to this documentation, I have no problem with the actual behavior)
The 29/30 split was settled this way with Access 2.0 as of 1999-12-17 in the Acc2Date.exe Readme File as part of the last Y2K update:
Introduction
The Acc2Date.exe file contains three updated files that modify the way
Microsoft Access 2.0 interprets two-digit years. By default, Access
2.0 interprets all dates that are entered by the user or imported from a text file to fall within the 1900s. After you apply the updated
files, Access 2.0 will treat two-digit dates that are imported from
text in the following manner:
00 to 29 - resolve to the years 2000 to 2029
30 to 99 - resolve to the years 1930 to 1999
Years that are entered into object property sheets, the query design
grid, or expressions in Access modules will be interpreted based on a
100-year sliding date window as defined in the Win.ini on the computer
that is running Access 2.0.
The Acc2Date.exe file contains the following files:
File name Version Description
---------------------------------------------------------------------
MSABC200.DLL 2.03 The Updated Access Basic file
MSAJT200.DLL 2.50.2825 The Updated Access Jet Engine Library file
MSAJU200.DLL 2.50.2819 The Updated Access Jet Utilities file
Readme.txt n/a This readme file
For more information about the specific issues solved by this update,
see the following articles in the Microsoft Knowledge Base:
Article ID: Q75455
Title : ACC2: Years between 00 and 29 Are Interpreted as 1900 to 1929
That article can be found here as KB75455 (delayed page load):
ACC2: Years Between 00 and 29 Are Interpreted as 1900 to 1929
As for 2/29/15: it is not accepted here, where the system default is dd-mm-yyyy, so there are limits to how much creativity Access/VBA puts into interpreting date expressions.
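For what it's worth, the fixed 00-29/30-99 pivot from the readme can be sketched in a few lines of Perl (this mimics the documented rule for imported text; it is not Access itself):

use strict;
use warnings;

sub expand_two_digit_year {
    my ($yy) = @_;
    return $yy <= 29 ? 2000 + $yy : 1900 + $yy;   # 00-29 => 2000s, 30-99 => 1900s
}

printf "%d %d\n", expand_two_digit_year(29), expand_two_digit_year(30);
# prints: 2029 1930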

Confused about BER (Basic Encoding Rules)

I'm trying to study and understand BER (Basic Encoding Rules).
I've been using the website http://asn1-playground.oss.com/ to experiment with different ASN.1 objects and encoding them using BER.
However, even the simplest encodings seem to confuse me.
Let's take a simple ASN.1 schema:
World-Schema DEFINITIONS AUTOMATIC TAGS ::=
BEGIN
    Human ::= SEQUENCE {
        name UTF8String
    }
END
So basically this is just a SEQUENCE with a single UTF8String type field called name.
An example of a value that matches this sequence would be something like:
{ "Bob" }
So, using http://asn1-playground.oss.com/, I produce the BER encoding of the following data:
some-guy Human ::=
{
    name "Bob"
}
I would expect this to produce one sequence object, followed by a single string object.
What I get is:
30 05 80 03 42 6F 62
Now, I understand some of this encoding. The first octet, 30, is the identifier, which tells us that a SEQUENCE type is the first object. The 30 is 00110000 in binary, which means we have a class of 0 (universal), a PC (primitive/constructed) bit of 1 (meaning constructed), and a tag number of 10000 in binary (16 in decimal), which means SEQUENCE.
So far so good. The next value is the LENGTH in bytes of the SEQUENCE, which is 05.
Okay, still so far so good.
But then... I'm totally confused by the next octet, 80. What does that mean? I would have expected a value of 00001100 (for tag number 12, meaning UTF8String).
The bytes following the 80 are pretty straightforward: the 03 means a length of 3, and 42 6F 62 is just the UTF8String value itself, "Bob".
The 80 is a context-specific tag 0. Please note that "AUTOMATIC TAGS" is used at the beginning of the module. This indicates that all SEQUENCE, SET and CHOICE types will have context-specific tags for their components, starting with [0] and incrementing by 1 for each subsequent component. This way, you don't have to worry about tag conflicts when creating your messages, especially when dealing with components which are OPTIONAL or have a DEFAULT value. If you change "AUTOMATIC" to "EXPLICIT" (which I would not recommend), you will see the [UNIVERSAL 12] that you were expecting in the encoding.
Please note that AUTOMATIC TAGS applies only to tags on components of SEQUENCE, SET, or CHOICE. It does not apply to the top-level type, which is why you saw the [UNIVERSAL 16] for the SEQUENCE rather than a context-specific tag there as well.
80 indicates context-specific class, primitive, tag number 0. It is there because you specified an AUTOMATIC tagging environment, which automatically assigned the tag [0] to the field name in the type Human.
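To see those fields fall out of the bits, here is a small Perl sketch that decodes both identifier octets from the encoding above:

use strict;
use warnings;

# Break a BER identifier octet into its three fields.
for my $id (0x30, 0x80) {
    my $class = ($id >> 6) & 0x03;   # 0=universal, 1=application, 2=context-specific, 3=private
    my $pc    = ($id >> 5) & 0x01;   # 0=primitive, 1=constructed
    my $tag   = $id & 0x1F;          # low 5 bits (tag numbers >= 31 use the long form)
    printf "%02X => class %d, %s, tag %d\n",
        $id, $class, $pc ? 'constructed' : 'primitive', $tag;
}
# prints:
# 30 => class 0, constructed, tag 16
# 80 => class 2, primitive, tag 0

The 30 line is the UNIVERSAL 16 SEQUENCE; the 80 line is the context-specific [0] that AUTOMATIC TAGS assigned to name.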