Having trouble with an Ada declaration "at 0 range 18..20" - variable-assignment

I'm having trouble with this variable declaration:
Code_Length at 0 range 18..20;
I'm familiar with constraints, but the at 0 is what's giving me fits, and I can't find any working examples anywhere online.
If I had to guess (and I'm totally guessing), the at 0 initializes the value to 0, then the constraint is enforced on any subsequent assignment operation. But I can't find anything to verify that.

That's not a variable declaration; it's a representation clause for a field of a record. Whatever record declaration you excerpted that from has a field named "Code_Length", and this representation clause indicates that the storage for it (3 bits) will be offset 0 bytes from the start of the whole record's storage and occupy bits 18 through 20.
Providing more contextual code would help the explanation.
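For reference, a component clause like that can only appear inside a record representation clause. Here is a minimal sketch of what the surrounding code might look like; the type name Header and the Flags field are invented for illustration, only Code_Length comes from your snippet:
type Header is record
   Flags       : Natural range 0 .. 3;
   Code_Length : Natural range 0 .. 7;   -- needs 3 bits
end record;
for Header use record
   Flags       at 0 range 0 .. 1;        -- 2 bits at offset 0
   Code_Length at 0 range 18 .. 20;      -- 3 bits, 18 bits into the record's storage
end record;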

Why does Swift String.Index keep its index value 4 times bigger than the real position?

I was trying to implement the Boyer-Moore algorithm in a Swift playground and I used Swift String.Index a lot, and something that started to bother me is why indexes are kept 4 times bigger than what it seems they should be.
For example:
let why = "is s on 4th position not 1st".index(of: "s")
This code in a Swift playground will generate a _compoundOffset of 4, not 1. I'm sure there is a reason for doing this, but I couldn't find an explanation anywhere.
This is not a duplicate of any question that explains how to get the index of a character in Swift; I know that, and I used the index(of:) function just to illustrate the question. I want to know why the value for the 2nd character is 4 and not 1 when using String.Index.
So I guess the way it keeps indexes is private and I don't need to know the internal implementation; it's probably connected with UTF-16 and UTF-32 encoding.
First of all, don’t ever assume _compoundOffset to be anything else than an implementation detail. _compoundOffset is an internal property of String.Index that uses bit masking to store two values in this one number:
The encodedOffset, which is the index's offset in terms of UTF-16 code units. This one is public and can be relied on. In your case encodedOffset is 1 because that's the offset for that character, as measured in UTF-16 code units. Note that the encoding of the string in memory doesn't matter! encodedOffset is always UTF-16.
The transcodedOffset, which stores the index's offset inside the current UTF-16 code unit. This is also an internal property that you can't access. The value is usually 0 for most indices, unless you have an index into the string's UTF-8 view that refers to a code unit which doesn't fall on a UTF-16 boundary. In that case, the transcodedOffset will store the offset in bytes from the encodedOffset.
Now why is _compoundOffset == 4? Because it stores the transcodedOffset in the two least significant bits and the encodedOffset in the 62 most significant bits. So the bit pattern for encodedOffset == 1, transcodedOffset == 0 is 0b100, which is 4.
You can verify all this in the source code for String.Index.
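To see both pieces in practice, here is a small sketch using the Swift 4-era API from the question (encodedOffset has since been deprecated in favour of utf16Offset(in:)). The packing at the end only mimics the internal layout described above, which remains an implementation detail:
let s = "is s on 4th position not 1st"
if let idx = s.index(of: "s") {
    print(idx.encodedOffset)             // 1: offset in UTF-16 code units
}
// The layout described above: transcodedOffset in the 2 low bits,
// encodedOffset in the remaining high bits.
let encodedOffset = 1
let transcodedOffset = 0
let compound = (encodedOffset << 2) | transcodedOffset
print(compound)                          // 4, i.e. 0b100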

How to declare an any-length string element inside a user-defined type in QBasic?

I'm learning QBasic and found a user-defined type example in some documentation. In this example there's a string element inside a user-defined type, and that string doesn't have a defined length.
However, my compiler throws the error "Expected STRING * on..." for this example. Here is a test case that defines the string length:
TYPE Person
    name AS STRING * 4
END TYPE
DIM Matheus AS Person
Matheus.name = "Matheus"
PRINT Matheus.name
It logs "Math", expected "Matheus". Is there a way to allow any range for this string?
Note: I'm using QB64 compiler
No, there is not a way to use a variable-length string inside a user-defined TYPE, even with QB64. You might look into FreeBASIC if you want this feature, since it offers it.
TYPE creates a record type with the specified fields, and records have a fixed length. Look at the OPEN ... FOR RANDOM specification:
OPEN Filename$ FOR RANDOM AS #1 [LEN = recordlength%]
recordlength% is determined by getting the LEN of a TYPE variable or a FIELD statement.
If no record length is used in the OPEN statement, the default record size is 128 bytes except for the last record.
A record length cannot exceed 32767 or an error will occur!
TYPE was never intended to contain strings that are dynamically sized. This allows a developer to keep record sizes small. If you had an address book, for example, you wouldn't want people's names to be too large, else the address book wouldn't fit in memory.
QB64 didn't remove that restriction, probably because its original goal was to stay compatible with older QBASIC code.
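If a reasonable upper bound on the length is acceptable, a common workaround is to over-allocate the fixed-length field and trim the padding when reading it back; LEN of the TYPE variable then gives the record length for OPEN ... FOR RANDOM. A rough QB64-style sketch (the fullname field and people.dat file name are just illustrative):
TYPE Person
    fullname AS STRING * 20
END TYPE
DIM Matheus AS Person
Matheus.fullname = "Matheus"              ' padded with spaces to 20 characters
PRINT RTRIM$(Matheus.fullname)            ' prints "Matheus"
OPEN "people.dat" FOR RANDOM AS #1 LEN = LEN(Matheus)
PUT #1, 1, Matheus
CLOSE #1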

What should '{default:'1} do in SystemVerilog?

I have an array that I would like to initialize to all 1. To do this, I used the following code snippet:
logic [15:0] memory [8];
always_ff @(posedge clk or posedge reset) begin
    if (reset) begin
        memory <= '{default:'1};
    end
    else begin
        ...
    end
end
My simulator does what I think is the correct thing and sets the registers to 16'hFFFF on reset. However, my lint tool gives me a warning that bit 0 has an async set while bits 1 through 15 have async resets. This implies that the linter thinks that this code assigns 16'h0001 to the registers.
Since both tools come from the same vendor, I filed a bug report; they can't both be right.
The question is: Which behavior is correct according to the spec? There is no example that shows this exact situation. Section 5.7.1 mentions that:
An unsized single-bit value can be specified by preceding the single-bit value with an apostrophe ( ' ), but without the base specifier. All bits of the unsized value shall be set to the value of the specified bit. In a self-determined context, an unsized single-bit value shall have a width of 1 bit, and the value shall be treated as unsigned.
'0, '1, 'X, 'x, 'Z, 'z // sets all bits to specified value
If this is a "self-determined context" then the answer is 1 bit sign-extended to 16'h0001; but if it is not, then I guess the example which says it "sets all bits to the specified value" applies. I am not sure whether this is a self-determined context.
The simulator is correct: memory <= '{default:'1}; will assign every bit in memory to 1. The linting tool does have a bug. See IEEE Std 1800-2012 § 10.9.1 Array assignment patterns:
The default:value applies to elements or subarrays that are not matched by either index or type key. If the type of the element or subarray is a simple bit vector type, matches the self-determined type of the value, or is not an array or structure type, then the value is evaluated in the context of each assignment to an element or subarray by the default and shall be castable to the type of the element or subarray; otherwise, an error is generated. ...
The LRM goes beyond what is synthesizable when it comes to assignment patterns, and most tools are still playing catch-up with supporting all the SystemVerilog features. Experiment to make sure your tools (simulator, synthesizer, lint tool, logic-equivalency checker, etc.) all have the necessary support for the features you want.
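A quick way to check a given tool is a tiny self-checking run; this is only a sketch of the behavior § 10.9.1 describes, not vendor-specific code:
module tb;
    logic [15:0] memory [8];
    initial begin
        memory = '{default:'1};          // every element should become 16'hFFFF
        foreach (memory[i])
            assert (memory[i] == 16'hFFFF)
                else $error("memory[%0d] = %h", i, memory[i]);
    end
endmodule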

integer declaration issue

I am struggling with a bug in my program, and I finally tracked it down: an integer shows the value 1 at the time of declaration. I cleaned and built again, but it still shows the value 1.
Can anyone explain why this happens?
When you declare a local variable without specifying a value, you need to assign to it before reading from it is valid. The 1 that you see in your integer variable could be any garbage value; it is unspecified, and reading it is undefined behavior. Initialize it explicitly, for example:
int numberOfRecords = 0;
This is different from instance variables, which are initialized by default.
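The question doesn't say which language is involved, but assuming a C-family language, here is a minimal sketch of the local-variable point:
#include <stdio.h>
int main(void) {
    int uninitialized;               /* automatic local: holds an indeterminate value;      */
                                     /* reading it before assignment is undefined behavior  */
    int numberOfRecords = 0;         /* initialize explicitly before use                    */
    printf("%d\n", numberOfRecords); /* reliably prints 0 */
    return 0;
}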

What is the default hash code that Mathematica uses?

The online documentation says
Hash[expr]
gives an integer hash code for the expression expr.
Hash[expr,"type"]
gives an integer hash code of the specified type for expr.
It also gives "possible hash code types":
"Adler32" Adler 32-bit cyclic redundancy check
"CRC32" 32-bit cyclic redundancy check
"MD2" 128-bit MD2 code
"MD5" 128-bit MD5 code
"SHA" 160-bit SHA-1 code
"SHA256" 256-bit SHA code
"SHA384" 384-bit SHA code
"SHA512" 512-bit SHA code
Yet none of these correspond to the default returned by Hash[expr].
So my questions are:
What method does the default Hash use?
Are there any other hash codes built in?
The default hash algorithm is, more or less, a basic 32-bit hash function applied to the underlying expression representation, but the exact code is a proprietary component of the Mathematica kernel. It's subject to change (and has changed) between Mathematica versions, and it lacks a number of desirable cryptographic properties, so I personally recommend you use MD5 or one of the SHA variants for any serious application where security matters. The built-in hash is intended for typical data-structure use (e.g. in a hash table).
The named hash algorithms you list from the documentation are the only ones currently available. Are you looking for a different one in particular?
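As a quick sketch of the usage (the default Hash value for a given expression varies between versions, which is exactly why the named algorithms are the safer choice for anything persistent):
expr = {1, 2, 3};
Hash[expr]              (* default: fast, version-dependent integer *)
Hash[expr, "MD5"]       (* 128-bit MD5 code *)
Hash[expr, "SHA256"]    (* 256-bit SHA code *)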
I've been doing some reverse engineering on the 32- and 64-bit Windows versions of Mathematica 10.4, and this is what I found:
32 BIT
It uses a Fowler–Noll–Vo hash function (FNV-1, with the multiplication first) with 16777619 as the FNV prime and 84696351 as the offset basis. This function is applied to the Murmur3-32 hash of the address of the expression's data (MMA keeps a single instance of each piece of data and refers to it by pointer). The address is eventually resolved to the value: for simple machine integers the value is immediate, for others it is a bit trickier.
The function implementing Murmur3-32 actually takes an additional parameter (defaulted to 4, a special case that makes it behave as described on Wikipedia) which selects how much of the input expression struct to read. Since a normal expression is internally represented as an array of pointers, one can take the first, the second, etc. by repeatedly adding 4 (bytes = 32 bits) to the expression's base pointer, so passing 8 to the function will give the second pointer, 12 the third, and so on.
Since the internal structs (big integers, machine integers, machine reals, big reals and so on) have different member variables (e.g. a machine integer has only a pointer to an int, a complex has 2 pointers to numbers, etc.), for each expression struct there is a "wrapper" that combines its internal members into one single 32-bit hash (basically with FNV-1 rounds). The simplest expression to hash is an integer.
The murmur3_32() function has 1131470165 as seed, n=0 and other params as in Wikipedia.
So we have:
hash_of_number = 16777619 * (84696351 ^ murmur3_32( &number ))
with " ^ " meaning XOR.
I really didn't try it - pointers are encoded using the WINAPI EncodePointer(), so they can't be exploited at runtime. (It may be worth running it on Linux under Wine with a modified version of EncodePointer?)
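If you want to experiment with the combining step described above (bearing in mind the whole reconstruction is unverified reverse engineering, and assuming the arithmetic wraps modulo 2^32 as in standard FNV-1), the FNV-1 round is easy to write in Mathematica; murmur32 and addressOfNumber below are placeholders for the Murmur3-32 step and the data address, which you would have to supply yourself:
fnv1Combine32[m_Integer] := Mod[16777619 * BitXor[84696351, m], 2^32]
(* hypothetical: hashOfNumber = fnv1Combine32[murmur32[addressOfNumber]] *)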
64 BIT
It uses an FNV-1 64-bit hash function with 0xAF63BD4C8601B7DF as the offset basis and 0x100000001B3 as the FNV prime, along with a SIP64-24 hash (here's the reference code) with the first 64 bits of 0x0AE3F68FE7126BBF76F98EF7F39DE1521 as k0 and the last 64 bits as k1. The function is applied to the base pointer of the expression and resolved internally. As with the 32-bit murmur3, there is an additional parameter (defaulted to 8) to select how many pointers to take from the input expression struct. For each expression type there is a wrapper that condenses the struct members into a single hash by means of FNV-1 64-bit rounds.
For a machine integer, we have:
hash_number_64bit = 0x100000001B3 * (0xAF63BD4C8601B7DF ^ SIP64_24( &number ))
Again, I didn't really try it. Could anyone try?
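The analogous 64-bit combining step, again purely as a sketch of the description above and assuming the arithmetic wraps modulo 2^64 as in standard FNV-1 (sip64 and addressOfNumber are placeholders for the SipHash-style function and the data address):
fnv1Combine64[m_Integer] := Mod[16^^100000001b3 * BitXor[16^^af63bd4c8601b7df, m], 2^64]
(* hypothetical: hashNumber64 = fnv1Combine64[sip64[addressOfNumber]] *)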
Not for the faint-hearted
If you take a look at their notes on internal implementation, they say that "Each expression contains a special form of hash code that is used both in pattern matching and evaluation."
The hash code they're referring to is the one generated by these functions - at some point in the normal expression wrapper function there's an assignment that puts the computed hash inside the expression struct itself.
It would certainly be cool to understand HOW they can make use of these hashes for pattern matching purpose. So I had a try running through the bigInteger wrapper to see what happens - that's the simplest compound expression.
It begins checking something that returns 1 - dunno what.
So it executes
var1 = 16777619 * (67918732 ^ hashMachineInteger(1));
where hashMachineInteger() is what we described before, including the values.
Then it reads the length in bytes of the bigInt from the struct (bignum_length) and runs
result = 16777619 * (v10 ^ murmur3_32(v6, 4 * v4));
Note that murmur3_32() is called if 4 * bignum_length is greater than 8 (may be related to the max value of machine integers $MaxMachineNumber 2^32^32 and by converse to what a bigInt is supposed to be).
So, the final code is
if (bignum_length > 8) {
    result = 16777619 * (16777619 * (67918732 ^ (16777619 * (84696351 ^ murmur3_32( 1, 4 )))) ^ murmur3_32( &bignum, 4 * bignum_length ));
}
I've made some hypotheses about the properties of this construction. The presence of many XORs and the fact that 16777619 + 67918732 = 84696351 may make one think that some sort of cyclic structure is exploited to check patterns - i.e. subtracting the offset and dividing by the prime, or something like that. The software Cassandra uses the Murmur hash algorithm for token generation - see these images for what I mean by "cyclic structure". Maybe different primes are used for each expression - I still have to check.
Hope it helps
It seems that Hash calls the internal Data`HashCode function, divides the result by 2, takes the first 20 digits via N[...], then the IntegerPart, and adds one; that is:
IntegerPart[N[Data`HashCode[expr]/2, 20]] + 1
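Wrapped up as a function so it can be compared against the built-in Hash; keep in mind that Data`HashCode is undocumented and the relationship above is an empirical observation that may not hold in other versions:
hashViaHashCode[expr_] := IntegerPart[N[Data`HashCode[expr]/2, 20]] + 1
(* e.g. compare: hashViaHashCode[{1, 2, 3}] === Hash[{1, 2, 3}] *)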