I'm using Ada's Directories and Command_Line libraries to read the character count (size) of a text file named on the command line and assign it to an Integer.
Character_Count := Size(Argument(1));
The compiler is telling me that Integer and File_Size don't match up, even though File_Size is a subtype of Integer, I'm pretty sure. How can I convert it?
Ada.Directories.File_Size is not a subtype of Integer.
It's defined in the language reference manual as:
type File_Size is range 0 .. *implementation-defined*;
If you think about it, it wouldn't make much sense for it to be a subtype; Integer can be as narrow as 16 bits, which is hardly enough to hold the size of an arbitrary file.
You can use a conversion to convert to Integer:
Character_Count := Integer(Size(Argument(1)));
but it would probably be much better to declare Character_Count as a File_Size in the first place.
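For instance, here's a minimal sketch of that approach (the procedure name Show_Size and the output line are just illustrative):
with Ada.Directories;  use Ada.Directories;
with Ada.Command_Line; use Ada.Command_Line;
with Ada.Text_IO;      use Ada.Text_IO;

procedure Show_Size is
   --  Declared as File_Size, so the result of Size needs no conversion
   Character_Count : File_Size;
begin
   Character_Count := Size (Argument (1));
   Put_Line ("Size in characters:" & File_Size'Image (Character_Count));
end Show_Size;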
I try some examples from Rosettacode and encounter an issue with the provided Ackermann example: When running it "unmodified" (I replaced the utf-8 variable names by latin-1 ones), I get this (similar output, but now copyable):
$ perl6 t/ackermann.p6
65533
19729 digits starting with 20035299304068464649790723515602557504478254755697...
Cannot unbox 65536 bit wide bigint into native integer
in sub A at t/ackermann.p6 line 3
in sub A at t/ackermann.p6 line 11
in sub A at t/ackermann.p6 line 3
in block <unit> at t/ackermann.p6 line 17
Removing the proto declaration in line 3 (by commenting it out):
$ perl6 t/ackermann.p6
65533
19729 digits starting with 20035299304068464649790723515602557504478254755697...
Numeric overflow
in sub A at t/ackermann.p6 line 8
in sub A at t/ackermann.p6 line 11
in block <unit> at t/ackermann.p6 line 17
What went wrong? The program doesn't allocate much memory. Is the natural integer kind-of limited?
In the code from the Ackermann function task I replaced 𝑚 with m and 𝑛 with n so the errors are easier to copy from the terminal, and I tried commenting out the proto declaration. I also asked Liz ;)
use v6;
proto A(Int \m, Int \n) { (state @)[m][n] //= {*} }
multi A(0, Int \n) { n + 1 }
multi A(1, Int \n) { n + 2 }
multi A(2, Int \n) { 3 + 2 * n }
multi A(3, Int \n) { 5 + 8 * (2 ** n - 1) }
multi A(Int \m, 0 ) { A(m - 1, 1) }
multi A(Int \m, Int \n) { A(m - 1, A(m, n - 1)) }
# Testing:
say A(4,1);
say .chars, " digits starting with ", .substr(0,50), "..." given A(4,2);
A(4, 3).say;
Please read JJ's answer first. It's breezy and led to this answer which is effectively an elaboration of it.
TL;DR A(4,3) is a very big number, one that cannot be computed in this universe. But raku(do) will try. As it does you will blow past reasonable limits related to memory allocation and indexing if you use the caching version and limits related to numeric calculations if you don't.
I try some examples from Rosettacode and encounter an issue with the provided Ackermann example
Quoting the task description with some added emphasis:
Arbitrary precision is preferred (since the function grows so quickly)
raku's standard integer type Int is arbitrary precision. The raku solution uses them to compute the most advanced answer possible. It only fails when you make it try to do the impossible.
When running it "unmodified" (I replaced the utf-8 variable names by latin-1 ones)
Replacing the variable names is not a significant change.
But adding the A(4,3) line shifted the code from being computable in reality to not being computable in reality.
The example you modified has just one explanatory comment:
Here's a caching version of that ... to make A(4,2) possible
Note that the A(4,2) solution is nearly 20,000 digits long.
If you look at the other solutions on that page, most don't even try to reach A(4,2). There are comments like this one on the Phix version:
optimised. still no bignum library, so ack(4,2), which is power(2,65536)-3, which is apparently 19729 digits, and any above, are beyond (the CPU/FPU hardware) and this [code].
A solution for A(4,2) is the most advanced possible.
A(4,3) is not computable in practice
To quote Academic Kids: Ackermann function:
Even for small inputs (4,3, say) the values of the Ackermann function become so large that they cannot be feasibly computed, and in fact their decimal expansions cannot even be stored in the entire physical universe.
So computing A(4,3).say is impossible (in this universe).
It must inevitably lead to an overflow of even arbitrary precision integer arithmetic. It's just a matter of when and how.
Cannot unbox 65536 bit wide bigint into native integer
The first error message mentions this line of code:
proto A(Int \m, Int \n) { (state @)[m][n] //= {*} }
The state @ is an anonymous state array variable.
By default @ variables use the default concrete type for raku's abstract array type. This default array type provides a balance between implementation complexity and decent performance.
While computing A(4,2) the indexes (m and n) remain small enough that the computation completes without overflowing the default array's indexing limit.
This limit is a "native" integer (note: not a "natural" integer). A "native" integer is what raku calls the fixed width integers supported by the hardware it's running on, typically a long long which in turn is typically 64 bits.
A 64 bit wide index can handle indices up to 9,223,372,036,854,775,807.
But in trying to compute A(4,3) the algorithm generates a 65536 bit (8192 byte) wide integer index. Such an integer could be as big as 2**65536, a 19,729 decimal digit number. But the biggest index allowed is a 64 bit native integer. So unless you comment out the caching line that uses an array, then for A(4,3) the program ends up throwing the exception:
Cannot unbox 65536 bit wide bigint into native integer
Limits to allocations and indexing of the default array type
As already explained, there is no array that could be big enough to help fully compute A(4,3). In addition, a 64 bit integer is already a pretty big index (9,223,372,036,854,775,807).
That said, raku can accommodate other array implementations such as Array::Sparse so I'll discuss that briefly below because such possibilities might be of interest for other problems.
But before discussing bigger arrays, running the code below on tio.run shows the practical limits for the default array type on that platform:
my @array;
@array[2**29]++; # works
@array[2**30]++; # could not allocate 8589967360 bytes
@array[2**60]++; # Unable to allocate ... 1152921504606846977 elements
@array[2**63]++; # Cannot unbox 64 bit wide bigint into native integer
(Comment out error lines to see later/greater errors.)
The "could not allocate 8589967360 bytes" error is a MoarVM panic. It's a result of tio.run refusing a memory allocation request.
I think the "Unable to allocate ... elements" error is a raku level exception that's thrown as a result of exceeding some internal Rakudo implementation limit.
The last error message shows the indexing limit for the default array type even if vast amounts of memory were made available to programs.
What if someone wanted to do larger indexing?
It's possible to create/use other @ (does Positional) data types that support things like sparse arrays etc.
And, using this mechanism, it's possible that someone could write an array implementation that supports larger integer indexing than is supported by the default array type (presumably by layering logic on top of the underlying platform's instructions; perhaps the Array::Sparse I linked above does).
If such an alternative were called BigArray then the cache line could be replaced with:
my @array is BigArray;
proto A(Int \𝑚, Int \𝑛) { @array[𝑚][𝑛] //= {*} }
Again, this still wouldn't be enough to store interim results for fully computing A(4,3) but my point was to show use of custom array types.
Numeric overflow
When you comment out the caching you get:
Numeric overflow
Raku/Rakudo do arbitrary precision arithmetic. While this is sometimes called infinite precision it obviously isn't actually infinite but is instead, well, "arbitrary", which in this context also means "sane" for some definition of "sane".
This classically means running out of memory to store a number. But in Rakudo's case I think there's an attempt to keep things sane by switching from a truly vast Int to a Num (a floating point number) before completely running out of RAM. But then computing A(4,3) eventually overflows even a double float.
So while the caching version blows up sooner, the code is bound to blow up eventually anyway, either as an out of memory error or, as in this case, a numeric overflow error.
Array subscripts use native ints; that's why you get the error in line 3, when you use the big ints as array subscripts. You might have to define a new BigArray that uses Ints as array subscripts.
The second problem arises in the ** operator: the result is a Real, and when the low-level operation returns a Num, it throws an exception.
https://github.com/rakudo/rakudo/blob/master/src/core/Int.pm6#L391-L401
So creating a BigArray might not be helpful anyway. You'll have to create your own ** too, that always works with Int, but you seem to have hit the (not so infinite) limit of the infinite precision Ints.
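To illustrate, here's a rough sketch of what such an Int-only power routine could look like (int-pow is a hypothetical name; it sticks to * so every intermediate stays an arbitrary-precision Int, though it still can't make A(4,3) feasible):
use v6;

# Hypothetical Int-only exponentiation by squaring; only * is used, so no
# intermediate result ever falls back to a Num.
sub int-pow(Int $base is copy, Int $exp is copy --> Int) {
    die "negative exponent not supported" if $exp < 0;
    my Int $result = 1;
    while $exp > 0 {
        $result *= $base if $exp +& 1;  # this bit of the exponent is set
        $base *= $base;                 # square for the next bit
        $exp = $exp +> 1;               # shift the exponent right one bit
    }
    $result;
}

say int-pow(2, 200) == 2 ** 200;  # True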
My CRC calculation algorithm requires that I initialize a 32-bit unsigned integer variable to 0xFFFFFFFF (all 1's in binary so that I can use the variable as a bitmask). If I just write [uint32]$r=0xFFFFFFFF, I get the following error message:
Cannot convert value "-1" to type "System.UInt32". Error: "Value was either too large or too small for a UInt32."
The syntax I'm currently using is [uint32]$r="0xFFFFFFFF", but it seems a bit over the top with the string-to-integer conversion (I'm coming from the C/C++ programming world). I'm pretty new to PowerShell, so I was wondering whether PowerShell has a more efficient/straightforward way of initializing a variable like this.
How about [UInt32]::MaxValue?
The value of this constant is 4,294,967,295; that is, hexadecimal 0xFFFFFFFF.
PowerShell doesn't have any syntax for UInt32 literals, but you could cast an Int64 literal to [uint32] with the l type postfix:
[uint32]0xFFFFFFFFl
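Putting either suggestion to use for the CRC mask might look like this (just a sketch; the format string at the end only verifies the bit pattern):
[uint32]$r = [uint32]::MaxValue     # 4294967295
[uint32]$r = [uint32]0xFFFFFFFFl    # Int64 literal (l suffix) narrowed to UInt32
'{0:X8}' -f $r                      # FFFFFFFF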
I went through the PostgreSQL documentation on character types, but I still have some questions:
"char" is fixed length, i.e. 1.
If so, what is the length of "char[]"? I am not able to change it in pgAdmin; I thought it is used for a variable-length character array. So what is the actual default size?
What is the use of character compared to "char"[]? For now I assume character is used for a fixed-length character array whose size we define.
Why is character[] used?
What is the difference between character varying and character varying[]?
In case you come from a C background, a character string type in PostgreSQL is not an array of characters.
For each type foo in PostgreSQL, there is a corresponding array type foo[] that denotes an array of values of type foo. Use array types only if you do not plan to manipulate them much inside the database; if you do, it is usually better to normalize the array into a separate table.
Leaving aside array types, there are different character types:
"char" (the double quotes are required): a single character. Mostly used in catalog tables. Don't use this type unless you know what you are doing.
character(n) or char(n): fixed-length character string. No matter what you store there, it will always be padded with spaces on the right side. The behaviour, as dictated by the SQL standard, is sometimes surprising, so you rarely want this type.
text: arbitrary-length character string. This is the type you want for character strings unless you want the database to impose a length limit.
character varying(n) or varchar(n): this is the same as text with an additional length limit.
To round it off with an example:
CREATE TABLE strtest(
id serial PRIMARY KEY,
sc "char",
c character(10),
vc character varying(10),
vca character varying(10)[]
);
INSERT INTO strtest (sc, c, vc, vca)
VALUES (
'x',
'short',
'short',
ARRAY['short1', 'short2', 'short3']
);
SELECT sc, c, vc, vca[2] FROM strtest;
 sc |     c      |  vc   |  vca
----+------------+-------+--------
 x  | short      | short | short2
(1 row)
I have data stored in a short data type. The data is in 2's complement form.
Now when I try to print the data using %04x, data with MSB=0 prints fine; e.g. if data=740, the print I get is 0740.
But when MSB=1, I am unable to get a proper print. E.g. if data=842, the print I get is fffff842.
I want the data truncated to 4 bytes, so the expected output is f842.
Either declare your data as a type which is 16 bits long, or make sure the printing function uses the right format for a 16-bit value. Or use your current type, but do a bitwise AND with 0xffff. What you can do depends on the language you're using, really.
But whichever way you go, check your assumptions again. There seem to be a few issues in your question:
2s-complement applies to signed numbers only. There are no negative numbers in your question.
Assuming you mean C's short - it doesn't have to be 16 bits long.
"I get is fffff842 I want the data truncated to 4 bytes" - fffff842 is 4 bytes long. f842 is 2 bytes long.
The 2-byte-long value 842 does not have the MSB set.
I'm assuming C (or possibly C++) as the language here.
Because of the default argument promotions involved when calling a variable argument function (such as printf), your use of a short will result in an integer promotion, which states that "If an int can represent all values of the original type (as restricted by the width, for a bit-field), the value is converted to an int".
A short is converted to an int by means of sign-extension, and 0xf842 sign-extended to 32 bits is 0xfffff842.
You can use a bitwise AND to mask off everything above the least significant 16 bits:
printf("%04x", data & 0xffff);
You could also add the h length specifier to state that you only want to print an (unsigned) short worth of bits from an int:
printf("%04hx", data);
Suppose I have a value of type json, say y. One may obtain such a value through, for example, obj->'key', or any function that returns values of type json.
This value, when cast to text, includes quotation marks i.e. "y" instead of y. In cases where using json types is unavoidable, this poses a problem, especially when we wish to compare the value with literal strings e.g.
select foo(x)='bar';
The API Brainstorm page suggests a from_json function that will intelligently unwrap JSON strings, but I doubt that is available yet. In the meantime, how can one convert JSON strings to text without the quotation marks?
Text:
To extract a value as text, use #>>:
SELECT to_json('foo'::text) #>> '{}';
From: Postgres: How to convert a json string to text?
PostgreSQL doc page: https://www.postgresql.org/docs/11/functions-json.html
So it addresses your question specifically, but it doesn't work with any other types, like integer or float for example. The #> operator will not work for other types either.
Numbers:
Because JSON only has one numeric type, "number", and has no concept of int or float, there's no obvious way to cast a JSON type to a "correct" numeric type. It's best to know the schema of your JSON, extract the text and then cast to the correct type:
SELECT (('{"a":2.01}'::json)->'a'#>>'{}')::float
PostgreSQL does however have support for "arbitrary precision numbers" ("up to 131072 digits before the decimal point; up to 16383 digits after the decimal point") with its "numeric" type. JSON also supports 'e' notation for large numbers.
Try this to test them both out:
SELECT (('{"a":2e99999}'::json)->'a'#>>'{}')::numeric
The ->> operator unwraps quotation marks correctly. In order to take advantage of that operator, we wrap up our value inside an array, and then convert that to json.
CREATE OR REPLACE FUNCTION json2text(IN from_json JSON)
RETURNS TEXT AS $$
BEGIN
RETURN to_json(ARRAY[from_json])->>0;
END; $$
LANGUAGE plpgsql;
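Called directly, the function strips the quotes as intended; for example (the literal is just a sample value):
SELECT json2text('"hello world"'::json);  -- returns: hello world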
For completeness, we provide a CAST that makes use of the function above.
CREATE CAST (json AS text) WITH FUNCTION json2text(json) AS ASSIGNMENT;