I was wondering how PostgreSQL converts floating point (float4) values to NUMERIC.
I chose 0.1 as a test value. This value is not accurately representable in base 2; see https://float.exposed/0x3dcccccd for a visualization. So the value actually stored for the float4, 0x3dcccccd in hex, is not 0.1 but 0.100000001490116119385.
However, I do not understand the output of the following commands:
mydb=# SELECT '0.100000001490116119385'::float4::numeric(50,50);
numeric
------------------------------------------------------
0.10000000000000000000000000000000000000000000000000
(1 row)
mydb=# SELECT '0.1'::float4::numeric(50,50);
numeric
------------------------------------------------------
0.10000000000000000000000000000000000000000000000000
(1 row)
mydb=# SELECT '0.10000000000000000000000000000000001'::float4::numeric(50,50);
numeric
------------------------------------------------------
0.10000000000000000000000000000000000000000000000000
(1 row)
Why (and how) do I get 0.1 as the result in all cases? Neither 0.1 nor 0.10000000000000000000000000000000001 can be stored accurately in a float4. The value that can be stored, 0.100000001490116119385, is the closest float4 value in both cases, but that's not what I get when casting to numeric. Why?
From the source code:
Datum
float4_numeric(PG_FUNCTION_ARGS)
{
    float4      val = PG_GETARG_FLOAT4(0);
    Numeric     res;
    NumericVar  result;
    char        buf[FLT_DIG + 100];

    if (isnan(val))
        PG_RETURN_NUMERIC(make_result(&const_nan));

    if (isinf(val))
    {
        if (val < 0)
            PG_RETURN_NUMERIC(make_result(&const_ninf));
        else
            PG_RETURN_NUMERIC(make_result(&const_pinf));
    }

    snprintf(buf, sizeof(buf), "%.*g", FLT_DIG, val);

    init_var(&result);

    /* Assume we need not worry about leading/trailing spaces */
    (void) set_var_from_str(buf, buf, &result);

    res = make_result(&result);
    free_var(&result);

    PG_RETURN_NUMERIC(res);
}
Further explanation of Frank Heikens's answer
The idea in the source code: take the float4 input, convert it to a character string, then convert that string to numeric.
The key function is
snprintf(buf, sizeof(buf), "%.*g", FLT_DIG, val);
FLT_DIG is equal to 6.
https://pubs.opengroup.org/onlinepubs/7908799/xsh/fprintf.html
An optional precision that gives the minimum number of digits to
appear for the d, i, o, u, x and X conversions; the number of digits
to appear after the radix character for the e, E and f conversions;
the maximum number of significant digits for the g and G conversions;
or the maximum number of bytes to be printed from a string in s and S
conversions. The precision takes the form of a period (.) followed
either by an asterisk (*), described below, or an optional decimal
digit string, where a null digit string is treated as 0. If a
precision appears with any other conversion character, the behaviour
is undefined.
So in the float → text → numeric process, the text representation keeps at most 6 significant digits (FLT_DIG), which is why every value that rounds to 0.1 at that precision comes out as exactly 0.1.
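To see the effect of that call in isolation, here is a minimal standalone C sketch (not PostgreSQL code, just the same formatting step):
#include <float.h>
#include <stdio.h>

int main(void)
{
    /* 0.1f is actually stored as 0.100000001490116119385 */
    float val = 0.1f;
    char  buf[FLT_DIG + 100];

    /* same call as float4_numeric: at most FLT_DIG (6) significant digits */
    snprintf(buf, sizeof(buf), "%.*g", FLT_DIG, val);
    printf("%s\n", buf);    /* prints: 0.1 */
    return 0;
}
Because %g drops trailing zeros, rounding 0.100000001490116119385 to 6 significant digits yields exactly "0.1", which set_var_from_str then parses into the numeric value.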
snprintf example: https://legacy.cplusplus.com/reference/cstdio/snprintf/
Related post: Avoid trailing zeroes in printf()
I am attempting to compare one character of a string to see if it is my delimiter character. However, when I execute the following code, the value that gets placed in the variable valstring is a number representing the byte that was converted to a string, not the character itself. For example, the value may be the string '58'.
Through my testing in CoDeSys using the debugging features I know that the string sReadLine contains a valid string of characters. I'm just not sure of the syntax to single only one of them out; the sReadLine[valPos + i] part is what I don't understand.
sReadLine : STRING;
valstring : STRING;
i : INT;
valPos : INT;
FOR i := 0 TO 20 DO
    IF BYTE_TO_STRING(sReadLine[valPos + i]) = '"' THEN
        EXIT;
    END_IF
    valstring := CONCAT(STR1 := valstring, STR2 := BYTE_TO_STRING(sReadLine[valPos + i]));
END_FOR
I think you have multiple choices.
1) Use built-in string functions instead. You can use the MID function to get part of a string. So in your case, something like "get one character at position valPos + i from sReadLine":
FOR i := 0 TO 20 DO
    IF MID(sReadLine, 1, valPos + i) = '"' THEN
        EXIT;
    END_IF
    valstring := CONCAT(STR1 := valstring, STR2 := MID(sReadLine, 1, valPos + i));
END_FOR
2) Convert the ASCII byte to a string. In TwinCAT systems, there is a function F_ToCHR. It takes an ASCII byte in and returns the character as a string. I can't find something like that for Codesys, but I'm sure there is a solution in some library. Please note that this won't work in Codesys without modifications:
FOR i := 0 TO 20 DO
    IF F_ToCHR(sReadLine[valPos + i]) = '"' THEN
        EXIT;
    END_IF
    valstring := CONCAT(STR1 := valstring, STR2 := F_ToCHR(sReadLine[valPos + i]));
END_FOR
3) The OSCAT library seems to have a CHR_TO_STRING function. You could use this instead of F_ToCHR in step 2.
4) You can use pointers to copy the ASCII byte to a string array (MemCpy) and add a string end character. This needs some knowledge of pointers etc.; see the Codesys forum for examples.
5) You can write a helper function similar to step 2 yourself. Check the example from the Codesys forums. That example doesn't include all characters, so it needs to be updated, and it's not particularly elegant.
When you convert a byte to a string with BYTE_TO_STRING, what gets converted is the decimal representation of the byte, not the character it encodes.
The byte is the ASCII code of a character (the ASCII decimal value of ':' is 58).
So if you want to concatenate the characters instead of their ASCII decimal representations, you need another function:
valstring := CONCAT(STR1 := valstring, STR2 := F_ToCHR(sReadLine[valPos + i]));
EDIT:
Like Quirzo, I couldn't find a similar F_ToCHR function for Codesys, but you could easily build one yourself.
For example:
Declaration Part:
FUNCTION F_ASCII_TO_STRING : STRING
VAR_INPUT
    input : BYTE;
END_VAR
VAR
    ascii : ARRAY[0..255] OF STRING(1) :=
    [
        33(' '), '!' , '"' , '#' ,
        '$$' , '%' , '&' , '$'' ,
        '('  , ')' , '*' , '+' ,
        ','  , '-' , '.' , '/' ,
        '0'  , '1' , '2' , '3' ,
        '4'  , '5' , '6' , '7' ,
        '8'  , '9' , ':' , ';' ,
        '<'  , '=' , '>' , '?' ,
        '@'  , 'A' , 'B' , 'C' ,
        'D'  , 'E' , 'F' , 'G' ,
        'H'  , 'I' , 'J' , 'K' ,
        'L'  , 'M' , 'N' , 'O' ,
        'P'  , 'Q' , 'R' , 'S' ,
        'T'  , 'U' , 'V' , 'W' ,
        'X'  , 'Y' , 'Z' , '[' ,
        '\'  , ']' , '^' , '_' ,
        '`'  , 'a' , 'b' , 'c' ,
        'd'  , 'e' , 'f' , 'g' ,
        'h'  , 'i' , 'j' , 'k' ,
        'l'  , 'm' , 'n' , 'o' ,
        'p'  , 'q' , 'r' , 's' ,
        't'  , 'u' , 'v' , 'w' ,
        'x'  , 'y' , 'z' , '{' ,
        '|'  , '}' , '~'
    ];
END_VAR
Implementation part:
F_ASCII_TO_STRING := ascii[input];
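It can then be used in place of F_ToCHR in the loop above, for example (a sketch):
valstring := CONCAT(STR1 := valstring, STR2 := F_ASCII_TO_STRING(sReadLine[valPos + i]));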
As Sergey said, this might not be an optimal solution to your problem. It seems like you want to extract into valstring the longest substring of sReadLine, starting at position valPos, that does not contain the character ".
In your implementation, for each valid input character, CONCAT() has to search for the end of valstring before appending a single character to it.
You should instead decompose the problem and use two standard functions:
FIND() --> to get the position of the next character " (or to know if there is none),
MID() --> to create a string from initial position up to before the first character " (or the end of the input string).
That way, only two loops remain, each one hidden inside these functions.
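For illustration, a minimal Structured Text sketch of that decomposition (variable names are illustrative; valPos is assumed to be a 1-based position, and FIND returns 0 when no " is found):
VAR
    sReadLine : STRING;
    valstring : STRING;
    valPos    : INT := 1;   (* 1-based start position; illustrative *)
    sTail     : STRING;
    iQuote    : INT;
END_VAR

(* The part of the line starting at valPos *)
sTail  := MID(sReadLine, LEN(sReadLine) - valPos + 1, valPos);
(* Position of the next '"' within that part, 0 if there is none *)
iQuote := FIND(sTail, '"');
IF iQuote > 0 THEN
    valstring := MID(sTail, iQuote - 1, 1);   (* everything before the quote *)
ELSE
    valstring := sTail;                       (* no quote: take the rest *)
END_IF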
I am working with Swift's Decimal type, trying to ensure that a user-entered String is a valid Decimal.
I have two String values in my Playground file, each including a letter. One of the values contains a letter at the start, while the other contains a letter at the end. I initialize a Decimal using each value, and only one Decimal initialization fails: the Decimal initialized with the value that contains the letter at the beginning.
Why does the Decimal initialized with a value that contains a letter at the end return a valid Decimal? I expect nil to be returned.
Attached is a screenshot from my Playground file.
It works this way because Decimal accepts any numeric characters that appear before the first letter. The first letter acts as a terminator, and anything after it is ignored. So in your example:
12a = 12 ( a is the terminator in position 3 )
a12 = nil ( a is the terminator in position 1 )
If you want both to be invalid whenever the string contains a letter, you could use Float instead.
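A minimal Playground sketch of that behaviour, using the values from the question:
import Foundation

// Decimal(string:) scans digits until it reaches a non-numeric character.
Decimal(string: "12a")   // 12  ("a" terminates the scan at position 3)
Decimal(string: "a12")   // nil (the terminator is at position 1, so nothing was scanned)

// Float's failable initializer rejects the whole string instead.
Float("12a")             // nil
Float("a12")             // nil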
I have a string variable and need to convert all non-digit characters to spaces (" "). I have a problem with Unicode characters: characters outside the basic charset are converted to invalid characters. See the code below for an example.
Is there another way to achieve the same result, using a procedure that does not choke on special Unicode characters?
new file.
set unicode = yes.
show unicode.
data list free
/T (a10).
begin data
1234
5678
absd
12as
12(a
12(vi
12(vī
12āčž
end data.
string Z (a10).
comp Z = T.
loop #k = 1 to char.len(Z).
if ~range(char.sub(Z, #k, 1), "0", "9") sub(Z, #k, 1) = " ".
end loop.
comp Z = normalize(Z).
comp len = char.len(Z).
list var = all.
exe.
The result:
T          Z             len
1234       1234            4
5678       5678            4
absd                       0
12as       12              2
12(a       12              2
12(vi      12              2
12(vī      12 �            6
>Warning # 649
>The first argument to the CHAR.SUBSTR function contains invalid characters.
>Command line: 1939 Current case: 8 Current splitfile group: 1
12āčž      12 �ž           7
Number of cases read: 8 Number of cases listed: 8
The substr function should not be used on the left hand side of an expression in Unicode mode, because the replacement character may not be the same number of bytes as the character(s) being replaced. Instead, use the replace function on the right hand side.
The corrupted characters you are seeing are due to this size mismatch.
How about, instead of replacing non-numeric characters, you cycle through and pull out the numeric characters and rebuild Z? (Note my version here predates the CHAR. string functions.)
data list free
/T (a10).
begin data
1234
5678
absd
12as
12(a
12(vi
12(vī
12āčž
12as23
end data.
STRING Z (a10).
STRING #temp (A1).
COMPUTE #len = LENGTH(RTRIM(T)).
LOOP #i = 1 to #len.
COMPUTE #temp = SUBSTR(T,#i,1).
DO IF INDEX('0123456789',#temp) > 0.
COMPUTE Z = CONCAT(SUBSTR(Z,1,#i-1),#temp).
ELSE.
COMPUTE Z = CONCAT(SUBSTR(Z,1,#i-1)," ").
END IF.
END LOOP.
EXECUTE.
Is there a format string to truncate a number to a specific number of digits?
For example, I would like to truncate any number longer than 5 digits down to 3 digits.
132456 -> 132
5000000 -> 500
@Erik: Are format specifiers like %2d specific to a language? I actually want to use this in JavaScript.
Pseudo-Code
Function returning a String, receiving a String representing a Number as a parameter
    IF the String has more than 5 characters
        RETURN a substring containing the first 3 characters
    ELSE
        RETURN the String received as a parameter
    END IF
END Function
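Since you mention JavaScript, here is a minimal sketch of that pseudo-code (the function name is illustrative):
function truncateNumber(value) {
  // illustrative helper: work on the decimal string form of the number
  const s = String(value);
  // more than 5 digits: keep only the first 3 characters
  return s.length > 5 ? s.slice(0, 3) : s;
}

truncateNumber(132456);   // "132"
truncateNumber(5000000);  // "500"
truncateNumber(12345);    // "12345" (5 digits or fewer are left unchanged)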
I assume you refer to printf format strings. I couldn't find anything that will truncate an integer argument (i.e. %d). But you can limit the length of a string argument by using a string conversion and specifying lengths via "%<MinLength>.<MaxLength>s".
So in your case you could turn your number arguments into strings and then use "%3.3s".
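For example, a minimal C sketch:
#include <stdio.h>

int main(void)
{
    char buf[32];

    /* turn the number into a string, then cap the output at 3 characters */
    snprintf(buf, sizeof(buf), "%d", 132456);
    printf("%3.3s\n", buf);    /* prints: 132 */

    snprintf(buf, sizeof(buf), "%d", 5000000);
    printf("%3.3s\n", buf);    /* prints: 500 */

    return 0;
}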