How does PostgreSQL cast float to numeric?

I was wondering how PostgreSQL converts floating-point (float4) values to NUMERIC.
I chose 0.1 as a test value. This value is not exactly representable in base 2 (see https://float.exposed/0x3dcccccd for a visualization), so the stored float4 value 0x3dcccccd is actually not 0.1 but 0.100000001490116119385.
However, I do not understand the output of the following commands:
mydb=# SELECT '0.100000001490116119385'::float4::numeric(50,50);
numeric
------------------------------------------------------
0.10000000000000000000000000000000000000000000000000
(1 row)
mydb=# SELECT '0.1'::float4::numeric(50,50);
numeric
------------------------------------------------------
0.10000000000000000000000000000000000000000000000000
mydb=# SELECT '0.10000000000000000000000000000000001'::float4::numeric(50,50);
numeric
------------------------------------------------------
0.10000000000000000000000000000000000000000000000000
Why (and how) do I get 0.1 as a result in all cases? Neither 0.1 nor 0.10000000000000000000000000000000001 can be stored exactly in a float4. The value that is actually stored is 0.100000001490116119385, which is also the closest float4 value in both cases, but that's not what I get when casting to numeric. Why?

From the source code:
Datum
float4_numeric(PG_FUNCTION_ARGS)
{
    float4      val = PG_GETARG_FLOAT4(0);
    Numeric     res;
    NumericVar  result;
    char        buf[FLT_DIG + 100];

    if (isnan(val))
        PG_RETURN_NUMERIC(make_result(&const_nan));

    if (isinf(val))
    {
        if (val < 0)
            PG_RETURN_NUMERIC(make_result(&const_ninf));
        else
            PG_RETURN_NUMERIC(make_result(&const_pinf));
    }

    snprintf(buf, sizeof(buf), "%.*g", FLT_DIG, val);

    init_var(&result);

    /* Assume we need not worry about leading/trailing spaces */
    (void) set_var_from_str(buf, buf, &result);

    res = make_result(&result);
    free_var(&result);

    PG_RETURN_NUMERIC(res);
}

Further explanation of Frank Heikens's answer
The idea of the source code: take the float4 input, convert it to a character string, then convert that string to numeric.
The key call is
snprintf(buf, sizeof(buf), "%.*g", FLT_DIG, val);
where FLT_DIG equals 6.
https://pubs.opengroup.org/onlinepubs/7908799/xsh/fprintf.html
An optional precision that gives the minimum number of digits to
appear for the d, i, o, u, x and X conversions; the number of digits
to appear after the radix character for the e, E and f conversions;
the maximum number of significant digits for the g and G conversions;
or the maximum number of bytes to be printed from a string in s and S
conversions. The precision takes the form of a period (.) followed
either by an asterisk (*), described below, or an optional decimal
digit string, where a null digit string is treated as 0. If a
precision appears with any other conversion character, the behaviour
is undefined.
So in the float → text → numeric process, the intermediate text produced by %g carries at most FLT_DIG = 6 significant digits, which is why all three inputs above come out as 0.1.
snprintf reference: https://legacy.cplusplus.com/reference/cstdio/snprintf/
Further reading: Avoid trailing zeroes in printf()
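To see this step in isolation, here is a minimal standalone C sketch of just the formatting call (the value list and buffer size are illustrative, not taken from the PostgreSQL source):
#include <stdio.h>
#include <float.h>   /* FLT_DIG */

int main(void)
{
    /* All three literals from the question round to the same float4
       bit pattern (0x3dcccccd). */
    float values[] = {0.1f,
                      0.100000001490116119385f,
                      0.10000000000000000000000000000000001f};
    char buf[FLT_DIG + 100];

    for (int i = 0; i < 3; i++)
    {
        /* Same call as in float4_numeric(): at most FLT_DIG (6)
           significant digits, so each value prints as "0.1". */
        snprintf(buf, sizeof(buf), "%.*g", FLT_DIG, values[i]);
        printf("%s\n", buf);
    }
    return 0;
}
All three lines of output read 0.1, which is exactly the text that set_var_from_str() then parses into the numeric value.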

Related

What are the supported Swift String format specifiers?

In Swift, I can format a String with format specifiers:
// This will return "0.120"
String(format: "%.03f", 0.12)
But the official documentation does not give any information or link regarding the supported format specifiers or how to build a template similar to "%.03f": https://developer.apple.com/documentation/swift/string/3126742-init
It only says:
Returns a String object initialized by using a given format string as a template into which the remaining argument values are substituted.
The format specifiers for String formatting in Swift are the same as those in Objective-C NSString format, itself identical to those for CFString format and are buried deep in the archives of Apple Documentation (same content for both pages, both originally from year 2002 or older):
https://developer.apple.com/library/archive/documentation/CoreFoundation/Conceptual/CFStrings/formatSpecifiers.html
https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/Strings/Articles/formatSpecifiers.html
But this documentation page itself is incomplete; for instance, the flags, the precision specifiers and the width specifiers aren't mentioned. Actually, it claims to follow the IEEE printf specification (Issue 6, 2004 Edition), itself aligned with the ISO C standard. So those specifiers should be identical to what we have with C printf, with the addition of the %@ specifier for Objective-C objects, and the addition of the poorly documented %D, %U, %O specifiers and the q length modifier.
Specifiers
Each conversion specification is introduced by the '%' character or by the character sequence "%n$".
n is the index of the parameter, like in:
String(format: "%2$# %1$#", "world", "Hello")
Format Specifiers
%@ Objective-C object, printed as the string returned by descriptionWithLocale: if available, or description otherwise.
Actually, you may also use some Swift types, but they must be defined inside the standard library in order to conform to the CVarArg protocol, and I believe they need to support bridging to Objective-C objects: https://developer.apple.com/documentation/foundation/object_runtime/classes_bridged_to_swift_standard_library_value_types.
String(format: "%@", ["Hello", "world"])
%% '%' character.
String(format: "100%% %@", true.description)
%d, %i Signed 32-bit integer (int).
String(format: "from %d to %d", Int32.min, Int32.max)
%u, %U, %D Unsigned 32-bit integer (unsigned int).
String(format: "from %u to %u", UInt32.min, UInt32.max)
%x Unsigned 32-bit integer (unsigned int), printed in hexadecimal using the digits 0–9 and lowercase a–f.
String(format: "from %x to %x", UInt32.min, UInt32.max)
%X Unsigned 32-bit integer (unsigned int), printed in hexadecimal using the digits 0–9 and uppercase A–F.
String(format: "from %X to %X", UInt32.min, UInt32.max)
%o, %O Unsigned 32-bit integer (unsigned int), printed in octal.
String(format: "from %o to %o", UInt32.min, UInt32.max)
%f 64-bit floating-point number (double), printed in decimal notation. Produces "inf", "infinity", or "nan".
String(format: "from %f to %f", Double.leastNonzeroMagnitude, Double.greatestFiniteMagnitude)
%F 64-bit floating-point number (double), printed in decimal notation. Produces "INF", "INFINITY", or "NAN".
String(format: "from %F to %F", Double.leastNonzeroMagnitude, Double.greatestFiniteMagnitude)
%e 64-bit floating-point number (double), printed in scientific notation using a lowercase e to introduce the exponent.
String(format: "from %e to %e", Double.leastNonzeroMagnitude, Double.greatestFiniteMagnitude)
%E 64-bit floating-point number (double), printed in scientific notation using an uppercase E to introduce the exponent.
String(format: "from %E to %E", Double.leastNonzeroMagnitude, Double.greatestFiniteMagnitude)
%g 64-bit floating-point number (double), printed in the style of %e if the exponent is less than –4 or greater than or equal to the precision, in the style of %f otherwise.
String(format: "from %g to %g", Double.leastNonzeroMagnitude, Double.greatestFiniteMagnitude)
%G 64-bit floating-point number (double), printed in the style of %E if the exponent is less than –4 or greater than or equal to the precision, in the style of %f otherwise.
String(format: "from %G to %G", Double.leastNonzeroMagnitude, Double.greatestFiniteMagnitude)
%c 8-bit unsigned character (unsigned char).
String(format: "from %c to %c", "a".utf8.first!, "z".utf8.first!)
%C 16-bit UTF-16 code unit (unichar).
String(format: "from %C to %C", "爱".utf16.first!, "终".utf16.first!)
%s Null-terminated array of 8-bit unsigned characters.
"Hello world".withCString {
String(format: "%s", $0)
}
%S Null-terminated array of 16-bit UTF-16 code units.
"Hello world".withCString(encodedAs: UTF16.self) {
String(format: "%S", $0)
}
%p Void pointer (void *), printed in hexadecimal with the digits 0–9 and lowercase a–f, with a leading 0x.
var hello = "world"
withUnsafePointer(to: &hello) {
    String(format: "%p", $0)
}
%n The argument shall be a pointer to an integer into which is written the number of bytes written to the output so far by this call to one of the fprintf() functions.
The n format specifier seems unsupported in Swift 4+
%a 64-bit floating-point number (double), printed in scientific notation with a leading 0x and one hexadecimal digit before the decimal point using a lowercase p to introduce the exponent.
String(format: "from %a to %a", Double.leastNonzeroMagnitude, Double.greatestFiniteMagnitude)
%A 64-bit floating-point number (double), printed in scientific notation with a leading 0X and one hexadecimal digit before the decimal point using an uppercase P to introduce the exponent.
String(format: "from %A to %A", Double.leastNonzeroMagnitude, Double.greatestFiniteMagnitude)
Flags
' The integer portion of the result of a
decimal conversion ( %i, %d, %u, %f, %F, %g, or %G ) shall be
formatted with thousands' grouping characters. For other conversions
the behavior is undefined. The non-monetary grouping character is
used.
The ' flag seems unsupported in Swift 4+
- The result of the conversion shall be left-justified within the field. The conversion is right-justified if
this flag is not specified.
String(format: "from %-12f to %-12d.", Double.leastNonzeroMagnitude, Int32.max)
+ The result of a signed conversion shall always begin with a sign ( '+' or '-' ). The conversion shall begin
with a sign only when a negative value is converted if this flag is
not specified.
String(format: "from %+f to %+d", Double.leastNonzeroMagnitude, Int32.max)
<space> If the first character of a signed
conversion is not a sign or if a signed conversion results in no
characters, a <space> shall be prefixed to the result. This means
that if the <space> and '+' flags both appear, the <space> flag
shall be ignored.
String(format: "from % d to % d.", Int32.min, Int32.max)
# Specifies that the value is to be converted to an alternative form. For o conversion, it increases the precision
(if necessary) to force the first digit of the result to be zero. For
x or X conversion specifiers, a non-zero result shall have 0x (or 0X)
prefixed to it. For a, A, e, E, f, F, g , and G conversion specifiers,
the result shall always contain a radix character, even if no digits
follow the radix character. Without this flag, a radix character
appears in the result of these conversions only if a digit follows it.
For g and G conversion specifiers, trailing zeros shall not be removed
from the result as they normally are. For other conversion specifiers,
the behavior is undefined.
String(format: "from %#a to %#x.", Double.leastNonzeroMagnitude, UInt32.max)
0 For d, i, o, u, x, X, a, A, e, E, f, F, g,
and G conversion specifiers, leading zeros (following any indication
of sign or base) are used to pad to the field width; no space padding
is performed. If the '0' and '-' flags both appear, the '0' flag is
ignored. For d, i, o, u, x, and X conversion specifiers, if a
precision is specified, the '0' flag is ignored. If the '0' and the ' (apostrophe)
flags both appear, the grouping characters are inserted before zero
padding. For other conversions, the behavior is undefined.
String(format: "from %012f to %012d.", Double.leastNonzeroMagnitude, Int32.max)
Width modifiers
If the converted value has fewer bytes than the field width, it shall be padded with spaces by default on the left; it shall be padded on the right if the left-adjustment flag ( '-' ) is given to the field width. The field width takes the form of an asterisk ( '*' ) or a decimal integer.
String(format: "from %12f to %*d.", Double.leastNonzeroMagnitude, 12, Int32.max)
Precision modifiers
An optional precision that gives the minimum number of digits to appear for the d, i, o, u, x, and X conversion specifiers; the number of digits to appear after the radix character for the a, A, e, E, f, and F conversion specifiers; the maximum number of significant digits for the g and G conversion specifiers; or the maximum number of bytes to be printed from a string in the s and S conversion specifiers. The precision takes the form of a period ( '.' ) followed either by an asterisk ( '*' ) or an optional decimal digit string, where a null digit string is treated as zero. If a precision appears with any other conversion specifier, the behavior is undefined.
String(format: "from %.12f to %.*d.", Double.leastNonzeroMagnitude, 12, Int32.max)
Length modifiers
h Length modifier specifying that a following
d, o, u, x, or X conversion specifier applies to a short or unsigned
short argument.
String(format: "from %hd to %hu", CShort.min, CUnsignedShort.max)
hh Length modifier specifying that a following
d, o, u, x, or X conversion specifier applies to a signed char or
unsigned char argument.
String(format: "from %hhd to %hhu", CChar.min, CUnsignedChar.max)
l Length modifier specifying that a following
d, o, u, x, or X conversion specifier applies to a long or unsigned
long argument.
String(format: "from %ld to %lu", CLong.min, CUnsignedLong.max)
ll, q Length modifiers specifying that a
following d, o, u, x, or X conversion specifier applies to a long long
or unsigned long long argument.
String(format: "from %lld to %llu", CLongLong.min, CUnsignedLongLong.max)
L Length modifier specifying that a following
a, A, e, E, f, F, g, or G conversion specifier applies to a long
double argument.
I wasn't able to pass a CLongDouble argument to format in Swift 4+
z Length modifier specifying that a following
d, o, u, x, or X conversion specifier applies to a size_t.
String(format: "from %zd to %zu", size_t.min, size_t.max)
t Length modifier specifying that a following
d, o, u, x, or X conversion specifier applies to a ptrdiff_t.
String(format: "from %td to %tu", ptrdiff_t.min, ptrdiff_t.max)
j Length modifier specifying that a following
d, o, u, x, or X conversion specifier applies to a intmax_t or
uintmax_t argument.
String(format: "from %jd to %ju", intmax_t.min, uintmax_t.max)
Swift probably supports everything below, which summarizes the format specifiers supported by NSString (source).
The format specifiers supported by the NSString formatting methods and CFString formatting functions follow the IEEE printf specification; the specifiers are summarized in Table 1 below. Note that you can also use the "n$" positional specifiers such as %1$@ %2$s. For more details, see the IEEE printf specification. You can also use these format specifiers with the NSLog function.
Format Specifiers (Table 1)
%@ Objective-C object, printed as the string returned by descriptionWithLocale: if available, or description otherwise. Also works with CFTypeRef objects, returning the result of the CFCopyDescription function.
%% '%' character.
%d, %D Signed 32-bit integer (int).
%u, %U Unsigned 32-bit integer (unsigned int).
%x Unsigned 32-bit integer (unsigned int), printed in hexadecimal using the digits 0–9 and lowercase a–f.
%X Unsigned 32-bit integer (unsigned int), printed in hexadecimal using the digits 0–9 and uppercase A–F.
%o, %O Unsigned 32-bit integer (unsigned int), printed in octal.
%f 64-bit floating-point number (double).
%e 64-bit floating-point number (double), printed in scientific notation using a lowercase e to introduce the exponent.
%E 64-bit floating-point number (double), printed in scientific notation using an uppercase E to introduce the exponent.
%g 64-bit floating-point number (double), printed in the style of %e if the exponent is less than –4 or greater than or equal to the precision, in the style of %f otherwise.
%G 64-bit floating-point number (double), printed in the style of %E if the exponent is less than –4 or greater than or equal to the precision, in the style of %f otherwise.
%c 8-bit unsigned character (unsigned char).
%C 16-bit UTF-16 code unit (unichar).
%s Null-terminated array of 8-bit unsigned characters. Because the %s specifier causes the characters to be interpreted in the system default encoding, the results can be variable, especially with right-to-left languages. For example, with RTL, %s inserts direction markers when the characters are not strongly directional. For this reason, it's best to avoid %s and specify encodings explicitly.
%S Null-terminated array of 16-bit UTF-16 code units.
%p Void pointer (void *), printed in hexadecimal with the digits 0–9 and lowercase a–f, with a leading 0x.
%a 64-bit floating-point number (double), printed in scientific notation with a leading 0x and one hexadecimal digit before the decimal point using a lowercase p to introduce the exponent.
%A 64-bit floating-point number (double), printed in scientific notation with a leading 0X and one hexadecimal digit before the decimal point using an uppercase P to introduce the exponent.
%F 64-bit floating-point number (double), printed in decimal notation.
Length modifiers (Table 2)
h Length modifier specifying that a following d, o, u, x, or X conversion specifier applies to a short or unsigned short argument.
hh Length modifier specifying that a following d, o, u, x, or X conversion specifier applies to a signed char or unsigned char argument.
l Length modifier specifying that a following d, o, u, x, or X conversion specifier applies to a long or unsigned long argument.
ll, q Length modifiers specifying that a following d, o, u, x, or X conversion specifier applies to a long long or unsigned long long argument.
L Length modifier specifying that a following a, A, e, E, f, F, g, or G conversion specifier applies to a long double argument.
z Length modifier specifying that a following d, o, u, x, or X conversion specifier applies to a size_t.
t Length modifier specifying that a following d, o, u, x, or X conversion specifier applies to a ptrdiff_t.
j Length modifier specifying that a following d, o, u, x, or X conversion specifier applies to an intmax_t or uintmax_t argument.
Length modifier example:
#include <limits.h>
// ...
NSString *text = [NSString stringWithFormat:@"Signed %hhd, Unsigned %hhu", CHAR_MIN, UCHAR_MAX];
Or in Swift:
let text = String(format: "Signed %hhd, Unsigned %hhu", CChar.min, CUnsignedChar.max)
Platform Dependencies
OS X uses several data types (NSInteger, NSUInteger, CGFloat, and CFIndex) to provide a consistent means of representing values in 32- and 64-bit environments. In a 32-bit environment, NSInteger and NSUInteger are defined as int and unsigned int, respectively. In 64-bit environments, NSInteger and NSUInteger are defined as long and unsigned long, respectively. To avoid the need to use different printf-style type specifiers depending on the platform, you can use the specifiers shown in Table 3. Note that in some cases you may have to cast the value.
Table 3  Format specifiers for data types
NSInteger: %ld or %lx. Cast the value to long.
NSUInteger: %lu or %lx. Cast the value to unsigned long.
CGFloat: %f or %g. %f works for floats and doubles when formatting; but note the technique described below for scanning.
CFIndex: %ld or %lx. The same as NSInteger.
pointer: %p or %zx. %p adds 0x to the beginning of the output. If you don't want that, use %zx and no typecast.
The following example illustrates the use of %ld to format an NSInteger and the use of a cast.
NSInteger i = 42;
printf("%ld\n", (long)i);
In addition to the considerations mentioned in Table 3, there is one extra case with scanning: you must distinguish the types for float and double. You should use %f for float and %lf for double. If you need to use scanf (or a variant thereof) with CGFloat, switch to double instead, and copy the double to CGFloat.
CGFloat imageWidth;
double tmp;
sscanf (str, "%lf", &tmp);
imageWidth = tmp;
It is important to remember that %lf does not represent CGFloat correctly on either 32- or 64-bit platforms.
This is unlike %ld, which works for long in all cases.
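As a plain-C illustration of that scanning rule (CGFloat is simulated here with a typedef for double, since this is not an Objective-C translation unit):
#include <stdio.h>

typedef double CGFloatLike;   /* stand-in for CGFloat on a 64-bit platform */

int main(void)
{
    const char *str = "123.456";
    float  f;
    double d, tmp;
    CGFloatLike imageWidth;

    sscanf(str, "%f", &f);    /* %f  scans into a float  */
    sscanf(str, "%lf", &d);   /* %lf scans into a double */

    /* For CGFloat, scan into a double and then assign. */
    sscanf(str, "%lf", &tmp);
    imageWidth = tmp;

    /* When printing, %f handles both: float arguments are promoted to double. */
    printf("%f %f %f\n", f, d, imageWidth);
    return 0;
}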
Swift 5
let timeFormatter = DateFormatter()
let dateFormat = DateFormatter.dateFormat(fromTemplate: "jm", options: 0, locale: Locale.current)!
timeFormatter.dateFormat = dateFormat

Converting number in scientific notation to int

Could someone explain why I cannot use int() to convert an integer number represented as a string in scientific notation into a Python int?
For example this does not work:
print int('1e1')
But this does:
print int(float('1e1'))
print int(1e1) # Works
Why does int not recognise the string as an integer? Surely it's as simple as checking the sign of the exponent?
Behind the scenes, a number in scientific notation is always represented as a float internally. The reason is the varying value range: an integer only maps to a fixed range, say 2^32 values, whereas the scientific representation, like the floating-point representation, consists of a significand and an exponent. For further details, see https://en.wikipedia.org/wiki/Floating_point.
You cannot cast a scientific number representation as string to integer directly.
print int(1e1) # Works
Works because 1e1 as a number is already a float.
>>> type(1e1)
<type 'float'>
Back to your question: We want to get an integer from float or scientific string. Details: https://docs.python.org/2/reference/lexical_analysis.html#integers
>>> int("13.37")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: '13.37'
For float or scientific representations you have to go through float as an intermediate step.
Very simple solution:
print(int(float(1e1)))
Steps:
1. First convert the scientific value to a float.
2. Then convert that float value to an int.
3. You finally get the int data type.
Enjoy.
Because in Python (at least in 2.x, since I do not use Python 3.x), int() behaves differently on strings and on numeric values. If you pass in a string, Python will try to parse it as a base-10 int:
int ("077")
>> 77
But if you pass in a valid numeric value, Python interprets it according to its own base and type and converts it to a base-10 int. Here Python first evaluates 077 as a base-8 literal (which is 63 in base 10), and int() then just returns it:
int (077) # Leading 0 defines a base 8 number.
>> 63
077
>> 63
So, int('1e1') will try to parse 1e1 as a base 10 string and will throw ValueError. But 1e1 is a numeric value (mathematical expression):
1e1
>> 10.0
So int() handles it as a numeric value: Python first evaluates the literal 1e1 to the float 10.0, and int() then converts that float to an integer.
So when calling int() with a string argument, you must be sure that the string is a valid base-10 integer literal.
int(float(1e+001)) will work.
Although, as others have mentioned, 1e1 is already a float.

xor between two numbers (after hex to binary conversion)

I do not know why there is an error in this code:
hex_str1 = '5'
bin_str1 = dec2bin(hex2dec(hex_str1))
hex_str2 = '4'
bin_str2 = dec2bin(hex2dec(hex_str2))
c=xor(bin_str1,bin_str2)
The value of c is not correct when I convert the hex values to binary and apply the xor function. But when I use arrays, the value of c is correct; that code is:
e=[1 1 1 0];
f=[1 0 1 0];
g=xor(e,f)
What is the mistake in my first code when XOR-ing the binary values obtained from hex? Can anyone help me find the solution?
Your mistake is applying xor on two strings instead of actual numerical arrays.
For the xor command, logical "0"s are represented by actual zero elements. Any non-zero elements are interpreted as logical "1"s.
When you apply xor on two strings, the numerical value of each character (element) is its ASCII value. From xor's point of view, the zeroes in your string are not really zeroes, but simply non-zero values (being equal to the ASCII value of the character '0'), which are interpreted as logical "1"s. The bottom line is that in your example you're xor-ing 111b and 111b, and so the result is 0.
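The character-code point can be checked in plain C as well; this is only an illustration of the ASCII issue, not the MATLAB fix, which follows below:
#include <stdio.h>

int main(void)
{
    const char *a = "101";   /* dec2bin(5) as text */
    const char *b = "100";   /* dec2bin(4) as text */

    for (int i = 0; a[i] != '\0'; i++)
    {
        /* '0' (code 48) and '1' (code 49) are both non-zero, so treating
           the characters themselves as truth values always XORs 1 with 1. */
        int wrong = (a[i] != 0) ^ (b[i] != 0);

        /* Converting each digit character to an actual 0/1 first gives
           the intended bitwise result. */
        int right = (a[i] - '0') ^ (b[i] - '0');

        printf("position %d: wrong = %d, right = %d\n", i, wrong, right);
    }
    return 0;
}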
The solution is to convert your strings to logical arrays:
num1 = (bin_str1 == '1');
num2 = (bin_str2 == '1');
c = xor(num1, num2);
To convert the result back into a string (of a binary number), use this:
bin_str3 = sprintf('%d', c);
... and to a hexadecimal string, add this:
hex_str3 = dec2hex(bin2dec(bin_str3));
It is really helpful, and gives me the correct conversion while forming an HMAC value in MATLAB.
Note, however, that in MATLAB you cannot convert a string of more than 52 characters using the bin2dec() function, and similarly hex2dec() cannot take a hexadecimal string of more than 13 characters.

Insert ASCII in LSB

Does anyone know an efficient method to insert the ASCII value of a character into the 8 least significant bits (LSB) of a 16-bit number?
The only idea that comes to my mind is to convert both numbers to binary strings, then replace the last 8 characters of the 16-bit number's string with the 8-bit ASCII value. But as far as I know, string operations are computationally expensive.
Thanks
I don't know Matlab syntax, but in C, it would be something like this:
short x; // a 16-bit integer in many implementations
... do whatever you need to to x ...
char a = 'a'; // some character
x = (x & 0xFF00) | (short)(a & 0x00FF);
The & operator is the arithmetic "and" operator. The | operator is the arithmetic "or" operator. Numbers beginning with 0x are in hexadecimal for easy readability.
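For completeness, a runnable version of that sketch (the 16-bit value 0xABCD and the character 'a' are arbitrary choices):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t x = 0xABCD;   /* some 16-bit value */
    char     a = 'a';      /* ASCII 0x61 */

    /* Keep the high byte of x and replace the low byte with the character code. */
    x = (uint16_t)((x & 0xFF00u) | (uint8_t)a);

    printf("0x%04X\n", (unsigned)x);   /* prints 0xAB61 */
    return 0;
}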
Here is a MATLAB implementation of @user1118321's idea:
%# 16-bit integer number
x = uint16(30000);
%# character
c = 'a';
%# replace lower 8-bit
y = bitand(x,hex2dec('FF00'),class(x)) + cast(c-0,class(x))

format specifier for long double (I want to truncate the 0's after decimal)

I have a 15-digit floating-point number and I need to truncate the trailing zeros after the decimal point. Is there a format specifier for that?
%Lg is probably what you want: see http://developer.apple.com/library/ios/#DOCUMENTATION/System/Conceptual/ManPages_iPhoneOS/man3/printf.3.html.
Unfortunately, in C there is no format specifier that seems to meet all the requirements you have. %Lg is the closest, but as you noted it switches to scientific notation at its discretion. %Lf won't work by itself because it won't remove the trailing zeroes.
What you're going to have to do is print the fixed format number to a buffer and then manually remove the zeroes with string editing (which can STILL be tricky if you have rounding errors and numbers like 123.100000009781).
Is this what you want:
#include <iostream>
#include <iomanip>

int main()
{
    double doubleValue = 78998.9878000000000;
    std::cout << std::setprecision(15) << doubleValue << std::endl;
}
Output:
78998.9878
Note that trailing zeros after the decimal point are truncated!
Online Demo : http://www.ideone.com/vRFlQ
You could build the format specifier as a string, filling in the appropriate number of digits if you can determine how many:
char fmt[16];
sprintf(fmt, "%%.%dlf", digits);   /* e.g. digits = 6 builds "%.6lf" */
printf(fmt, number);
or, just trimming trailing '0' characters (strlen requires <string.h>):
void truncate(char *buf) {
    int i = strlen(buf);
    while (buf[--i] == '0' && i != 0)
        ;
    buf[i + 1] = '\0';
}

char buf[64];
sprintf(buf, "%.15lf", 2.123);
truncate(buf);
printf("%s", buf);
%.15g — the 15 being the maximum number of significant digits required in the string (not the number of decimal places)
1.012345678900000 => 1.0123456789
12.012345678900000 => 12.0123456789
123.012345678900000 => 123.0123456789
1234.012345678900000 => 1234.0123456789
12345.012345678900000 => 12345.0123456789
123456.012345678900000 => 123456.012345679
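Putting the two answers together, a small C example of %Lg on an actual long double (the value is borrowed from the C++ snippet above):
#include <stdio.h>

int main(void)
{
    long double v = 78998.9878000000000L;

    printf("%Lg\n", v);     /* default of 6 significant digits: 78999 */
    printf("%.15Lg\n", v);  /* 15 significant digits, trailing zeros dropped: 78998.9878 */
    return 0;
}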