What are the supported Swift String format specifiers?

In Swift, I can format a String with format specifiers:
// This will return "0.120"
String(format: "%.03f", 0.12)
But the official documentation doesn't give any information or link regarding the supported format specifiers or how to build a template similar to "%.03f": https://developer.apple.com/documentation/swift/string/3126742-init
It only says:
Returns a String object initialized by using a given format string as a template into which the remaining argument values are substituted.

The format specifiers for String formatting in Swift are the same as those for Objective-C NSString format, themselves identical to those for CFString format, and are buried deep in the archives of the Apple documentation (same content on both pages, both originally from 2002 or older):
https://developer.apple.com/library/archive/documentation/CoreFoundation/Conceptual/CFStrings/formatSpecifiers.html
https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/Strings/Articles/formatSpecifiers.html
But this documentation page itself is incomplete; for instance, the flags, the precision specifiers and the width specifiers aren't mentioned. Actually, it claims to follow the IEEE printf specification (Issue 6, 2004 Edition), itself aligned with the ISO C standard. So those specifiers should be identical to what we have with C printf, with the addition of the %@ specifier for Objective-C objects, and the addition of the poorly documented %D, %U, %O specifiers and the q length modifier.
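For instance, a format combining a flag, a width and a precision behaves just as it does with C printf (a quick sanity check):
// "0003.142": zero-padded to width 8, rounded to 3 decimal places
String(format: "%08.3f", 3.14159)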
Specifiers
Each conversion specification is introduced by the '%' character or by the character sequence "%n$".
n is the index of the parameter, like in:
String(format: "%2$# %1$#", "world", "Hello")
Format Specifiers
%@ Objective-C object, printed as the string returned by descriptionWithLocale: if available, or description otherwise.
Actually, you may also use some Swift types, but they must be defined inside the standard library in order to conform to the CVarArg protocol, and I believe they need to support bridging to Objective-C objects: https://developer.apple.com/documentation/foundation/object_runtime/classes_bridged_to_swift_standard_library_value_types.
String(format: "%#", ["Hello", "world"])
%% '%' character.
String(format: "100%% %#", true.description)
%d, %i, %D Signed 32-bit integer (int).
String(format: "from %d to %d", Int32.min, Int32.max)
%u, %U Unsigned 32-bit integer (unsigned int).
String(format: "from %u to %u", UInt32.min, UInt32.max)
%x Unsigned 32-bit integer (unsigned int), printed in hexadecimal using the digits 0–9 and lowercase a–f.
String(format: "from %x to %x", UInt32.min, UInt32.max)
%X Unsigned 32-bit integer (unsigned int), printed in hexadecimal using the digits 0–9 and uppercase A–F.
String(format: "from %X to %X", UInt32.min, UInt32.max)
%o, %O Unsigned 32-bit integer (unsigned int), printed in octal.
String(format: "from %o to %o", UInt32.min, UInt32.max)
%f 64-bit floating-point number (double), printed in decimal notation. Produces "inf", "infinity", or "nan".
String(format: "from %f to %f", Double.leastNonzeroMagnitude, Double.greatestFiniteMagnitude)
%F 64-bit floating-point number (double), printed in decimal notation. Produces "INF", "INFINITY", or "NAN".
String(format: "from %F to %F", Double.leastNonzeroMagnitude, Double.greatestFiniteMagnitude)
%e 64-bit floating-point number (double), printed in scientific notation using a lowercase e to introduce the exponent.
String(format: "from %e to %e", Double.leastNonzeroMagnitude, Double.greatestFiniteMagnitude)
%E 64-bit floating-point number (double), printed in scientific notation using an uppercase E to introduce the exponent.
String(format: "from %E to %E", Double.leastNonzeroMagnitude, Double.greatestFiniteMagnitude)
%g 64-bit floating-point number (double), printed in the style of %e if the exponent is less than –4 or greater than or equal to the precision, in the style of %f otherwise.
String(format: "from %g to %g", Double.leastNonzeroMagnitude, Double.greatestFiniteMagnitude)
%G 64-bit floating-point number (double), printed in the style of %E if the exponent is less than –4 or greater than or equal to the precision, in the style of %f otherwise.
String(format: "from %G to %G", Double.leastNonzeroMagnitude, Double.greatestFiniteMagnitude)
%c 8-bit unsigned character (unsigned char).
String(format: "from %c to %c", "a".utf8.first!, "z".utf8.first!)
%C 16-bit UTF-16 code unit (unichar).
String(format: "from %C to %C", "爱".utf16.first!, "终".utf16.first!)
%s Null-terminated array of 8-bit unsigned characters.
"Hello world".withCString {
String(format: "%s", $0)
}
%S Null-terminated array of 16-bit UTF-16 code units.
"Hello world".withCString(encodedAs: UTF16.self) {
String(format: "%S", $0)
}
%p Void pointer (void *), printed in hexadecimal with the digits 0–9 and lowercase a–f, with a leading 0x.
var hello = "world"
withUnsafePointer(to: &hello) {
    String(format: "%p", $0)
}
%n The argument shall be a pointer to an integer into which is written the number of bytes written to the output so far by this call to one of the fprintf() functions.
The n format specifier seems unsupported in Swift 4+
%a 64-bit floating-point number (double), printed in scientific notation with a leading 0x and one hexadecimal digit before the decimal point using a lowercase p to introduce the exponent.
String(format: "from %a to %a", Double.leastNonzeroMagnitude, Double.greatestFiniteMagnitude)
%A 64-bit floating-point number (double), printed in scientific notation with a leading 0X and one hexadecimal digit before the decimal point using an uppercase P to introduce the exponent.
String(format: "from %A to %A", Double.leastNonzeroMagnitude, Double.greatestFiniteMagnitude)
Flags
' The integer portion of the result of a decimal conversion (%i, %d, %u, %f, %F, %g, or %G) shall be formatted with thousands' grouping characters. For other conversions the behavior is undefined. The non-monetary grouping character is used.
The ' flag seems unsupported in Swift 4+
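Since the ' flag appears unsupported, a common workaround is Foundation's NumberFormatter instead of String(format:) (a sketch):
import Foundation

// Locale-aware thousands grouping, e.g. "1,234,567" in an en_US locale
let formatter = NumberFormatter()
formatter.numberStyle = .decimal
formatter.string(from: NSNumber(value: 1234567))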
- The result of the conversion shall be left-justified within the field. The conversion is right-justified if this flag is not specified.
String(format: "from %-12f to %-12d.", Double.leastNonzeroMagnitude, Int32.max)
+ The result of a signed conversion shall always begin with a sign ('+' or '-'). The conversion shall begin with a sign only when a negative value is converted if this flag is not specified.
String(format: "from %+f to %+d", Double.leastNonzeroMagnitude, Int32.max)
<space> If the first character of a signed conversion is not a sign or if a signed conversion results in no characters, a <space> shall be prefixed to the result. This means that if the <space> and '+' flags both appear, the <space> flag shall be ignored.
String(format: "from % d to % d.", Int32.min, Int32.max)
# Specifies that the value is to be converted to an alternative form. For o conversion, it increases the precision (if necessary) to force the first digit of the result to be zero. For x or X conversion specifiers, a non-zero result shall have 0x (or 0X) prefixed to it. For a, A, e, E, f, F, g, and G conversion specifiers, the result shall always contain a radix character, even if no digits follow the radix character. Without this flag, a radix character appears in the result of these conversions only if a digit follows it. For g and G conversion specifiers, trailing zeros shall not be removed from the result as they normally are. For other conversion specifiers, the behavior is undefined.
String(format: "from %#a to %#x.", Double.leastNonzeroMagnitude, UInt32.max)
0 For d, i, o, u, x, X, a, A, e, E, f, F, g, and G conversion specifiers, leading zeros (following any indication of sign or base) are used to pad to the field width; no space padding is performed. If the '0' and '-' flags both appear, the '0' flag is ignored. For d, i, o, u, x, and X conversion specifiers, if a precision is specified, the '0' flag is ignored. If the '0' and ''' flags both appear, the grouping characters are inserted before zero padding. For other conversions, the behavior is undefined.
String(format: "from %012f to %012d.", Double.leastNonzeroMagnitude, Int32.max)
Width modifiers
If the converted value has fewer bytes than the field width, it shall be padded with spaces by default on the left; it shall be padded on the right if the left-adjustment flag ( '-' ) is given to the field width. The field width takes the form of an asterisk ( '*' ) or a decimal integer.
String(format: "from %12f to %*d.", Double.leastNonzeroMagnitude, 12, Int32.max)
Precision modifiers
An optional precision that gives the minimum number of digits to appear for the d, i, o, u, x, and X conversion specifiers; the number of digits to appear after the radix character for the a, A, e, E, f, and F conversion specifiers; the maximum number of significant digits for the g and G conversion specifiers; or the maximum number of bytes to be printed from a string in the s and S conversion specifiers. The precision takes the form of a period ( '.' ) followed either by an asterisk ( '*' ) or an optional decimal digit string, where a null digit string is treated as zero. If a precision appears with any other conversion specifier, the behavior is undefined.
String(format: "from %.12f to %.*d.", Double.leastNonzeroMagnitude, 12, Int32.max)
Length modifiers
h Length modifier specifying that a following d, o, u, x, or X conversion specifier applies to a short or unsigned short argument.
String(format: "from %hd to %hu", CShort.min, CUnsignedShort.max)
hh Length modifier specifying that a following d, o, u, x, or X conversion specifier applies to a signed char or unsigned char argument.
String(format: "from %hhd to %hhu", CChar.min, CUnsignedChar.max)
l Length modifier specifying that a following d, o, u, x, or X conversion specifier applies to a long or unsigned long argument.
String(format: "from %ld to %lu", CLong.min, CUnsignedLong.max)
ll, q Length modifiers specifying that a following d, o, u, x, or X conversion specifier applies to a long long or unsigned long long argument.
String(format: "from %lld to %llu", CLongLong.min, CUnsignedLongLong.max)
L Length modifier specifying that a following a, A, e, E, f, F, g, or G conversion specifier applies to a long double argument.
I wasn't able to pass a CLongDouble argument to format in Swift 4+
z Length modifier specifying that a following d, o, u, x, or X conversion specifier applies to a size_t.
String(format: "from %zd to %zu", size_t.min, size_t.max)
t Length modifier specifying that a following d, o, u, x, or X conversion specifier applies to a ptrdiff_t.
String(format: "from %td to %tu", ptrdiff_t.min, ptrdiff_t.max)
j Length modifier specifying that a following d, o, u, x, or X conversion specifier applies to an intmax_t or uintmax_t argument.
String(format: "from %jd to %ju", intmax_t.min, uintmax_t.max)

This probably supports all of the below, which summarizes the format specifiers supported by NSString (source).
The format specifiers supported by the NSString formatting methods and CFString formatting functions follow the IEEE printf specification; the specifiers are summarized below (Table 1). Note that you can also use the "n$" positional specifiers such as %1$@ %2$s. For more details, see the IEEE printf specification. You can also use these format specifiers with the NSLog function.
Format Specifiers (Table 1)
%@ Objective-C object, printed as the string returned by descriptionWithLocale: if available, or description otherwise. Also works with CFTypeRef objects, returning the result of the CFCopyDescription function.
%% '%' character.
%d, %D Signed 32-bit integer (int).
%u, %U Unsigned 32-bit integer (unsigned int).
%x Unsigned 32-bit integer (unsigned int), printed in hexadecimal using the digits 0–9 and lowercase a–f.
%X Unsigned 32-bit integer (unsigned int), printed in hexadecimal using the digits 0–9 and uppercase A–F.
%o, %O Unsigned 32-bit integer (unsigned int), printed in octal.
%f 64-bit floating-point number (double).
%e 64-bit floating-point number (double), printed in scientific notation using a lowercase e to introduce the exponent.
%E 64-bit floating-point number (double), printed in scientific notation using an uppercase E to introduce the exponent.
%g 64-bit floating-point number (double), printed in the style of %e if the exponent is less than –4 or greater than or equal to the precision, in the style of %f otherwise.
%G 64-bit floating-point number (double), printed in the style of %E if the exponent is less than –4 or greater than or equal to the precision, in the style of %f otherwise.
%c 8-bit unsigned character (unsigned char).
%C 16-bit UTF-16 code unit (unichar).
%s Null-terminated array of 8-bit unsigned characters. Because the %s specifier causes the characters to be interpreted in the system default encoding, the results can be variable, especially with right-to-left languages. For example, with RTL, %s inserts direction markers when the characters are not strongly directional. For this reason, it's best to avoid %s and specify encodings explicitly.
%S Null-terminated array of 16-bit UTF-16 code units.
%p Void pointer (void *), printed in hexadecimal with the digits 0–9 and lowercase a–f, with a leading 0x.
%a 64-bit floating-point number (double), printed in scientific notation with a leading 0x and one hexadecimal digit before the decimal point using a lowercase p to introduce the exponent.
%A 64-bit floating-point number (double), printed in scientific notation with a leading 0X and one hexadecimal digit before the decimal point using an uppercase P to introduce the exponent.
%F 64-bit floating-point number (double), printed in decimal notation.
Length modifiers (Table 2)
h Length modifier specifying that a following d, o, u, x, or X conversion specifier applies to a short or unsigned short argument.
hh Length modifier specifying that a following d, o, u, x, or X conversion specifier applies to a signed char or unsigned char argument.
l Length modifier specifying that a following d, o, u, x, or X conversion specifier applies to a long or unsigned long argument.
ll, q Length modifiers specifying that a following d, o, u, x, or X conversion specifier applies to a long long or unsigned long long argument.
L Length modifier specifying that a following a, A, e, E, f, F, g, or G conversion specifier applies to a long double argument.
z Length modifier specifying that a following d, o, u, x, or X conversion specifier applies to a size_t.
t Length modifier specifying that a following d, o, u, x, or X conversion specifier applies to a ptrdiff_t.
j Length modifier specifying that a following d, o, u, x, or X conversion specifier applies to an intmax_t or uintmax_t argument.
Length modifier example:
#include <limits.h>
// ...
NSString *text = [NSString stringWithFormat:@"Signed %hhd, Unsigned %hhu", CHAR_MIN, UCHAR_MAX];
Or in Swift:
let text = String(format: "Signed %hhd, Unsigned %hhu", CChar.min, CUnsignedChar.max)
Platform Dependencies
OS X uses several data types (NSInteger, NSUInteger, CGFloat, and CFIndex) to provide a consistent means of representing values in 32- and 64-bit environments. In a 32-bit environment, NSInteger and NSUInteger are defined as int and unsigned int, respectively. In 64-bit environments, NSInteger and NSUInteger are defined as long and unsigned long, respectively. To avoid the need to use different printf-style type specifiers depending on the platform, you can use the specifiers shown in Table 3. Note that in some cases you may have to cast the value.
Table 3: Format specifiers for data types
NSInteger: %ld or %lx. Cast the value to long.
NSUInteger: %lu or %lx. Cast the value to unsigned long.
CGFloat: %f or %g. %f works for floats and doubles when formatting; but note the technique described below for scanning.
CFIndex: %ld or %lx. The same as NSInteger.
pointer: %p or %zx. %p adds 0x to the beginning of the output. If you don't want that, use %zx and no typecast.
The following example illustrates the use of %ld to format an NSInteger and the use of a cast.
NSInteger i = 42;
printf("%ld\n", (long)i);
In addition to the considerations mentioned in Table 3, there is one extra case with scanning: you must distinguish the types for float and double. You should use %f for float and %lf for double. If you need to use scanf (or a variant thereof) with CGFloat, switch to double instead, and copy the double to CGFloat.
CGFloat imageWidth;
double tmp;
sscanf (str, "%lf", &tmp);
imageWidth = tmp;
It is important to remember that %lf does not represent CGFloat correctly on either 32- or 64-bit platforms.
This is unlike %ld, which works for long in all cases.

Swift 5
let timeFormatter = DateFormatter()
let dateFormat = DateFormatter.dateFormat(fromTemplate: "jm",
                                          options: 0,
                                          locale: Locale.current)!
timeFormatter.dateFormat = dateFormat
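Usage (a sketch): the configured formatter then produces a localized time string.
// e.g. "3:30 PM" for en_US, "15:30" for en_GB
let timeString = timeFormatter.string(from: Date())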

Related

How does postgresql cast float to numeric?

I was wondering how Postgresql converts floating point (float4) values to NUMERIC.
I chose 0.1 as a testing value. This value is not accurately representable in base2, see https://float.exposed/0x3dcccccd for a visualization. So the stored value 0x3dcccccd in hex for a float4 is actually not 0.1 but 0.100000001490116119385.
However, I do not understand the output of the following commands:
mydb=# SELECT '0.100000001490116119385'::float4::numeric(50,50);
numeric
------------------------------------------------------
0.10000000000000000000000000000000000000000000000000
(1 row)
mydb=# SELECT '0.1'::float4::numeric(50,50);
numeric
------------------------------------------------------
0.10000000000000000000000000000000000000000000000000
mydb=# SELECT '0.10000000000000000000000000000000001'::float4::numeric(50,50);
numeric
------------------------------------------------------
0.10000000000000000000000000000000000000000000000000
Why (and how) do I get 0.1 as a result in all cases? Both 0.1 and 0.10000000000000000000000000000000001 cannot be accurately stored in a float4. The value that can be stored is 0.100000001490116119385, which is also the closest float4 value in both cases, but that's not what I get when casting to numeric. Why?
From the source code:
Datum
float4_numeric(PG_FUNCTION_ARGS)
{
    float4      val = PG_GETARG_FLOAT4(0);
    Numeric     res;
    NumericVar  result;
    char        buf[FLT_DIG + 100];

    if (isnan(val))
        PG_RETURN_NUMERIC(make_result(&const_nan));

    if (isinf(val))
    {
        if (val < 0)
            PG_RETURN_NUMERIC(make_result(&const_ninf));
        else
            PG_RETURN_NUMERIC(make_result(&const_pinf));
    }

    snprintf(buf, sizeof(buf), "%.*g", FLT_DIG, val);

    init_var(&result);

    /* Assume we need not worry about leading/trailing spaces */
    (void) set_var_from_str(buf, buf, &result);

    res = make_result(&result);
    free_var(&result);

    PG_RETURN_NUMERIC(res);
}
Further explanation of Frank Heikens's answer:
The idea of the source code is: take the float4 input, convert it to a character string, then convert that string to numeric.
The key function is
snprintf(buf, sizeof(buf), "%.*g", FLT_DIG, val);
where FLT_DIG is equal to 6.
https://pubs.opengroup.org/onlinepubs/7908799/xsh/fprintf.html
An optional precision that gives the minimum number of digits to appear for the d, i, o, u, x and X conversions; the number of digits to appear after the radix character for the e, E and f conversions; the maximum number of significant digits for the g and G conversions; or the maximum number of bytes to be printed from a string in s and S conversions. The precision takes the form of a period (.) followed either by an asterisk (*), described below, or an optional decimal digit string, where a null digit string is treated as 0. If a precision appears with any other conversion character, the behaviour is undefined.
So in the float-to-text-to-numeric process, the intermediate text can only carry 6 significant digits of precision (for %g, the precision counts significant digits, not digits after the decimal point)!
snprintf example: https://legacy.cplusplus.com/reference/cstdio/snprintf/
further post: Avoid trailing zeroes in printf()
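The effect is easy to reproduce from Swift, since String(format:) follows the same printf rules (a sketch; the values in the comments are what I'd expect, not output taken from Postgres itself):
let stored = Double(Float(0.1))      // 0.10000000149011612, the actual float4 value
String(format: "%.*g", 6, stored)    // "0.1": only FLT_DIG = 6 significant digits survive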

How to convert to UInt64 from a string in Powershell? String-to-number conversion

Consider the following Powershell snippet:
[Uint64] $Memory = 1GB
[string] $MemoryFromString = "1GB"
[Uint64] $ConvertedMemory = [Convert]::ToUInt64($MemoryFromString)
The 3rd line fails with:
Exception calling "ToUInt64" with "1" argument(s): "Input string was not in a correct format."
At line:1 char:1
+ [Uint64]$ConvertedMemory = [Convert]::ToUInt64($MemoryFromString)
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : FormatException
If I check the contents of $Memory:
PS C:\> $Memory
1073741824
That works fine.
So, how do I convert the value "1GB" from a string to a UInt64 in Powershell?
To complement Sean's helpful answer:
It is only the type constraint of your result variable ([uint64] $ConvertedMemory = ...) that ensures that ($MemoryFromString / 1) is converted to [uint64] ([System.UInt64]).
The result of expression $MemoryFromString / 1 is actually of type [int] ([System.Int32]):
PS> ('1gb' / 1).GetType().FullName
System.Int32
Therefore, to ensure that the expression by itself returns an [uint64] instance, you'd have to use a cast:
PS> ([uint64] ('1gb' / 1)).GetType().FullName
System.UInt64
Note the required (...) around the calculation, as the [uint64] cast would otherwise apply to '1gb' only (and therefore fail).
Alternatively, ('1gb' / [uint64] 1) works too.
Note:
'1gb' - 0 would have worked too, but not '1gb' * 1 (effectively a no-op) or '1gb' + 0 (results in string '1gb0'), because operators * and + with a string-typed LHS perform string operations (replication and concatenation, respectively).
Automatic string-to-number conversion and number literals in PowerShell:
When PowerShell performs implicit number conversion, including when performing mixed-numeric-type calculations and parsing number literals in source code, it conveniently auto-selects a numeric type that is "large" enough to hold the result.
In implicit string-to-number conversions, PowerShell conveniently recognizes the same formats as supported in number literals in source code:
number-base prefixes (for integers only): 0x for hexadecimal integers, and 0b for binary integers (PowerShell [Core] 7.0+)
number-type suffixes: L for [long] ([System.Int64]), and D for [decimal] ([System.Decimal]); e.g., '1L' - 0 yields a [long].
Note that C# uses M instead of D and instead uses D to designate [System.Double]; also, C# supports several additional suffixes.
PowerShell [Core] 6.2+ now supports additional suffixes: Y ([sbyte]), UY ([byte]), S ([int16]), US ([uint16]), U ([uint32] or [uint64], on demand), and UL ([uint64]).
PowerShell [Core] 7.0+ additionally supports the suffix n ([bigint]).
You can keep an eye on future developments, if any, via the official help topic, about_Numeric_Literals.
floating-point representations such as 1.23 (decimal only); note that PowerShell only ever recognizes . as the decimal mark, irrespective of the current culture.
exponential notation (decimal only); e.g., '1.0e3' - 1 yields 999.
its own binary-multiplier suffixes, kb, mb, gb, tb, pb (for multipliers [math]::pow(2, 10) == 1024, [math]::pow(2, 20) == 1048576, ...); e.g., '1kb' - 1 yields 1023; note that these suffixes are PowerShell-specific, so the .NET framework number-parsing methods do not recognize them.
The number-conversion rules are complex, but here are some key points:
This is based on my own experiments. Do tell me if I'm wrong.
Types are expressed by their PS type accelerators and map onto .NET types as follows:
[int] ... [System.Int32]
[long] ... [System.Int64]
[decimal] ... [System.Decimal]
[float] ... [System.Single]
[double] ... [System.Double]
PowerShell never auto-selects an unsigned integer type.
Note: In PowerShell [Core] 6.2+, you can use type suffix US, U or UL (see above) to force treatment as an unsigned type (positive number); e.g., 0xffffffffffffffffU
This can be unexpected with hexadecimal number literals; e.g., [uint32] 0xffffffff fails, because 0xffffffff is first - implicitly - converted to signed type [int32], which yields -1, which, as a signed value, cannot then be cast to unsigned type [uint32].
Workarounds:
Append L to force interpretation as an [int64] first, which results in expected positive value 4294967295, in which case the cast to [uint32] succeeds.
That technique doesn't work for values above 0x7fffffffffffffff ([long]::maxvalue), however, in which case you can use string conversion: [uint64] '0xffffffffffffffff'
PowerShell widens integer types as needed:
For decimal integer literals / strings, widening goes beyond integer types to [System.Decimal], and then [Double], as needed; e.g.:
(2147483648).GetType().Name yields Int64, because the value is [int32]::MaxValue + 1, and was therefore implicitly widened to [int64].
(9223372036854775808).GetType().Name yields Decimal, because the value is [int64]::MaxValue + 1, and was therefore implicitly widened to [decimal].
(79228162514264337593543950336).GetType().Name yields Double, because the value is [decimal]::MaxValue + 1, and was therefore implicitly widened to [double].
For hexadecimal (invariably integer) literals / strings, widening stops at [int64]:
(0x100000000).gettype().name yields Int64, because the value does not fit into [int32], and was therefore implicitly widened to [int64].
0x10000000000000000, which is [uint64]::MaxValue + 1, does not get promoted to [System.Decimal] due to being hexadecimal, and interpretation as a number therefore fails.
Note: The above rules apply to individual literals / strings, but widening in expressions may result in widening to [double] right away (without considering [decimal]) - see below.
PowerShell seemingly never auto-selects an integer type smaller than [int]:
('1' - 0).GetType().FullName yields System.Int32 (an [int]), even though integer 1 would fit into [int16] or even [byte].
The result of a calculation never uses a smaller type than either of the operands:
Both 1 + [long] 1 and [long] 1 + 1 yield a [long] (even though the result could fit into a smaller type).
Perhaps unexpectedly, PowerShell auto-selects floating-point type [double] for a calculation result that is larger than either operand's integer type can fit, even if the result could fit into a larger integer type:
([int]::maxvalue + 1).GetType().FullName yields System.Double (a [double]), even though the result would fit into a [long] integer.
If one of the operands is a large-enough integer type, however, the result is of that type: ([int]::maxvalue + [long] 1).GetType().FullName yields System.Int64 (a [long]).
Involving at least one floating-point type in a calculation always results in [double], even when mixed with an integer type or using all-[float] operands:
1 / 1.0 and 1.0 / 1 and 1 / [float] 1 and [float] 1 / 1 and [float] 1 / [float] 1 all yield a [double]
Number literals in source code that don't use a type suffix:
Decimal integer literals are interpreted as the smallest of the following types that can fit the value: [int] > [long] > [decimal] > [double](!):
1 yields an [int] (as stated, [int] is the smallest auto-selected type)
2147483648 (1 higher than [int]::maxvalue) yields a [long]
9223372036854775808 (1 higher than [long]::maxvalue) yields a [decimal]
79228162514264337593543950336 (1 higher than [decimal]::maxvalue) yields a [double]
Hexadecimal integer literals are interpreted as the smallest of the following types that can fit the value: [int] > [long]; that is, unlike with decimal literals, types larger than [long] aren't supported; Caveat: values that have the high bit set result in negative decimal numbers, because PowerShell auto-selects signed integer types:
0x1 yields an [int]
0x80000000 yields an [int] that is a negative value, because the high bit is set: -2147483648, which is the smallest [int] number, if you consider the sign ([int]::MinValue)
0x100000000 (1 more than can fit into an [int] (or [uint32])) yields a [long]
0x10000000000000000 (1 more than can fit into a [long] (or [uint64])) breaks, because [long] is the largest type supported ("the numeric constant is not valid").
To ensure that a hexadecimal literal results in a positive number:
Windows PowerShell: Use type suffix L to force interpretation as a [long] first, and then (optionally) cast to an unsigned type; e.g. [uint32] 0x80000000L yields 2147483648, but note that this technique only works up to 0x7fffffffffffffff, i.e., [long]::maxvalue; as suggested above, use a conversion from a string as a workaround (e.g., [uint64] '0xffffffffffffffff').
PowerShell [Core] 6.2+: Use type suffix us, u, or ul, as needed; e.g.: 0x8000us -> 32768 ([uint16]), 0x80000000u -> 2147483648 ([uint32]), 0x8000000000000000ul -> 9223372036854775808 ([uint64])
Binary integer literals (PowerShell [Core] 7.0+) are interpreted the same way as hexadecimal ones; e.g., 0b10000000000000000000000000000000 == 0x80000000 == -2147483648 ([int])
Floating-point or exponential notation literals (which are only recognized in decimal representation) are always interpreted as a [double], no matter how small:
1.0 and 1e0 both yield a [double]
Your problem is that ToUInt64 doesn't understand the PowerShell syntax. You could get around it by doing:
($MemoryFromString / 1GB) * 1GB
As the $MemoryFromString will be converted to its numeric value before the division.
This works because at the point of division PowerShell attempts to convert the string to a number using its own rules, rather than the .NET rules that are baked into ToUInt64. As part of the conversion it spots the GB suffix and applies its rules to expand the "1GB" string to 1073741824.
EDIT: Or as PetSerAl pointed out, you can just do:
($MemoryFromString / 1)

Insert ASCII in LSB

Does anyone know an efficient method to insert the ASCII value of some characters into the 8 least significant bits (LSB) of a 16-bit number?
The only idea that comes to mind is to convert both numbers to binary, then replace the last 8 characters of the 16-bit number with the ASCII value in 8 bits. But as far as I know, string operations are very expensive in computational time.
Thanks
I don't know Matlab syntax, but in C, it would be something like this:
short x; // a 16-bit integer in many implementations
... do whatever you need to with x ...
char a = 'a'; // some character
x = (x & 0xFF00) | (short)(a & 0x00FF);
The & operator is the bitwise "and" operator. The | operator is the bitwise "or" operator. Numbers beginning with 0x are in hexadecimal for easy readability.
Here is a MATLAB implementation of @user1118321's idea:
%# 16-bit integer number
x = uint16(30000);
%# character
c = 'a';
%# replace lower 8-bit
y = bitand(x,hex2dec('FF00'),class(x)) + cast(c-0,class(x))
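The same bit trick in Swift, for comparison (a sketch, not from either answer):
var x: UInt16 = 30000
let a = Character("a").asciiValue!   // 97
x = (x & 0xFF00) | UInt16(a)         // low byte replaced by the ASCII value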

format specifier for long double (I want to truncate the 0's after decimal)

I have a 15-digit floating-point number and I need to truncate the trailing zeros after the decimal point. Is there a format specifier for that?
%Lg is probably what you want: see http://developer.apple.com/library/ios/#DOCUMENTATION/System/Conceptual/ManPages_iPhoneOS/man3/printf.3.html.
Unfortunately, in C there is no format specifier that seems to meet all the requirements you have. %Lg is the closest, but as you noted it switches to scientific notation at its discretion. %Lf won't work by itself because it won't remove the trailing zeroes.
What you're going to have to do is print the fixed format number to a buffer and then manually remove the zeroes with string editing (which can STILL be tricky if you have rounding errors and numbers like 123.100000009781).
Is this what you want:
#include <iostream>
#include <iomanip>

int main()
{
    double doubleValue = 78998.9878000000000;
    std::cout << std::setprecision(15) << doubleValue << std::endl;
}
Output:
78998.9878
Note that trailing zeros after the decimal point are truncated!
Online Demo : http://www.ideone.com/vRFlQ
You could print the format specifier as a string, filling in the appropriate number of digits if you can determine how many:
sprintf(fmt, "%%.%dlf", digits);
printf(fmt, number);
or, just checking trailing 0 characters:
sprintf(fmt, "%.15lf", 2.123);
truncate(fmt);
printf("%s", fmt);
void truncate(char *fmt) {
    int i = strlen(fmt);
    while (fmt[--i] == '0' && i != 0);   /* step back over trailing zeros */
    fmt[i + 1] = '\0';
}
%.15g — the 15 being the maximum number of significant digits required in the string (not the number of decimal places)
1.012345678900000 => 1.0123456789
12.012345678900000 => 12.0123456789
123.012345678900000 => 123.0123456789
1234.012345678900000 => 1234.0123456789
12345.012345678900000 => 12345.0123456789
123456.012345678900000 => 123456.012345679
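For what it's worth, the same trick carries over to Swift's String(format:), which follows the same printf rules (a sketch):
String(format: "%.15g", 78998.9878)   // "78998.9878"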

Objective-C Decimal to Base 16 Hex conversion

Does anyone have a code snippet or a class that will take a long long and turn it into a 16-character hex string?
I'm looking to turn data like this
long long decimalRepresentation = 1719886131591410351;
and turn it into this
//Base 16 Hex Output: 17DE435307A07300
The %x operator doesn't want to work for me
NSLog(#"Hex: %x",decimalRepresentation);
//console : "Hex: 7a072af"
As you can see that's not even close. Any help is truly appreciated!
%x prints an unsigned integer in hexadecimal representation and sizeof(long long) != sizeof(unsigned). See e.g. "Data Type Size and Alignment" in the 64bit transitioning guide.
Use the ll length modifier (that's two lowercase L's) to get the desired output:
NSLog(@"%llx", myLongLong);