What is CGFloat.leastNonzeroMagnitude equivalent in Obj-C - swift

What is the equivalent of CGFloat.leastNonzeroMagnitude in Objective-C?
I googled but could not find any answer.

Note that the documentation of leastNonzeroMagnitude says:
Compares less than or equal to all positive numbers, but greater than zero. If the target supports subnormal values, this is smaller than leastNormalMagnitude; otherwise they are equal.
So the value also depends on whether the target supports subnormal values. Looking at the implementation, we can see:
public static var leastNonzeroMagnitude: ${Self} {
#if arch(arm)
    // On 32b arm, the default FPCR has subnormals flushed to zero.
    return leastNormalMagnitude
#else
    return leastNormalMagnitude * ulpOfOne
#endif
}
It turns out that 32-bit ARM is the target that doesn't "support subnormal values". :)
If you translate the two branches into Objective-C separately, it would be:
CGFLOAT_MIN
and
CGFLOAT_MIN * CGFLOAT_EPSILON
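If you want a single Objective-C definition that mirrors both branches, a minimal sketch could look like this (CGFLOAT_LEAST_NONZERO_MAGNITUDE is a made-up name, and this assumes __arm__ identifies 32-bit ARM on your toolchain):
#import <CoreGraphics/CGBase.h>

// Hypothetical macro mirroring the two Swift branches above.
#if defined(__arm__)
// 32-bit ARM flushes subnormals to zero, so the smallest nonzero
// magnitude is the smallest normal magnitude.
#define CGFLOAT_LEAST_NONZERO_MAGNITUDE CGFLOAT_MIN
#else
#define CGFLOAT_LEAST_NONZERO_MAGNITUDE (CGFLOAT_MIN * CGFLOAT_EPSILON)
#endif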

Apple doesn't provide an Objective-C constant equivalent to CGFloat.leastNonzeroMagnitude.
Apple does provide CGFLOAT_MIN, which is the smallest positive normal nonzero value. It's equivalent to CGFloat.leastNormalMagnitude. This is 2.2250738585072014e-308 on systems with a 64-bit CGFloat.
There are constants (specified by the C11 standard) for the smallest positive non-zero float and double, including subnormals: FLT_TRUE_MIN is 1.401298e-45 and DBL_TRUE_MIN is 4.940656458412465e-324.
Apple provides a constant CGFLOAT_IS_DOUBLE which is 1 if CGFloat is double and zero if it is float. So you can check that to define your own CGFLOAT_TRUE_MIN. You should also guard against Apple adding its own definition in the future.
#ifndef CGFLOAT_TRUE_MIN
#if CGFLOAT_IS_DOUBLE
#define CGFLOAT_TRUE_MIN DBL_TRUE_MIN
#else
#define CGFLOAT_TRUE_MIN FLT_TRUE_MIN
#endif
#endif
But the only supported platform that still uses a 32-bit CGFloat is watchOS, and it's unlikely that you're targeting watchOS if you're using Objective-C. If you are only targeting 64-bit versions of iOS/macOS/tvOS, you can simplify the definition:
#ifndef CGFLOAT_TRUE_MIN
#define CGFLOAT_TRUE_MIN DBL_TRUE_MIN
#endif
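As a quick sanity check, you could print the two constants on a 64-bit platform (a plain C sketch; the values shown above should appear):
#include <float.h>
#include <stdio.h>

int main(void) {
    /* DBL_MIN is the smallest positive normal double (CGFLOAT_MIN on
       64-bit platforms); DBL_TRUE_MIN also covers subnormals. */
    printf("DBL_MIN      = %.17g\n", DBL_MIN);
    printf("DBL_TRUE_MIN = %.17g\n", DBL_TRUE_MIN);
    return 0;
}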

Related

Why Swift's malloc/MemoryLayout.size take/return signed integers?

public func malloc(_ __size: Int) -> UnsafeMutableRawPointer!
@frozen public enum MemoryLayout<T> {
    public static func size(ofValue value: T) -> Int
    ...
When in C, malloc/sizeof take/return size_t, which is unsigned?
Isn't Swift calling libc under the hood?
EDIT: is this the reason why? https://qr.ae/pvFOQ6
They are basically trying to get away from C's legacy?
Yes, it's calling the libc functions under the hood.
The StdlibRationales.rst document in the Swift repo explains why it imports size_t as Int:
Converging APIs to use Int as the default integer type allows users to write fewer explicit type conversions.
Importing size_t as a signed Int type would not be a problem for 64-bit platforms. The only concern is about 32-bit platforms, and only about operating on array-like data structures that span more than half of the address space. Even today, in 2015, there are enough 32-bit platforms that are still interesting, and x32 ABIs for 64-bit CPUs are also important. We agree that 32-bit platforms are important, but the usecase for an unsigned size_t on 32-bit platforms is pretty marginal, and for code that nevertheless needs to do that there is always the option of doing a bitcast to UInt or using C.
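For contrast, here is a minimal sketch of the C side, where sizeof yields a size_t and malloc takes one (standard headers only):
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t n = 16 * sizeof(double);   /* sizeof yields an unsigned size_t */
    double *p = malloc(n);            /* malloc takes a size_t */
    printf("requested %zu bytes\n", n);
    free(p);
    return 0;
}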

What is the correct type for returning a C99 `bool` to Rust via the FFI?

A colleague and I have been scratching our heads over how to return a bool from <stdbool.h> (a.k.a. _Bool) back to Rust via the FFI.
We have our C99 code we want to use from Rust:
bool
myfunc(void) {
...
}
We let Rust know about myfunc using an extern C block:
extern "C" {
fn myfunc() -> T;
}
What concrete type should T be?
Rust doesn't have a c_bool in the libc crate, and if you search the internet, you will find various GitHub issues and RFCs where people discuss this, but don't really come to any consensus as to what is both correct and portable:
https://github.com/rust-lang/rfcs/issues/1982#issuecomment-297534238
https://github.com/rust-lang/rust/issues/14608
https://github.com/rust-lang/rfcs/issues/992
https://github.com/rust-lang/rust/pull/46156
As far as I can gather:
The size of a bool in C99 is undefined other than the fact it must be at least large enough to store true (1) and false (0). In other words, at least one bit long.
It could even be one bit wide.
Its size might be ABI defined.
This comment suggests that if a C99 bool is passed into a function as a parameter or out of a function as the return value, and the bool is smaller than a C int then it is promoted to the same size as an int. Under this scenario, we can tell Rust T is u32.
All right, but what if (for some reason) a C99 bool is 64 bits wide? Is u32 still safe? Perhaps under this scenario we truncate the 4 most significant bytes, which would be fine, since the 4 least significant bytes are more than enough to represent true and false.
Is my reasoning correct? Until Rust gets a libc::c_bool, what would you use for T and why is it safe and portable for all possible sizes of a C99 bool (>=1 bit)?
As of 2018-02-01, the size of Rust's bool is officially the same as C's _Bool.
This means that bool is the correct type to use in FFI.
The rest of this answer applies to versions of Rust before the official decision was made
Until Rust gets a libc::c_bool, what would you use for T and why is it safe and portable for all possible sizes of a C99 bool (>=1 bit)?
As you've already linked to, the official answer is still "to be determined". That means that the only possibility that is guaranteed to be correct is: nothing.
That's right, as sad as it may be. The only truly safe thing would be to convert your bool to a known, fixed-size integral type, such as u8, for the purpose of FFI. That means you need to marshal it on both sides.
Practically, I'd keep using bool in my FFI code. As people have pointed out, it magically lines up on all the platforms that are in wide use at the moment. If the language decides to make bool FFI compatible, you are good to go. If they decide something else, I'd be highly surprised if they didn't introduce a lint to allow us to catch the errors quickly.
See also:
Is bool guaranteed to be 1 byte?
After a lot of thought, I'm going to try answering my own question. Please comment if you can find a hole in the following reasoning.
This is not the correct answer -- see the comments below
I think a Rust u8 is always safe for T.
We know that a C99 bool is an integer large enough to store 0 or 1, which means it's free to be an unsigned integer of at least 1-bit, or (if you are feeling weird) a signed integer of at least 2-bits.
Let's break it down by case:
If the C99 bool is 8-bits then a Rust u8 is perfect. Even in the signed case, the top bit will be a zero since representing 0 and 1 never requires a negative power of two.
If the C99 bool is larger than a Rust u8, then by "casting it down" to an 8-bit size, we only ever discard leading zeros. Thus this is safe too.
Now consider the case where the C99 bool is smaller than the Rust u8. When returning a value from a C function, it's not possible to return a value of size less than one byte due to the underlying calling convention. The CC will require the return value to be loaded into a register or into a location on the stack. Since the smallest register or memory location is one byte, the return value will need to be extended (with zeros) to at least a one-byte value (and I believe the same is true of function arguments, which must also adhere to the calling convention). If the value is extended to a one-byte value, then it's the same as case 1. If the value is extended to a larger size, then it's the same as case 2.
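If you want to see what your particular toolchain does, a small C check is easy to write (the result is implementation-defined; this only confirms the common one-byte case):
#include <stdbool.h>
#include <stdio.h>

bool myfunc(void) {
    return true;
}

int main(void) {
    /* On the x86-64 and ARM ABIs in wide use this prints 1, but the
       C standard itself does not fix the size of bool. */
    printf("sizeof(bool) = %zu\n", sizeof(bool));
    printf("myfunc() = %d\n", (int)myfunc());
    return 0;
}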

Query about a certain programming trick used in an open source software

In a certain library (FFTW: discrete Fourier transform computation),
I came across a header file which contains the following comment and some #defines following that. The comment talks about some programming trick.
But I'm not able to understand what exactly this programming trick is.
Could someone please explain ?
/* hackery to prevent the compiler from ``optimizing'' induction
variables in codelet loops. The problem is that for each K and for
each expression of the form P[I + STRIDE * K] in a loop, most
compilers will try to lift an induction variable PK := &P[I + STRIDE * K].
For large values of K this behavior overflows the
register set, which is likely worse than doing the index computation
in the first place.
If we guess that there are more than
ESTIMATED_AVAILABLE_INDEX_REGISTERS such pointers, we deliberately confuse
the compiler by setting STRIDE ^= ZERO, where ZERO is a value guaranteed to
be 0, but the compiler does not know this.
16 registers ought to be enough for anybody, or so the amd64 and ARM ISA's
seem to imply.
*/
#define ESTIMATED_AVAILABLE_INDEX_REGISTERS 16
#define MAKE_VOLATILE_STRIDE(nptr, x) \
(nptr <= ESTIMATED_AVAILABLE_INDEX_REGISTERS ? \
0 : \
((x) = (x) ^ X(an_INT_guaranteed_to_be_zero)))
#endif /* PRECOMPUTE_ARRAY_INDICES */
The optimization: Instead of recalculating the index of the array every time an iteration in the loop occurs, some compilers anticipate the next addresses and place these in registers because the indexing expression is predictable.
The problem: Some indexing expressions (like I + STRIDE * K) may result in using a lot of registers this way, and if this number exceeds the total amount of registers, some register values will be pushed to stack memory, including other variables that the loop might be using.
The trick: In order to force a compiler not to use this optimization, an external integer is used. Adding or XOR'ing this zero and storing it in x is a run-time no-op that "taints" the stride, and consequently the index expression, making it opaque to the optimization analysis. The compiler can no longer infer how this variable behaves, even though we know it is always zero. A relevant extract of the file ifftw.h from which this is derived:
extern const INT X(an_INT_guaranteed_to_be_zero);
#ifdef PRECOMPUTE_ARRAY_INDICES
...
#define MAKE_VOLATILE_STRIDE(nptr, x) (x) = (x) + X(an_INT_guaranteed_to_be_zero)
#else
...
#define ESTIMATED_AVAILABLE_INDEX_REGISTERS 16
#define MAKE_VOLATILE_STRIDE(nptr, x) \
(nptr <= ESTIMATED_AVAILABLE_INDEX_REGISTERS ? \
0 : \
((x) = (x) ^ X(an_INT_guaranteed_to_be_zero)))
#endif /* PRECOMPUTE_ARRAY_INDICES */
Depending on the branch, the optimization is either suppressed completely, or allowed only on the condition that the pointers are expected to fit into a guess at the number of available index registers. When it allows the optimization, the macro simply expands to the constant zero and does nothing.
Some etymology: The macro MAKE_VOLATILE_STRIDE derives its name from the volatile keyword which indicates that a value may change between different accesses, even if it does not appear to be modified. This keyword prevents an optimizing compiler from optimizing away subsequent reads or writes and thus incorrectly reusing a stale value or omitting writes. (Wikipedia)
Why the volatile keyword alone is not sufficient here, rather than XOR'ing in an external value, I don't know.
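For what it's worth, here is a stripped-down C sketch of the trick; the names are invented for illustration, and the zero must live in another translation unit so the optimizer cannot see its value:
/* Defined in some other translation unit, so the optimizer cannot
   see that its value is 0. */
extern const int an_int_guaranteed_to_be_zero;

void scale(double *p, int stride, int n) {
    /* A run-time no-op, but the compiler can no longer prove what
       stride is, so it is less inclined to lift one pointer per
       p[i * stride] expression into its own register. */
    stride ^= an_int_guaranteed_to_be_zero;
    for (int i = 0; i < n; ++i)
        p[i * stride] *= 2.0;
}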

How should I enable cl_khr_fp64 in OpenCL?

I'm trying to get double precision to work in my OpenCL kernel but I'm having problems enabling cl_khr_fp64. If I put #pragma OPENCL EXTENSION cl_khr_fp64 : enable at the top of my kernel file and define a variable double u = 5.0; then it defines it and allows me to +-*/ on u. But if I try to do any math functions, for example double u = exp(5.0); it throws an error that it can't find the overloaded exp function for type double. Something weird I found is that if I check if cl_khr_fp64 is defined via
#ifdef cl_khr_fp64
#pragma OPENCL EXTENSION cl_khr_fp64 : enable
#elif defined(cl_amd_fp64)
#pragma OPENCL EXTENSION cl_amd_fp64 : enable
#else
#error "Double precision floating point not supported by OpenCL implementation."
#endif
Then it throws the error that double precision isn't supported. If I enable it unconditionally it gets enabled, but if I first check whether it can be enabled, it says it can't.
I've checked the extensions on my card and cl_khr_fp64 is listed, and I also checked CL_DEVICE_DOUBLE_FP_CONFIG using clGetDeviceInfo and it returns 63. I'm using a Mac Pro on Yosemite with the AMD FirePro D700. I'm wondering if I enabled cl_khr_fp64 in the wrong place or something. The contents of my mykernel.cl file are below. It's just a modification of Apple's 'hello_world' OpenCL Xcode project. The code, as written, works just fine, but if I change the line from double u = (5.0); to double u = exp(5.0); it doesn't work. Ultimately I want to use math functions on double variables. Any help would be greatly appreciated!
#pragma OPENCL EXTENSION cl_khr_fp64 : enable
__kernel void square5(global double* input, global double* output, double mul, int nv)
{
    size_t i = get_global_id(0);
    double u = (5.0);
    float left = u/1.2;
    if (i == 0) {
        output[i] = mul*pow((float)u,left)*input[i]*input[i];
    } else if (i == nv-1) {
        output[i] = mul*u*input[i]*input[i];
    } else {
        output[i] = 0.25*mul*u*(input[i-1] + input[i+1])*(input[i-1] + input[i+1]);
    }
}
Double precision was made a core-optional feature in OpenCL 1.2 (which should be the version that your device supports under OS X). This means that you shouldn't need to enable the extension in order to use it, if it is supported by the device. Enabling the extension shouldn't have any negative effect however.
You are not doing anything wrong, so this is likely a bug in Apple's OpenCL implementation. The same code (with the exp() function) compiles fine on my Macbook for the devices that support double precision. So, if your device definitely reports that it supports double precision, then you should file a bug in Apple's Bug Reporting System.
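If you want to re-check from the host side what clGetDeviceInfo reports, a minimal C sketch (error handling omitted, first GPU device assumed) looks like this:
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_device_fp_config cfg = 0;
    clGetDeviceInfo(device, CL_DEVICE_DOUBLE_FP_CONFIG,
                    sizeof(cfg), &cfg, NULL);
    /* A nonzero value (63 in the question) means the device supports
       double precision. */
    printf("CL_DEVICE_DOUBLE_FP_CONFIG = %llu\n",
           (unsigned long long)cfg);
    return 0;
}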

Macro without definition in C

What is the use/applicability of a function-like macro with an empty definition:
#ifndef __SYSCALL
#define __SYSCALL(a, b)
#endif
One can find this macro on a Linux system in the header file /usr/include/asm/msr.h.
I also noticed a macro of the following kind:
#define _M(x) x
The only reason I can think of to define this kind of macro is to make code uniform, like in #define SOMETHING (1 << 0). Is there any other hidden (better) use of this kind of macro?
An answer with an example would be very helpful. Also, can someone provide a text/link to read more about this?
One of the most common case of a macro of this form:
#define _M(x) x
is to provide backwards compatibility for compilers that only supported the original K&R dialect of C, which predates the now-ubiquitous ANSI C dialect. In the original K&R dialect of the language, function arguments were not specified when declaring the function. In 1989, ANSI standardized the language and incorporated a number of improvements, including function prototypes that declare the number and types of arguments.
int f(int x, double y); /* ANSI C. K&R compilers would not accept this */
int f(); /* Function declared in the original K&R dialect */
While compilers that support the original K&R dialect of C are rare (or extinct) these days, a lot of software was written when both kinds of compilers needed to be supported, and macros provided an easy way to support both. There are still a lot of headers lying around that provide this backwards compatibility.
To provide backwards compatibility for K&R compilers, many header files have the following:
#if ANSI_PROTOTYPES
# define _P(x) x
#else
# define _P(x) ()
#endif
...
int f _P((int x, double y));
If the ANSI_PROTOTYPES definition has been correctly set (either by the user or by some prior #ifdef logic), then you get the desired behavior:
If ANSI_PROTOTYPES is defined, the definition expands to int f(int x, double y).
If ANSI_PROTOTYPES is not defined, the definition expands to int f().
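In other words, the single _P declaration covers both dialects; roughly, the preprocessor produces one of the following:
/* With ANSI_PROTOTYPES defined: */
int f(int x, double y);

/* Without it (original K&R dialect): */
int f();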
This is often used with conditional compilation to disable a macro by causing it to be preprocessed away to nothing. For example (simplified):
#ifdef DEBUG
#define ASSERT(x) if(!(x)) { abort(); }
#else
#define ASSERT(x) /* nothing */
#endif
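Usage then reads the same in both builds; in a non-DEBUG build the whole check disappears. A self-contained sketch (repeating the macro from above):
#include <stdlib.h>

#ifdef DEBUG
#define ASSERT(x) if(!(x)) { abort(); }
#else
#define ASSERT(x) /* nothing */
#endif

int divide(int a, int b) {
    ASSERT(b != 0);   /* the whole check vanishes unless DEBUG is defined */
    return a / b;
}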
Just a follow-up to my question.
I got good answers, but I am also adding some more helpful examples of where macros with empty definitions are useful; one may find them helpful in the future:
(1): Why do I see THROW in a C library?
It is used to share a header file between C and C++. The macro name is __THROW(x):
#ifdef __cplusplus
#define __THROW(x) throw(x)
#else
#define __THROW(x)
#endif
(2): To eliminate warnings when a function parameter isn't used.
This use is for C++. In C it will cause a "too few arguments" error, but in C++ it works with no error:
#define UNUSED(x)
int value = 0;
int foo(int UNUSED(value))
{
    return 42;
}
int main() {
    foo(value);
}
(for this I added c++ tag in my question)
Additionally,
(3): The use of #define _M(x) x is as follows, just to make code line up uniformly:
/* Signed. */
# define INT8_C(c) c
# define INT16_C(c) c
# define INT32_C(c) c
# if __WORDSIZE == 64
# define INT64_C(c) c ## L
# else
# define INT64_C(c) c ## LL
# endif
the file is: /usr/include/stdint.h
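For example, constants of every width can then be written the same way, with the suffix supplied only where the platform needs it (a small sketch; the expansion noted in the comment assumes a 64-bit __WORDSIZE):
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* INT8_C(100) expands to plain 100; INT64_C(9000000000) picks up
       an L (or LL) suffix depending on __WORDSIZE, but the call site
       is written identically either way. */
    int8_t  small = INT8_C(100);
    int64_t big   = INT64_C(9000000000);
    printf("%d %lld\n", (int)small, (long long)big);
    return 0;
}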
It means that code that uses that macro will conditionally preprocess away to nothing.
As simple examples, consider debug code, logging or assertions.
This is probably a debug macro or a platform macro. For example, let's say I have a debugger attached to INT3. I might have this when I'm debugging:
#define debug() INT3()
Then, to be safe, I'll add this to production code (to make sure I took them all out):
#define debug()
This looks like something similar.
It could be that in some cases, on some systems, this code needs to make a call, for example on a certain CPU architecture or OS. But on your system it is just a no-op.
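A hedged sketch of how the two definitions above are typically tied together in one header (DEBUG_BUILD and INT3 are placeholders, not real APIs):
#ifdef DEBUG_BUILD
#define debug() INT3()   /* break into the attached debugger */
#else
#define debug()          /* production: compiles away to nothing */
#endif

void process(void) {
    debug();   /* harmless in production builds */
}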