Some of the values I need to read in my ksy file are doubles, which I assume means the binary64 format. The native data types for a float won't stretch that far. Has anyone managed to represent this data type in Kaitai?
"binary64" is a normal IEEE 754 double-precision floats, occupying 64 bits = 8 bytes.
They're perfectly well supported by the vast majority of languages and, accordingly, Kaitai Struct offers built-in support for them as type: f8 (float, 8 bytes long).
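For illustration, here's a rough C sketch of what reading an f8 field boils down to: take 8 bytes and reinterpret them as an IEEE 754 double (assuming a little-endian field, as with endian: le in a ksy file; the byte values below are just an example):

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* Assemble 8 little-endian bytes into a binary64 bit pattern,
       then reinterpret that pattern as a double. */
    static double read_f8_le(const uint8_t buf[8])
    {
        uint64_t bits = 0;
        for (int i = 0; i < 8; i++)
            bits |= (uint64_t)buf[i] << (8 * i);

        double d;
        memcpy(&d, &bits, sizeof d);   /* reinterpret, no numeric conversion */
        return d;
    }

    int main(void)
    {
        /* 1.5 encoded as a little-endian binary64 (0x3FF8000000000000) */
        const uint8_t bytes[8] = {0, 0, 0, 0, 0, 0, 0xF8, 0x3F};
        printf("%f\n", read_f8_le(bytes));   /* prints 1.500000 */
        return 0;
    }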
If you're rather interested in larger floating-point types (binary128, binary256, i.e. quad or octuple precision), there is no built-in support for them in KS due to the lack of standard support for these types in most target languages. If you want something like that, the recommended way would be implementing one as an opaque type in a target language of your choice. That will likely require you to bring in some external library which implements the type using some kind of software emulation / complex arithmetic, as hardware support seems to be almost non-existent in commodity CPUs (like Intel or ARM) as of 2020.
For more details on these, see issue #101.
Related
I've been wondering: it's called SIMD, as in single instruction, multiple data. So why does it have single-data instructions?
For example, vaddss is the single-data equivalent of the multiple-data vaddps. Just about every SIMD instruction has a single-data version.
Why?
Why does SIMD have single-data instructions when it's called SIMD?
It isn't a SIMD instruction in that sense
vaddss is a scalar FP math instruction that operates on data in the FP/SIMD registers (XMM0..15). It exists because x87 is not a very convenient compiler target with its stack-based registers that often need fxch, and other quirks. Intel added a new way to do scalar FP math along with SSE1 (float) and SSE2 (double), which is fortunately baseline for x86-64 so everyone can just use it.
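For context, these scalar instructions are simply what ordinary, non-SIMD floating-point source code compiles to. A trivial example (the exact asm depends on the compiler, but a typical x86-64 compiler emits a single addss here):

    /* Plain scalar float math; nothing SIMD about the source.
       Typically compiles to roughly:  addss xmm0, xmm1 ; ret  */
    float add_floats(float a, float b)
    {
        return a + b;
    }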
People who call that a SIMD instruction are talking about one of:
Which registers it operates on. (XMM0 is 16 bytes wide and clearly a SIMD register, even when you only care about the low element holding a scalar value.)
The fact that it's an AVX instruction, so it was introduced with an ISA extension that was primarily aimed at SIMD usage, and thus is called a SIMD extension or instruction set.
Which also means it uses MXCSR for the rounding mode and for FP exception recording / unmasking. The kinds of exceptions it can take are the same as for other SSE/AVX instructions, which Intel documents as "SIMD Floating-Point Exceptions" as concise terminology to distinguish them from legacy x87.
Or they're talking about the use case of doing something to just the low element when the high elements hold actual data. (Quite rare, but something you could do; see the sketch after this list. Maybe more likely with sd scalar double, where the low double is one half of an XMM register.)
Or they're just plain wrong if they actually mean it in terms of Flynn's taxonomy of SISD vs. SIMD vs. MIMD etc. I highly doubt anyone would actually mean that, though. The ss and sd scalar FP math instructions are SISD, single-instruction single-data. And BTW, they only exist for FP math; x86 already has instructions like add eax, ecx for scalar integer math, and doesn't have scalar versions of paddb or even xorps.
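To make the "just the low element" case from the list above concrete, here's a small C intrinsics sketch (the data values are made up; _mm_add_ss is the intrinsic for addss):

    #include <immintrin.h>
    #include <stdio.h>

    int main(void)
    {
        /* All four lanes of both registers hold "real" data. */
        __m128 a = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);  /* lanes low..high: 10,20,30,40 */
        __m128 b = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);

        /* addss: low lane = 10 + 1; the upper three lanes are copied from a. */
        __m128 r = _mm_add_ss(a, b);

        float out[4];
        _mm_storeu_ps(out, r);
        printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);  /* 11 20 30 40 */
        return 0;
    }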
One reason for having separate scalar FP math instructions is that using addps would also operate on whatever garbage might be in the high elements of XMM registers. This can raise extra FP exceptions (usually masked, so only recorded in MXCSR, visible via fenv.h, but if unmasked they would trap to the OS).
With the upper elements all 0.0 (which isn't required by the calling convention, BTW), addps wouldn't raise any extra exceptions, but divps would divide by zero.
With non-zero garbage like small integers, the garbage might be the bit pattern of a subnormal float, or a result might be subnormal, causing huge slowdowns (a factor of ~100) as the CPU takes a microcode assist to handle subnormal input or output in many cases (or, when SSE1 was new on Pentium III, probably in all cases involving subnormals). Unless you set FTZ and DAZ (flush to zero, denormals are zero) like gcc -ffast-math does.
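For reference, setting FTZ and DAZ yourself is a two-liner with the SSE control-register intrinsics (this is roughly what gcc -ffast-math arranges at program startup; it trades strict IEEE subnormal handling for the absence of microcode assists):

    #include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE */
    #include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE */

    void enable_ftz_daz(void)
    {
        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);         /* subnormal results become 0 */
        _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON); /* subnormal inputs treated as 0 */
    }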
For instructions like xorps or paddq which don't do actual FP math, no FP exceptions or microcode assists are possible. You can just use them even if you only care about the low 32 or 64 bits of an XMM.
MMX or SSE2 had occasional uses in 32-bit code for doing scalar 64-bit integer math, with zeros or garbage in the upper bytes. MMX paddq mm0, mm1 is a SISD instruction, but SSE2 paddq xmm0, xmm1 is a SIMD instruction.
SSE1 was new in Pentium 3, where the SIMD execution units and registers were only 64 bits wide. addps decoded to 2 uops; addss decoded to 1. So there was a performance motivation, too, even in the best case.
This is also likely the reason for Intel's unfortunate design where sqrtss and cvtsi2ss and others merge into the destination, requiring either spending extra front-end bandwidth on xor-zeroing, or risking false dependencies: Why does adding an xorps instruction make this function using cvtsi2ss and addss ~5x faster? It's a short-sighted design decision to make them single-uop on Pentium 3, which Intel unfortunately followed in SSE2 for double precision, and stuck to for AVX and AVX-512 when they had a chance to introduce better versions with different semantics. At least the AVX versions take a 2nd source register to merge with, so you can pick a "cold" reg as a workaround; see my answer on the linked duplicate.
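The merge semantics are visible in the intrinsics, too: _mm_cvtsi32_ss forces you to supply a register for the upper elements to come from, and passing a freshly zeroed register corresponds to the xorps-before-cvtsi2ss idiom that breaks the false dependency (a sketch; compilers generally do this for you):

    #include <immintrin.h>

    float int_to_float_no_false_dep(int x)
    {
        /* cvtsi2ss into a zeroed register: the upper lanes come from _mm_setzero_ps(),
           so the result doesn't depend on whatever was in the destination before. */
        __m128 v = _mm_cvtsi32_ss(_mm_setzero_ps(), x);
        return _mm_cvtss_f32(v);   /* extract the low element */
    }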
It's normal for scalar FP to share registers with SIMD
It isn't necessary or useful to have yet another set of registers for scalar FP, and sharing with the x87 FPU or the general-purpose integer registers would each be worse for separate reasons.
It's totally normal on other ISAs for the SIMD registers to overlap or be the same as the scalar FP registers; some ISAs (like ARM) that didn't have weirdo designs like x87 didn't need new architectural state to introduce SIMD. For example, ARM's NEON q0..q15 16-byte registers map to pairs of the d0..d31 double-precision FP registers that already existed with VFPv3.
(I'm not sure if the partial-register aliasing was actually common in SIMD extensions for other ISAs, though. Probably some introduced new architectural state, or just used FP double-precision registers as 64-bit integer SIMD instead of 128-bit.)
In an OS kernel you often talk about saving "FPU state" on context switch (as opposed to just the general-purpose integer registers), and these days that's short-hand for FPU and SIMD state. e.g. in the Linux kernel, you need to use kernel_fpu_begin() before running instructions that use XMM/YMM/ZMM registers. (e.g. in the RAID5 / RAID6 drivers).
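A rough sketch of that kernel-side pattern (x86; the exact header has moved around between kernel versions, so treat this as illustrative):

    #include <asm/fpu/api.h>

    static void checksum_block_with_sse(void)
    {
        kernel_fpu_begin();   /* claim the FPU/SIMD state before touching XMM/YMM/ZMM */
        /* ... SSE/AVX code, e.g. RAID parity calculations ... */
        kernel_fpu_end();     /* release it so user-space state can be restored */
    }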
According to the docs, Postgres's double precision type implements IEEE 754: "The data types real and double precision are inexact, variable-precision numeric types. On all currently supported platforms, these types are implementations of IEEE Standard 754 for Binary Floating-Point Arithmetic (single and double precision, respectively), to the extent that the underlying processor, operating system, and compiler support it."
My question is then how do I check that "the underlying processor, operating system, and compiler support it"? Is it somewhat common that this is not the case?
Clarification
I want to check if my specific Postgres instance is compliant. Is there some kind of test(s) that I can do running SQL queries in order to verify or disprove that?
It is very common for today's computers to use IEEE 754 floating point numbers. So much so that even in the wide range of platforms supported by PostgreSQL, all do. The only way to check is to look at the specification and documentation of the hardware or software in question.
PostgreSQL uses whatever the compiler and machine provide for the C data type double. It does not implement floating point arithmetic itself.
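Since float8 is just the platform's C double, one hedged sanity check (on the C side, not via SQL) is to ask the compiler that built the server what its double looks like; an IEEE 754 binary64 has radix 2, a 53-bit significand, and a maximum binary exponent of 1024:

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        printf("FLT_RADIX    = %d\n", FLT_RADIX);    /* expect 2 */
        printf("DBL_MANT_DIG = %d\n", DBL_MANT_DIG); /* expect 53 */
        printf("DBL_MAX_EXP  = %d\n", DBL_MAX_EXP);  /* expect 1024 */
    #ifdef __STDC_IEC_559__
        puts("__STDC_IEC_559__ defined: compiler claims IEC 60559 (IEEE 754) arithmetic");
    #endif
        return 0;
    }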
The Swift library includes the bigEndian property, which can be used on integer types (such as Int, UInt, UInt8, UInt64, Int64, etc.) to convert them from host order (which might presumably be anything, but realistically will be big or little endian) to network byte order (which is big endian). There are some good SO answers referring to this, and a particularly complete one is here.
However, I've not found a good resource that covers arranging a Float (32-bit) or Double (64-bit) value into network byte order. Given that these types don't have a bigEndian property, I'm wondering whether there is some subtlety involved. (The linked question does discuss floating-point types, but I'm not sure it covers all the details that might be relevant.)
Specifically, I want to handle the 64 bit Double floating point type. I'd like a solution that will work on any platform where Swift is available.
Thank you.
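Not a Swift answer, but for illustration, the general technique (independent of language) is to reinterpret the double's 8-byte pattern as a 64-bit unsigned integer and emit those bytes most-significant first; a minimal C sketch, assuming the host double is an IEEE 754 binary64:

    #include <stdint.h>
    #include <string.h>

    static void double_to_network_bytes(double d, uint8_t out[8])
    {
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);               /* reinterpret, no conversion */
        for (int i = 0; i < 8; i++)
            out[i] = (uint8_t)(bits >> (56 - 8 * i)); /* most significant byte first */
    }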
I have learnt that word length is an ISA feature, which has to be implemented both in hardware and in software. I have only a vague idea of the answer, and I need correction or confirmation. Does the word length become the size of the general-purpose registers in the CPU? Does the word length become the size of 'int' (just plain int, not long or short) for a compiler?
The word length is the number of bits natively handled by the system. Common versions right now are 32-bit words and 64-bit words.
For example, a byte can hold a number from 0 to 255. However, a 32-bit integer ranges from 0 to 4,294,967,295. An integer is the native "word size" of the system, so it is 4 bytes wide on 32-bit systems and therefore covers a considerably larger range than 0-255.
In fact, in many systems/compilers/etc. types which are smaller than a system's native word size are converted to that word size simply because it's more efficient than trying to put multiple values into a single word. A boolean, for example, can be represented by a single bit. However, if you write a piece of software that uses 32 boolean values, it's not going to squeeze them all into a single word. Each will be assigned its own word when it runs on the metal.
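A quick way to see these choices on a particular machine (the sizes are implementation-defined, so this is a probe, not a rule; typical 64-bit desktop platforms print 4, 8, 8, 1):

    #include <stdio.h>
    #include <stdbool.h>

    int main(void)
    {
        printf("sizeof(int)   = %zu\n", sizeof(int));
        printf("sizeof(long)  = %zu\n", sizeof(long));   /* 8 on LP64 Linux/macOS, 4 on 64-bit Windows */
        printf("sizeof(void*) = %zu\n", sizeof(void *));
        printf("sizeof(bool)  = %zu\n", sizeof(bool));   /* one byte, not one bit */
        return 0;
    }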
I am taking the liberty of interpreting this question as being about the size of an integer in C or C++ on a computer. In that case this link will help: Does the size of an int depend on the compiler and/or processor?
However, if read literally, the word size of a CPU should be the size of its registers.
Hardware implementation: word length is the number of bytes the CPU fetches at a time, and it can also be called the natural size of the machine (though there is nothing natural about computers). In an implementation it also becomes the size of the CPU's registers, since the CPU needs registers to store what it fetches. Having said that, it is possible to use bigger registers for storage: IA-32 software (with a 32-bit word length) can run on x86-64 (with a 64-bit word length). Software implementation: the word length becomes the size of 'int' (just plain int, not long or short).
Does anyone have any experience with using the 16-bit floating-point type in an application? This relatively new data type is used in computer graphics. It's defined by several specs: OpenEXR, DirectX and the new IEEE 754-2008 standard.
At WinHEC 2008 Microsoft's Chas Boyd had a presentation evangelizing this data type. (I wasn't there, but I saw the slide deck.) "float-16 is the new byte".
My questions are: is anyone using this data type for anything outside of DirectX textures?
If so, why? What is your application doing?
If so, do you require full IEEE support, including denormals, NaNs and #Inf?
I have encountered this type in the DSP libraries for uClinux, including a software implementation of all the major operations. It is very nice for 8- and 16-bit processors, and far easier to handle in software than 32-bit or larger types.
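For reference, the half-to-float direction of such a software implementation is small enough to sketch here (minimal and unoptimized; real libraries also provide float-to-half and often use lookup tables):

    #include <stdint.h>
    #include <string.h>

    /* Convert an IEEE 754 binary16 (stored in a uint16_t) to a float. */
    static float half_to_float(uint16_t h)
    {
        uint32_t sign = (uint32_t)(h >> 15) << 31;
        uint32_t exp  = (h >> 10) & 0x1F;       /* 5-bit exponent, bias 15 */
        uint32_t mant = h & 0x3FF;              /* 10-bit significand */
        uint32_t bits;

        if (exp == 0) {
            if (mant == 0) {
                bits = sign;                    /* +/- zero */
            } else {
                /* Subnormal half: renormalize it into a normal float. */
                int e = -1;
                do { mant <<= 1; e++; } while ((mant & 0x400) == 0);
                mant &= 0x3FF;
                bits = sign | ((uint32_t)(127 - 15 - e) << 23) | (mant << 13);
            }
        } else if (exp == 0x1F) {
            bits = sign | (0xFFu << 23) | (mant << 13);             /* infinity or NaN */
        } else {
            bits = sign | ((exp - 15 + 127) << 23) | (mant << 13);  /* normal number */
        }

        float f;
        memcpy(&f, &bits, sizeof f);            /* reinterpret the bit pattern */
        return f;
    }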