Floating point invalid register access - SPARC

While writing assembly code for SPARC V8 I get an "invalid fp register access" error.
I just want to know why double-precision instructions must use only even-numbered registers. What is the problem with giving odd-numbered registers?
Why do we get the invalid floating point register access error?

An invalid floating point register access error can occur (see SPARC Floating Point Instructions):
If you did not include the -mhard-float flag while compiling your code.
If you are using double-precision floats with odd-numbered registers. SPARC V8 is a 32-bit processor, so each floating point register is 32 bits wide; a double-precision value occupies an even/odd register pair, and the instruction must name the even-numbered register of that pair.
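If it helps, here is a minimal C sketch (the file name and exact flags are just an example) that you can compile for SPARC V8 and inspect, to see how the compiler itself obeys the even-register rule for doubles:

/* fpdemo.c - compile for SPARC V8 with hardware floating point, e.g.:
 *     gcc -mcpu=v8 -mhard-float -O2 -S fpdemo.c
 * In the generated fpdemo.s the double operands live in even/odd register
 * pairs (%f0/%f1, %f2/%f3, ...) and the double-precision add instruction
 * (faddd) names only the even register of each pair.
 */
double add(double a, double b)
{
    return a + b;   /* emitted as faddd with even-numbered registers */
}

Hand-coding faddd (or a double load/store) with an odd-numbered %f register breaks that pairing, which appears to be what produces the error you are seeing.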

Related

Why does NumberLong(9007199254740993) match NumberLong(9007199254740992) in MongoDB from the mongo shell?

This happens when the given number is big enough (greater than 9007199254740992). With some more tests, I even found that many adjacent numbers can match a single number.
Not only does NumberLong(9007199254740996) match NumberLong("9007199254740996"), but so do NumberLong(9007199254740995) and NumberLong(9007199254740997).
When I want to act upon a record using its number, I can actually use three different adjacent numbers to get back the same record.
The accepted answer from here makes sense, I quote the most relevant part below:
Caveat: Don't try to invoke the constructor with a too large number, i.e. don't try db.foo.insert({"t" : NumberLong(1234657890132456789)}); Since that number is way too large for a double, it will cause roundoff errors. Above number would be converted to NumberLong("1234657890132456704"), which is wrong, obviously.
Here are some supplements to make things more clear:
Firstly, the mongo shell is a JavaScript shell, and JS does not distinguish between integer and floating-point values: all numbers in JS are represented as floating-point values. This means the mongo shell uses 64-bit floating point numbers by default. If the shell sees "9007199254740995" (with quotes), it treats it as a string and converts it to a 64-bit integer. But when we omit the double quotes, the shell sees the unquoted 9007199254740995 and treats it as a floating-point number.
Secondly, JS uses the 64-bit floating-point format defined in the IEEE 754 standard to represent numbers. The largest value it can represent is ±1.7976931348623157 × 10^308, and the smallest is ±5 × 10^-324.
There are an infinite number of real numbers, but only a limited number of real numbers can be accurately represented in the JS floating point format. This means that when you deal with real numbers in JS, the representation of the numbers will usually be an approximation of the actual numbers.
This brings up the so-called rounding error issue. Because integers are also represented in binary floating-point format, the reason for the loss of precision in the trailing digits is actually the same as for decimals.
The JS number format allows you to accurately represent all integers between -9007199254740992 (-2^53) and 9007199254740992 (2^53).
Here, since the numbers are bigger than 9007199254740992, rounding error certainly occurs. The binary representations of NumberLong(9007199254740995), NumberLong(9007199254740996) and NumberLong(9007199254740997) are the same. So when we query with any of these three numbers in this way, we are practically asking for the same thing, and as a result we get back the same record.
I think understanding that this problem is not specific to JS is important: it affects any programming language that uses binary floating point numbers.
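To make that last point concrete, here is a small, self-contained C sketch (not mongo-specific) using the same three adjacent integers; any language that stores them in an IEEE 754 double behaves the same way:

#include <stdio.h>

int main(void)
{
    /* The three adjacent integers from the question, stored as doubles. */
    double a = 9007199254740995.0;   /* rounds to ...996 */
    double b = 9007199254740996.0;   /* exactly representable (2^53 + 4) */
    double c = 9007199254740997.0;   /* also rounds to ...996 */

    printf("%.0f %.0f %.0f\n", a, b, c);   /* prints 9007199254740996 three times */
    printf("%d %d\n", a == b, b == c);     /* prints 1 1 */
    return 0;
}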
You are misusing the NumberLong constructor.
The correct usage is to give it a string argument, as stated in the relevant documentation.
NumberLong("2090845886852")

Numeric Precision Rounding Artifacts with PostgreSQL

We have an application that is attempting a bulk insert into a table within a PostgreSQL database (9.1 I think; I don't have access, I'm helping a coworker troubleshoot remotely). A trace shows the raw values are generated correctly and handed off to ODBC correctly.
The problem comes in with a column defined as NUMERIC without scale or precision. There seem to be 'random' rounding artifacts: sometimes it rounds up, sometimes down, with no relationship to the number of decimal places. This is seen when values from the bulk insert are then queried.
I know encoding can cause issues with strings, but I'm not sure if it matters with numeric data types. The database is Windows-1252 encoded and they are using the Unicode PostgreSQL driver. Finally, just FYI, it's on a 32-bit Windows VM with what looks like the default config_file parameters.
The question is: what would/could be the cause of this?
Thanks in advance.
The data type numeric is an arbitrary precision data type, not a floating point type like float8 (double precision) or float4 (real). I.e., it stores decimal digits that are handed to it without any rounding whatsoever. Numbers are reproduced identically. Only the exact format may depend on your locale or settings of the middleware and client.
The fact that precision and scale were not set lets the numeric column do that with almost¹ no limitation.
¹ Per the documentation:
up to 131072 digits before the decimal point; up to 16383 digits after the decimal point.
The gist of it: you can rule out the Postgres data type numeric as the source of the rounding effect. I don't know about the rest, especially since you did not provide exact version numbers, demo values, or a reproducible test case.
My shot in the dark would be that ODBC might treat the number like a floating point type. Maybe an outdated version?
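As a rough illustration of that guess, here is a small C sketch (the value is made up) showing what happens when a decimal with more digits than a double can hold takes a detour through a floating point type instead of being passed on as text, which is what an unconstrained numeric column would store exactly:

#include <stdio.h>

int main(void)
{
    /* Hypothetical value with more precision than a 64-bit double can hold. */
    const char *as_text = "1234567890.123456789";   /* numeric keeps this exactly */
    double      via_double = 1234567890.123456789;  /* a float8 detour rounds it */

    printf("sent as text : %s\n", as_text);
    printf("via double   : %.9f\n", via_double);    /* trailing digits already differ */
    return 0;
}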

What is the meaning of semantic density per instruction?

Can anyone tell me the meaning of "semantic density per instruction" and how a register-based architecture increases semantic density per instruction?
I've never heard the term used before, but I would assume it refers to the complexity and amount of work done per instruction. Instructions like those from the AES-NI extensions found in x86-based architectures do a lot of things internally when executed. Compare that with a classic RISC instruction that performs an integer add with two registers as operands and one register for the output: it does very little internally when executed.
I'm not sure why register architectures would specifically increase the "semantic density". I suppose it's possible to encode the operands with fewer bits; however, the flip side is that you must use more instructions to get your operands into the registers in the first place. Stack architectures require more instructions to achieve behaviour similar to register architectures, but those instructions can be encoded in less space. I guess if you ignore the instruction size then a register architecture can do more work per instruction, but this isn't a very meaningful metric...
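As a sketch of that trade-off (the mnemonics below are made up for illustration, not taken from any real ISA), consider how one C statement might be encoded on each style of machine:

/* One C statement, and roughly how each ISA style might encode it. */
int sum(int a, int b)
{
    return a + b;
    /* Register machine (operands named directly in the instruction):
     *     add  r3, r1, r2      ; 1 instruction, but 3 register fields to encode
     *
     * Stack machine (operands implicit on an evaluation stack):
     *     push a
     *     push b
     *     add                  ; more instructions, but each one is tiny
     */
}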

Double vs float on the iPhone

I have just heard that the iPhone cannot do doubles natively, thereby making them much slower than regular floats.
Is this true? Evidence?
I am very interested in the issue because my program needs high precision calculations, and I will have to compromise on speed.
The iPhone can do both single and double precision arithmetic in hardware. On the 1176 (original iPhone and iPhone3G), they operate at approximately the same speed, though you can fit more single-precision data in the caches. On the Cortex-A8 (iPhone3GS, iPhone4 and iPad), single-precision arithmetic is done on the NEON unit instead of VFP, and is substantially faster.
Make sure to turn off thumb mode in your compile settings for armv6 if you are doing intensive floating-point computation.
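If you want to check this on your own device, a crude C timing sketch like the one below (the loop body and iteration count are arbitrary, and it is nowhere near a rigorous benchmark) will at least show the relative difference between float and double on a given chip:

#include <stdio.h>
#include <time.h>

#define N 10000000

int main(void)
{
    volatile float  fs = 1.0f;   /* volatile keeps the loops from being optimized away */
    volatile double ds = 1.0;
    clock_t t0, t1, t2;
    long i;

    t0 = clock();
    for (i = 0; i < N; i++) fs = fs * 1.000001f + 0.000001f;
    t1 = clock();
    for (i = 0; i < N; i++) ds = ds * 1.000001  + 0.000001;
    t2 = clock();

    printf("float : %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("double: %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}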
This slide show gives insight into why there isn't good floating point support and why there is (the vector floating point unit). Apparently, it is important to check the "thumb mode" setting, which influences whether or not floating point support is on. This is not always an improvement. It also shows how to find the right instructions in the assembly code.
It also depends on which version of the phone you want to run your code on. The most recent one seems "more capable" at floating point math.
EDIT: here's an interesting read on floating point optimizations on the ARM with VFP and NEON SSE.
The ARM1176JZF-S manual says that it supports double precision floating point numbers. You should be in good shape. Here's a link to the PDF documentation. Later iPhones are Cortex chips, and certainly shouldn't be less capable.

Does scientific notation affect Perl's precision?

I encountered a weird behaviour in Perl. The following subtraction should yield zero as its result (which it does in Python):
print 7.6178E-01 - 0.76178
-1.11022302462516e-16
Why does this occur, and how can I avoid it?
P.S. Effect appears on "v5.10.0 built for x86_64-linux-gnu-thread-multi" (Ubuntu 9.04) and "v5.8.9 built for darwin-2level" (Mac OS 10.6)
It's not that scientific notation affects the precision so much as the limitations of representing floating point numbers in binary. This is a problem for any language that relies on the underlying architecture for number storage. See the answers to these questions in perlfaq4:
Why am I getting long decimals (eg, 19.9499999999999) instead of the numbers I should be getting (eg, 19.95)?
Why is int() broken?
If you need better number handling, check out the bignum pragma.
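For what it's worth, the -1.11022302462516e-16 in your output is exactly one unit in the last place (ulp) for doubles in [0.5, 1), which suggests the two literals were simply parsed to adjacent doubles. A tiny C sketch (link with -lm) shows that spacing; the same spacing applies to Perl's native numbers:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 0.76178;
    double next = nextafter(x, 1.0);              /* the adjacent representable double */

    printf("x            = %.17g\n", x);
    printf("next double  = %.17g\n", next);
    printf("gap (1 ulp)  = %.15g\n", next - x);   /* ~1.11022302462516e-16 */
    return 0;
}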