This question already has answers here:
Using ymm registers as a "memory-like" storage location (2 answers)
Efficient 128-bit addition using carry flag (2 answers)
How does Rust's 128-bit integer `i128` work on a 64-bit system? (4 answers)
Can long integer routines benefit from SSE? (1 answer)
Closed 1 year ago.
Since x86_64 has 14 general-purpose registers (rsp and rbp are reserved for the stack), could you make an 896-bit integer?
Or could you even use all 32 zmm registers (512 bits each) to make a 16384-bit integer, moving parts of it into the 64-bit registers to do the arithmetic? For this question I don't care whether GMP would be faster.
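In principle, yes. A wide integer is just a chain of 64-bit limbs, and x86-64's adc instruction propagates the carry from one limb to the next. Here is a minimal C sketch of the idea (add896 and the limb-array layout are my own illustration, not a standard API); a compiler can lower the carry propagation in the loop body to an add/adc chain:

#include <stdint.h>

#define LIMBS 14  /* 14 free 64-bit registers -> 14 * 64 = 896 bits */

/* Add two 896-bit integers stored as arrays of 64-bit limbs,
   least-significant limb first. Returns the carry out of the top limb. */
uint64_t add896(uint64_t sum[LIMBS],
                const uint64_t a[LIMBS],
                const uint64_t b[LIMBS])
{
    uint64_t carry = 0;
    for (int i = 0; i < LIMBS; i++) {
        uint64_t t = a[i] + carry;   /* carry in from the previous limb */
        carry = (t < carry);         /* did a[i] + carry wrap around?   */
        sum[i] = t + b[i];
        carry += (sum[i] < b[i]);    /* did t + b[i] wrap around?       */
    }
    return carry;
}

Note that one 896-bit value already occupies all 14 free registers, so the second operand would have to live in memory anyway; that is fine, since adc accepts a memory source operand.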
This question already has answers here:
Dart rounding errors (2 answers)
Is floating point math broken? (31 answers)
Closed 8 months ago.
I couldn't resolve the following quirk when doing simple addition in Dart.
void main() {
  print(5.9 + 0.7);
}
The output is:
6.6000000000000005
But when I use some different numbers:
void main() {
  print(5.9 + 0.6);
  print(5.9 + 0.7);
  print(5.9 + 0.8);
  print(5.9 + 0.9);
}
The output is:
6.5
6.6000000000000005
6.7
6.800000000000001
And here is a different example:
void main() {
  print(25.90 + 20.70 + 4);
}
The output is:
50.599999999999994
What may be the reason for this strangeness?
Is there any suggested solution to this problem?
I'd recommend you read this article: What Every Computer Scientist Should Know About Floating-Point Arithmetic
From this article, under "Rounding Error": "Squeezing infinitely many real numbers into a finite number of bits requires an approximate representation. Although there are infinitely many integers, in most programs the result of integer computations can be stored in 32 bits. In contrast, given any fixed number of bits, most calculations with real numbers will produce quantities that cannot be exactly represented using that many bits. Therefore the result of a floating-point calculation must often be rounded in order to fit back into its finite representation. This rounding error is the characteristic feature of floating-point computation."
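In Dart specifically, the usual approach is to keep the full double for arithmetic and round only when displaying, for example with the standard toStringAsFixed; a small sketch:

void main() {
  final sum = 5.9 + 0.7;
  print(sum);                    // 6.6000000000000005
  print(sum.toStringAsFixed(1)); // 6.6 -- rounded for display only
}

If you need exact decimal arithmetic (e.g. for money), store values as integer counts of the smallest unit, such as cents, instead of doubles.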
This question already has answers here:
Sending data to workers (3 answers)
Saving time and memory using parfor? (2 answers)
Closed 4 years ago.
I'm new to parallel processing; here's my problem:
I have a big data variable that cannot fit twice in RAM. Therefore, this won't work:
for ind = 1:4
    data{ind} = load_data(ind);
end
parfor ind = 1:4
    process_longtime(data{ind});
end
It fails with an out-of-memory error. My hypothesis is that MATLAB tries to copy the whole data variable to every worker.
If this is correct, is there a way to distribute the data in 4 (or n) parts to the workers, so that they do not need access to the whole data variable?
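One pattern that avoids the copy, assuming load_data can run on a worker (i.e. the data files are visible to the workers), is to move the loading inside the parfor so each worker only ever holds its own slice; a sketch:

parfor ind = 1:4
    % Each worker loads only its own part; the full data
    % variable never exists on the client at all.
    data_part = load_data(ind);
    process_longtime(data_part);
end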
This question already has an answer here:
Importing Data to Matlab (1 answer)
Closed 8 years ago.
My data is a tab-delimited mixture of strings and values. importdata is working pretty well, but it doesn't give more than 4 digits of precision. How can I fix that? I really need more.
Thanks in advance!
By default, MATLAB displays only 4 digits of precision, but it calculates with many more digits internally.
Try
format long
to see a more precise representation of your data.
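For example (the file name is a placeholder; for mixed text/numeric files, importdata returns a struct with a data field):

A = importdata('measurements.txt');  % placeholder file name
A.data(1)                            % shown with ~4 digits, e.g. 3.1416
format long                          % switch display to ~15 significant digits
A.data(1)                            % e.g. 3.141592653589793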
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I want to choose an encoding scheme for data storage. I have very little memory available. Which encoding would best utilize the available space?
ANSI, UTF, or any other.
The data consists of capital letters only.
If you know the frequency distribution of letters, Huffman Coding is a good balance between complexity, speed and efficiency.
If you don't know the distribution of letters or they are random, just store them 5 bits at a time. For example, consider the string "ABCDE". The letter numbers are 0, 1, 2, 3, 4. Converted to binary, this is:
00000 00001 00010 00011 00100
Now you just group every 8 bits into bytes:
00000000 01000100 00110010 0xxxxxxx
You need to store the length too, so that you know that the trailing bits of the last byte (7 of them in this example) carry no useful data.
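A minimal C sketch of this packing (pack5 is my own name for it; it produces exactly the MSB-first layout shown above):

#include <stdint.h>
#include <string.h>

/* Pack capital letters 'A'..'Z' into 5 bits each, MSB first.
   out must hold at least (len * 5 + 7) / 8 bytes; the return
   value is the number of bytes written. Store len separately. */
size_t pack5(const char *s, size_t len, uint8_t *out)
{
    memset(out, 0, (len * 5 + 7) / 8);
    size_t bitpos = 0;
    for (size_t i = 0; i < len; i++) {
        uint32_t code = (uint32_t)(s[i] - 'A');       /* 0..25 */
        for (int b = 4; b >= 0; b--, bitpos++) {
            if (code & (1u << b))
                out[bitpos / 8] |= (uint8_t)(0x80u >> (bitpos % 8));
        }
    }
    return (bitpos + 7) / 8;
}

For "ABCDE" (len = 5) this writes the four bytes shown above, with the last 7 bits as padding.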
If code space is of no concern and you just want to pack the strings as well as you can, you could use Huffman coding or Arithmetic coding even with a uniform frequency distribution to pack each character into log2(26) bits on average, which is slightly less than 5 (namely, 4.7 bits).
Closed. This question is off-topic. It is not currently accepting answers.
Closed 12 years ago.
I didn't understand this, because 2^32 is 4 gigabits, not gigabytes, right? Since that's 2^2 * 1024 * 1024 * 1024 bits, right? Am I wrong?
The smallest individually addressable unit of memory is a byte. Bits don't have addresses. You have to read a byte or more and then do bit masking and such to get at the individual bits.
As far as I can recall from my college days, this is how it goes:
If the address bus is 32 bits wide, then the total number of memory addresses that can be addressed is 2^32 = 4294967296.
However, these are 4294967296 addresses of memory locations. Since each memory location holds 1 byte, this gives us 4294967296 bytes that can be addressed.
Hence 4 GB of memory can be addressed.
No, it is gigabytes. A byte has 8 bits, so you have to multiply the resulting number by 8 to get bits. As john said in his answer, you can't address individual bits; you have to do bit shifting and masking to get at them.
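The arithmetic, spelled out in a small C program just to make the byte/bit bookkeeping concrete:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t addresses = 1ULL << 32;  /* 2^32 addressable locations, 1 byte each */
    uint64_t bits = addresses * 8;    /* 8 bits per byte */

    printf("%llu bytes = %llu GB\n",
           (unsigned long long)addresses,
           (unsigned long long)(addresses >> 30));  /* 4 GB  */
    printf("%llu bits = %llu Gb\n",
           (unsigned long long)bits,
           (unsigned long long)(bits >> 30));       /* 32 Gb */
    return 0;
}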
In the old console days, SNES and Megadrive games were measured in megabits, because an 8-megabit game sounds bigger than a 1-megabyte game. In the end most people just said "8 megs", so the confusion gave most people the impression of 8 megabytes. I'm not sure if brett is talking about SNES or Megadrive programming, but remember: 8 megabits = 1 megabyte.
The above answer solves it. If you wish to address more than 4 GB, you can use an offset memory register, which can help you address a wider range.