How would you go about getting the absolute value of a number in brainfuck?
I originally thought squaring the number ([->+>+<<]>>[-<<+>>]<<[>[->+>+<<]>>[-<<+>>]<<<-]) and then taking the square root would work, but I can't think of a way to take a square root.
One way to compute the absolute value of a number is to identify its sign. If you know how your program represents negative numbers, there is a reddit answer by /u/danielcristofani explaining that you can check the sign of a number by doubling it and seeing whether it becomes zero in the process. E.g. with memory layout 0 0 x 0, this should work, producing 0 f 0 0, where f is the sign flag:
[>++[<]<[[-]+<+<]>>-]>[-]<
If necessary, you can then apply an x = -x algorithm, for example this wrapping version:
temp0[-]
x[temp0-x-]
temp0[x-temp0+]
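In arithmetic terms, this wrapping negation on an 8-bit cell is just subtraction modulo 256. Here is a one-line MATLAB model of the cell semantics (a sketch, not brainfuck):
neg8 = @(x) mod(256 - x, 256);  % wrapping x = -x on a byte-sized cell
neg8(2)    % 254, the two's complement encoding of -2
neg8(254)  % 2
neg8(0)    % 0, since 256 wraps back to 0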
We can do a bit better than that. Beginning with 0 x 0 0 with the pointer at x,
[<++[>->]>[[<-->-]>+>]<<]<[>>+<<--]
results in 0 0 |x| 0 with the pointer at the leftmost cell of the four. This assumes the number was in the usual two's complement representation, where 255 = -1 and 254 = -2 and so on (if cells are bytes). Which in turn assumes that in handling numbers up to this point, the program has been careful not to overflow past 127 or underflow past -128. Doable, but depending what's being done with these numbers it might be better to represent the sign separately from the beginning.
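To see what this computes, here is |x| on an 8-bit two's-complement cell modeled in MATLAB (a sketch of the byte semantics, not of the brainfuck itself):
abs8 = @(x) (x < 128) .* x + (x >= 128) .* (256 - x);  % high bit set means negative
abs8(3)    % 3
abs8(254)  % 2, since 254 encodes -2
abs8(128)  % 128: -128 has no positive counterpart in 8 bits, so it maps to itself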
I was testing an operation like this:
[input] 3.9/0.1 : 4.1/0.1
[output] 39 40
I don't know why 4.1/0.1 ends up being treated as 40. If I add a round(), it works as expected:
[input] 3.9/0.1 : round(4.1/0.1)
[output] 39 40 41
What's wrong with the first operation?
In this Q&A I go into detail on how the colon operator works in MATLAB to create a range. But the detail that causes the issue described in this question is not covered there.
That post includes the full code for a function that imitates exactly what the colon operator does. Let's follow that code. We start with start = 3.9/0.1, which is exactly 39, and stop = 4.1/0.1, which, due to rounding errors, is just slightly smaller than 41, and step = 1 (the default if it's not given).
It starts by computing a tolerance:
tol = 2.0*eps*max(abs(start),abs(stop));
This tolerance is intended to ensure that the stop value is still reached when it is within tol of an exact number of steps, even if the last step would slightly overshoot it. Without a tolerance, it would be really difficult to build correct sequences using floating-point end points and step sizes.
However, then we get this test:
if start == floor(start) && step == 1
% Consecutive integers.
n = floor(stop) - start;
elseif ...
If the start value is an exact integer, and the step size is 1, then it forces the sequence to be an integer sequence. Unfortunately, it does so by taking the number of steps as the distance between floor(stop) and start. That is, it is not using the tolerance computed earlier in determining the right stop! If stop is slightly above an integer, that integer will be in the range. If stop is slightly below an integer (as in the case of the OP), that integer will not be part of the range.
It could be debated whether MATLAB should round the stop number up in this case or not. MATLAB chose not to. All of the sequences produced by the colon operator use the start and stop values exactly as given by the user. It leaves it up to the user to ensure the bounds of the sequence are as required.
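You can reproduce the effect of that branch directly (the variable names here are just for illustration):
start = 3.9/0.1;          % exactly 39
stop  = 4.1/0.1;          % one ulp below 41
n = floor(stop) - start   % floor discards the almost-41, so n = 1
The sequence then has n+1 = 2 elements, 39 and 40, and 41 is lost.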
However, if the colon operator hadn't special-cased the sequence of integers, the result would have been less surprising in this case. Let's add a very small number to the start value, so it's not an integer:
>> a = 3.9/0.1 : 4.1/0.1
a =
39 40
>> b = 3.9/0.1 + eps(39) : 4.1/0.1
b =
39.0000 40.0000 41.0000
Floating-point numbers suffer from loss of precision when represented with a fixed number of bits (64 bits in MATLAB by default). This is because there are infinitely many real numbers (even within a small range, say 0.0 to 0.1), whereas an n-bit binary pattern can represent only 2^n distinct numbers. Hence, not all real numbers can be represented: the nearest approximation is used instead, resulting in a loss of accuracy.
The exact quotient of the stored double-precision values of 4.1 and 0.1 is approximately
4.1/0.1 ≈ 40.9999999999999941713291207...
which rounds to the nearest representable 64-bit double, 40.9999999999999928945726423989981... (one ulp below 41). So, in essence, 4.1/0.1 < 41.0, and that's what the range operator sees. If you subtract, for example, 41 - 4.1/0.1, you get 7.105427357601002e-15, which is exactly eps(41). But when you round, you get the closest integer, 41, as expected.
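You can see all three values directly in MATLAB:
fprintf('%.17g\n', 4.1/0.1)   % prints 40.999999999999993
41 - 4.1/0.1                  % 7.105427357601002e-15, which equals eps(41)
round(4.1/0.1)                % 41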
The representation scheme for 64-bit double precision according to the IEEE 754 standard is as follows (see the example after this list):
The most significant bit is the sign bit (S), with 0 for positive numbers and 1 for negative numbers.
The following 11 bits represent the exponent (E).
The remaining 52 bits represent the fraction (F).
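A quick way to peek at these fields is MATLAB's hex display, which prints the raw bit pattern of a double (the sign and exponent occupy the first three hex digits):
format hex
4.1/0.1   % 40447fffffffffff, one ulp below 41, which is 4044800000000000
format short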
I am working on channel coding; specifically, encoding with bit information. For example, given an input bit vector x = [1 0 1] and a G matrix
G =
1 0 0 1
0 0 1 0
1 1 0 1
then the encoded symbol will be y = mod(x*G, 2) % mod converts the result to binary bits
y =
0 1 0 0
This is a very simple idea for bit-level encoding in channel coding. Now I want to map this work to packet encoding. Some papers mention that we can encode packets the same way if we consider each packet to be n bytes. For example, I have a text file whose size is 3000 bytes. I will divide the file into 1000 packets, so each packet has a size of 3 bytes. My problem is that I am done with bit encoding, but I have no idea how to work with packet encoding. Please let me know if you have worked on this problem. I hear that in bit encoding we only have two levels, 0 and 1, so we can use GF(2); if we work at the packet level, we must consider GF > 2. I am not sure about that.
My second question: what if the packet size equals 50 bytes instead of 3 bytes?
First, when you talk about GF(p^m), you mean a Galois field, where p is a prime number and m is the degree of the polynomial; hence, it is possible to define a field with p^m elements. In your case, p = 2 and m = 3. Once you have the polynomial, you can apply it using two different encoding schemes:
Block coding. You can split the packet into several blocks of 3 bits and encode each block independently. Use zero-padding in case they are not multiples.
Convolutional coding. You apply the polynomial as a sliding algorithm.
The second approach is more robust and has lower delays.
The GF terminology is used to work with polynomials; the concepts of extended Galois fields and polynomials are not very complicated, but they take time to understand. In practice, however, if you have a polynomial expression you just apply it to the bit sequence in whatever fashion you want: block or convolutional (or fountain).
Your matrix G is actually defining 4 polynomials for the input X = [1 x x^2], namely Y1 = 1 + x^2, Y2 = x^2, Y3 = x, and Y4 = 1 + x^2.
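As a concrete illustration of the packet-level idea: if each byte of a packet is treated as one symbol of GF(2^8), the same y = x*G encoding carries over, with all arithmetic done in the field. Here is a sketch using the gf type from MATLAB's Communications Toolbox (the packet bytes are made up, and reusing your G is just for illustration):
m = 8;                                    % one byte per symbol, i.e. GF(2^8)
packet = gf([17 3 200], m);               % a hypothetical 3-byte packet
G = gf([1 0 0 1; 0 0 1 0; 1 1 0 1], m);   % your generator, now over GF(2^8)
y = packet * G                            % multiply/add happen in GF(2^8), no mod needed
For a 50-byte packet, the generator would simply have 50 rows; the field stays GF(2^8) because each symbol is still one byte.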
I have two vectors in Matlab, z and beta. Vector z is a 1x17:
1 0.430742139435890 0.257372971229541 0.0965909090909091 0.694329541928697 0 0.394960106863064 0 0.100000000000000 1 0.264704325268675 0.387774594078319 0.269207605609567 0.472226643323253 0.750000000000000 0.513121013402805 0.697062571025173
... and beta is a 17x1:
6.55269487769363e+26
0
0
-56.3867588816768
-2.21310778926413
0
57.0726052009847
0
3.47223691057151e+27
-1.00249317882651e+27
3.38202232046686
1.16425987969027
0.229504956512063
-0.314243264212449
-0.257394312588330
0.498644243389556
-0.852510642195370
I'm dealing with some singularity issues, and I noticed that if I want to compute the dot product of z*beta, I potentially get 2 different solutions. If I use the * command, z*beta = 18.5045. If I write a loop to compute the dot product (below), I get a solution of 0.7287.
summation = 0;
for i = 1:17
    addition = z(1,i) * beta(i);
    summation = summation + addition;
end
Any idea what's going on here?
Here's a link to the data: https://dl.dropboxusercontent.com/u/16594701/data.zip
The problem here is that addition of floating point numbers is not associative. When summing a sequence of numbers of comparable magnitude, this is not usually a problem. However, in your sequence, most numbers are around 1 or 10, while several entries have magnitude 10^26 or 10^27. Numerical problems are almost unavoidable in this situation.
The wikipedia page http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems shows a worked example where (a + b) + c is not equal to a + (b + c), i.e. demonstrating that the order in which you add up floating point numbers does matter.
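A minimal MATLAB demonstration:
(1 + 1e16) - 1e16   % 0: the 1 is completely absorbed by 1e16
1 + (1e16 - 1e16)   % 1
The same absorption happens in your data: the terms of magnitude 1e26 and 1e27 swallow the terms of magnitude 1, so the result depends on the order in which the products are accumulated.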
I would guess that this is a homework assignment designed to illustrate these exact issues. If not, I'd ask what the data represents to suss out the appropriate approach. It would probably be much more productive to find out why such large numbers are being produced in the first place than trying to make sense of the dot product that includes them.
As you know, a single-precision number is saved in memory in the following format:
(-1)^s * 1.f * 2^e
and zero is saved like this: 1.0000000000000000 * 2^-126.
Now, if I multiply it by another floating-point number like 3.37, i.e. (-1)^0 * 1.10101111 * 2^128,
it will not be 0 in reality, but in the computer it will be 0. How and why?
As pointed out here (Wikipedia, sorry ...), there are special values for the exponent which are treated differently. If the exponent is zero, the formula for calculating the value of the number is
(-1)^s * 0.f * 2^(-126) # note 0.f here, instead of the 1.f used for all other exponents
So, a floating point zero has simply all bits set to zero (i.e. f=0, s=0, e=0). The multiplication algorithms of course have to take care of this "special" exponent and set the result to zero in this case (more specifically to +Zero or -Zero accordingly ...)
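You can confirm that single-precision zero is the all-zero bit pattern, e.g. in MATLAB:
dec2bin(typecast(single(0), 'uint32'), 32)   % 32 zeros: s = 0, e = 0, f = 0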
Zero is (typically) a special case in floating point representations, and in IEEE floating point, zero is represented as 0.0 * 2 ^ -126 (or whatever the exponent is—it really doesn't matter).
I'll say that the math unit of the CPU has special handling for the "special" floating-point numbers, like NaN, Infinity and 0 (note that technically, in IEEE binary floating point, there are two zeros, a positive and a negative one), and knows what to do in each of these cases.
If you are interested, here http://steve.hollasch.net/cgindex/coding/ieeefloat.html there is a table that shows what happens when you add/multiply the "special" numbers with each other.
why: the set of floating-point numbers isn't continuous like the set of real numbers in mathematics. Therefore some numbers can't be represented exactly and are rounded to the nearest representable number.
how: it's being rounded :)
Rounding errors. Computers are finite.
I was wondering what is better style / more efficient:
x = linspace(-1, 1, 100);
or
x = -1:0.01:1;
As Oli Charlesworth mentioned, with linspace you divide the interval [a,b] into N points, whereas with the : form you step out from a with a specified step size (default 1) until you reach b.
One thing to keep in mind is that linspace always includes the end points, whereas the : form includes the second end point only if the step size lands on it exactly at the last step; otherwise, it falls short. Example:
0:3:10
ans =
0 3 6 9
That said, which of the two approaches I use depends on what I need to do. If all I need is to sample an interval with a fixed number of points (and I don't care about the step size), I use linspace.
In many cases, I don't care if it doesn't fall on the last point, e.g., when working with polar co-ordinates, I don't need the last point, as 2*pi is the same as 0. There, I use 0:0.01:2*pi.
As always, use the one that best suits your purposes, and that best expresses your intentions. So use linspace when you know the number of points; use : when you know the spacing.
[Incidentally, your two examples are not equivalent; the second one will give you 201 points.]
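You can check the counts directly:
numel(linspace(-1, 1, 100))   % 100
numel(-1:0.01:1)              % 201
linspace(-1, 1, 201) would produce the same 201 points (up to rounding, as discussed below).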
As Oli already pointed out, it's usually easiest to use linspace when you know the number of points you want and the colon operator when you know the spacing you want between elements.
However, it should be noted that the two will often not give you exactly the same results. As noted here and here, the two approaches use slightly different methods to calculate the vector elements (here's an archived description of how the colon operator works). That's why these two vectors aren't equal:
>> a = 0:0.1:1;
>> b = linspace(0,1,11);
>> a-b
ans =
1.0e-016 *
Columns 1 through 8
0 0 0 0.5551 0 0 0 0
Columns 9 through 11
0 0 0
This is a typical side-effect of how floating-point numbers are represented. Certain numbers can't be exactly represented (like 0.1) and performing the same calculation in different ways (i.e. changing the order of mathematical operations) can lead to ever so slightly different results, as shown in the above example. These differences are usually on the order of the floating-point precision, and can often be ignored, but you should always be aware that they exist.
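For instance, 0.1 itself has no exact binary representation; printing extra digits reveals the value actually stored:
fprintf('%.20f\n', 0.1)   % prints 0.10000000000000000555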