I started looking into Brainfuck and found this line as part of a "Hello World" program:
-[>+<-------]>-.
This line produces the output "H". If I understood it correctly, for that to happen the current 'block' needs to hold the value 72 (ASCII 'H'), but as far as I understand it, the code does the following:
Set block [0] to 255 (because of subtracting from a block that's already equal 0)
Until [0] equals 0 add 1 to block [1] and subtract 7 from [0] (so 37 iterations)
Subtract 1 from [1] which is 37 by now
Output [1] so 36
So in the end it would be 36, not 72. Where am I going wrong here?
The mistake is in "Until [0] equals 0 [...] subtract 7 from [0] (so 37 iterations)".
255 divided by 7 isn't 37, it's about 36.4. After 37 iterations the loop won't end, because cell [0] contains -4 = 252. 252 is divisible by 7, so the loop ends after counting down through that, i.e. after another 36 passes, 73 in total.
The issue is that you're assuming you only wrap around once. You're not counting down through 255, you're counting down through 511, which divided by 7 is 73, which is then decremented to 72, as you state.
This assumes an 8-bit cell size, but that seems to be what the code was written for anyway. (9 bits would work too, but that's not a usual implementation.)
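If it helps, here is a minimal sketch (in MATLAB, since none of this is Brainfuck-specific) that just replays the 8-bit arithmetic of the loop; the variable names are mine, not anything from the original program:

cell0  = mod(0 - 1, 256);         % the leading '-' on a zeroed cell gives 255
passes = 0;                       % cell 1 gets one '+' per pass of the loop
while cell0 ~= 0
    cell0  = mod(cell0 - 7, 256); % the seven '-' inside the loop, with wrap-around
    passes = passes + 1;
end
cell1 = passes - 1;               % the final '>-' before the '.'
fprintf('%d passes, cell 1 = %d (%c)\n', passes, cell1, cell1)  % 73 passes, 72, 'H'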
I have read the PNG specification too many times and am still confused about how I should interpret the IDAT chunk. I decompressed it using zlib and got all of the bytes that my IDAT chunk contains.
I made an example image using Krita. It's a 3x2 PNG image containing a different color in every pixel.
See the 3 by 2 PNG image here
According to the PNG specification on filters, when the first byte of the IDAT chunk is 1, the filter method that has been applied is
Filtered(byte) = Original(byte) - Original(previous_byte)
With that formula in mind I decompressed my IDAT chunk (which was 29 bytes in length to store only 6 pixels). The first byte (which is byte number 0) contains the value 1. That is where the formula comes from.
Byte# Value
0 1
1 224
2 215
3 200
4 227
5 241
6 48
7 2
8 36
9 225
10 1
11 253
12 255
13 195
14 245
15 182
16 244
17 232
18 245
19 57
20 0
21 0
22 0
23 0
24 0
25 0
26 0
27 0
28 0
The first pixel is supposed to be RGB(224, 215, 200), which I reconstructed with an RGB-to-color converter. This looks pretty much the same as the original pixel in the image. Here are my thoughts about all the pixels:
Pixel 1: RGB(224, 215, 200) [read from byte 1, byte2 and byte3]
Pixel 2: RGB(195, 200, 248) [because byte 4:227 byte5:241 byte6:48]
Pixel 3: RGB(197, 236, 217) [because byte 7:2 byte8:36 byte9:225]
Pixel 4: RGB(198, 233, 217) [because byte10:1 byte11:253 byte12:255]
Pixel 5: RGB(137, 222, 142) [because byte13:195 byte14:245 byte15:182]
Pixel 6: RGB(107, 198, 131) [because byte16:244 byte17:232 byte18:245]
I have used the formula to get all the values from the pixels.
Reconstructing pixels 1, 2 and 3 looks pretty much right, but pixels 4, 5 and 6 are not what I expected. I think I am not reading the IDAT chunk the correct way. That could also explain why there are 29 bytes for only 6 RGB pixels. I expected 19 bytes, because 3 times 6 is 18, plus 1 byte for the filtering method.
The IHDR says that the bit depth is 8 and the color type is 2. From the table in the specification, each pixel is an R, G and B triple. Could someone point me in the right direction for reading the IDAT chunk and explain its length?
Your decompressed result length of 29 is not correct, which may have led to your confusion.
Your image is 3x2 RGB pixels. That would be 3*3*2 = 18 bytes of data, plus 1 extra (filter) byte per row: 20 bytes in total. Somehow you got 9 extra dummy bytes that are not part of the compressed data.
(I reconstructed your tiny image from the larger one and happily got the exact same numbers, else the explanation would necessarily be purely theoretical. For ease, I determined the offset of the zipped data with a hex viewer.)
>>> import zlib
>>> with open ('3x2b.png','rb') as f:
... result = f.seek (0x6a)
... data = f.read()
...
>>> d = zlib.decompress(data)
>>> print ([x for x in d])
[1, 224, 215, 200, 227, 241, 48, 2, 36, 225, 1, 253, 255, 195, 245, 182, 244, 232, 245, 57]
This 'unpacks' to the following two rows, with 3 RGB pixel values each:
filter RGB RGB RGB
1 (224,215,200) (227,241,48) (2,36,225)
1 (253,255,195) (245,182,244) (232,245,57)
All these values may be relative to an earlier result: the last complete row read before it, or the pixel to its left. For the first row, you must assume a row of all zeroes; the value "left" of the first pixel must be assumed to be 0 as well.
You see the two bytes marked 'filter'? That is where you went wrong. Each row has a filter byte of its own. You used the filter byte itself for the calculation of the second row.
Adding (the inverse of the "Sub" filter, as indicated by the filter type 1), with all additions taken modulo 256, yields
; start of row 0, filter is 1 and 'initial pixel' is (0,0,0)
(224,215,200) (224+227,215+241,200+48)
=(195,200,248)
(195+2,200+36,248+225)
=(197,236,217)
; restart for row 1, filter is 1 again and start value (0,0,0):
(253,255,195) (253+245,255+182,195+244)
=(242,181,183)
(242+232,181+245,183+57)
=(218,170,240)
... exactly the colors I started out with.
This is Filter 1 ("Sub") and so uses the values to its left; for Filter 2 ("Up"), you need to use the corresponding byte in the previously decoded row, and for Average and Paeth, you need both.
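To make that concrete, here is a small sketch of undoing filter type 1 for the first row; it's in MATLAB rather than Python simply to keep the byte arithmetic explicit, and the variable names (row, bpp, recon) are mine, not from any PNG library:

% Reverse filter 1 ("Sub") for one row of 8-bit RGB data (bpp = 3 bytes per pixel).
% Recon(x) = Filt(x) + Recon(x - bpp), modulo 256; bytes left of the row count as 0.
row   = [224 215 200 227 241 48 2 36 225];   % first row, filter byte already stripped
bpp   = 3;
recon = zeros(size(row));
for i = 1:numel(row)
    if i <= bpp
        left = 0;                 % no pixel to the left of the first pixel
    else
        left = recon(i - bpp);    % corresponding byte of the pixel to the left
    end
    recon(i) = mod(row(i) + left, 256);
end
disp(recon)   % 224 215 200 195 200 248 197 236 217 -> the three pixels of row 0

Running the same thing on the second row's nine bytes (again starting from zero, not from the previous row's last pixel) gives the other three colors.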
When I apply the function dwt2() to an image, I get the four subband coefficients. By choosing any of the four subbands, I work with a 2D matrix of signed numbers.
In each value of this matrix I want to embed 3 bits of information, i.e., a number from 0 to 7 in decimal, in the 3 least significant bits. However, I don't know how to do that when I deal with negative numbers. How can I modify the coefficients?
First of all, you want to use an Integer Wavelet Transform, so you only have to deal with integers. This allows a lossless transformation between the two spaces without having to round floating-point numbers.
Embedding bits in integers is a straightforward problem for binary operations. Generally, you want to use the pattern
(number AND mask) OR bits
The bitwise AND operation clears out the desired bits of number, which are specified by mask. For example, if number is an 8-bit number and we want to zero out the last 3 bits, we'll use the mask 11111000. After the desired bits of our number have been cleared, we can fill them with the bits we want to embed using the bitwise OR operation.
Next, you need to know how signed numbers are represented in binary. Make sure you read the two's complement section. We can see that if we want to clear out the last 3 bits, we want to use the mask ...11111000, which is always -8. This is regardless of whether we're using 8, 16, 32 or 64 bits to represent our signed numbers. Generally, if you want to clear the last k bits of a signed number, your mask must be -2^k.
Let's put everything together with a simple example. First, we generate some numbers for our coefficient subband and embedding bitstream. Since the coefficient values can take any value in [-510, 510], we'll use 'int16' for the operations. The bitstream is an array of numbers in the range [0, 7], i.e., [000, 111] in binary.
>> rng(4)
>> coeffs = randi(1021, [4 4]) - 511
coeffs =
477 202 -252 371
48 -290 -67 494
483 486 285 -343
219 -504 -309 99
>> bitstream = randi(8, [1 10]) - 1
bitstream =
0 3 0 7 3 7 6 6 1 0
We embed our bitstream by overwriting the necessary coefficients.
>> coeffs(1:numel(bitstream)) = bitor(bitand(coeffs(1:numel(bitstream)), -8, 'int16'), bitstream, 'int16')
coeffs =
472 203 -255 371
51 -289 -72 494
480 486 285 -343
223 -498 -309 99
We can then extract our bitstream by using the simple mask ...00000111 = 7.
>> bitand(coeffs(1:numel(bitstream)), 7, 'int16')
ans =
0 3 0 7 3 7 6 6 1 0
Does anyone have a clue why the following code crashes with an Index exceeds matrix dimensions. error for N_SUBJ = 17 or N_SUBJ = 14, but not, for example, for the values 13, 15 or 16?
N_PICS = 7
COLR = hsv;
N_COLR = size(COLR,1);
COLR = COLR(1+[0:(N_PICS-1)]*round(N_COLR/N_PICS),:);
SUBJ_COLR = hsv;
N_SUBJ_COLR = size(SUBJ_COLR,1);
SUBJ_COLR = SUBJ_COLR(1+[0:(N_SUBJ-1)]*round(N_SUBJ_COLR/N_SUBJ),:);
And also, could somebody please explain to me what it's doing exactly and how it works?
When you say crashing, I assume you mean you are seeing the error Index exceeds matrix dimensions.? If so, the matrix returned by hsv does not have enough rows for the sub-sampling operation you are doing.
SUBJ_COLR = SUBJ_COLR(1+[0:(N_SUBJ-1)]*round(N_SUBJ_COLR/N_SUBJ),:);
selects a subset of the original matrix. 1+[0:(N_SUBJ-1)]*round(N_SUBJ_COLR/N_SUBJ) calculates which row to select, and : means all columns.
The matrix SUBJ_COLR is 64-by-3, so N_SUBJ_COLR is equal to 64. You're indexing into the 64 rows of SUBJ_COLR, and in some cases the particular index is greater than the number of rows, resulting in an Index exceeds matrix dimensions. error. So the question is really: why does this snippet
1+[0:(N_SUBJ-1)]*round(N_SUBJ_COLR/N_SUBJ)
evaluate to numbers greater than 64 for some values of N_SUBJ? This expression can be rewritten as:
1+(0:round(64/N_SUBJ):round(64/N_SUBJ)*(N_SUBJ-1))
or
1:round(64/N_SUBJ):round(64/N_SUBJ)*(N_SUBJ-1)+1
where I've replaced N_SUBJ_COLR by 64 for clarity. This latter expression more clearly shows what the largest index in the vector will be and how it depends on the value of N_SUBJ. You can print out this largest index as a function of N_SUBJ:
N_SUBJ = 1:30;
round(64./N_SUBJ).*(N_SUBJ-1)+1
which returns
ans =
Columns 1 through 13
1 33 43 49 53 56 55 57 57 55 61 56 61
Columns 14 through 26
66 57 61 65 69 55 58 61 64 67 70 73 51
Columns 27 through 30
53 55 57 59
As you can see, there are several values that exceed 64. This nonlinear behavior comes down to the use of round: the step size produced by round(64/N_SUBJ) doesn't shrink fast enough as the multiplier (N_SUBJ-1) grows, so the largest index can end up above 64. One option might be to replace round with floor, but there are probably other ways.
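As a quick check of the floor suggestion (my own snippet, not part of the original script):

% Largest index for each N_SUBJ when floor is used instead of round.
% floor(64/N)*(N-1) <= 64 - 64/N, so the result never exceeds 64.
N_SUBJ = 1:30;
floor(64./N_SUBJ).*(N_SUBJ-1)+1

Every value stays at or below 64, so the indexing no longer runs past the end of the colormap.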
I'm trying to output a Matrix:
M = [1 20 3; 22 3 24; 100 150 2];
Using:
for i=1:3
    fprintf('%f\t%f\t%f\n', M(i), M(i+length(M)), M(i+length(M)*2));
end
And the output is turning out something like:
1 20 3
22 3 24
100 150 2
Which is obviously not great. How can I get it so the fronts of the integers are padded with spaces? Like so:
  1  20   3
 22   3  24
100 150   2
Any ideas?
Thanks!
You can use string formatting to allocate a specific number of characters per displayed number.
For example
fprintf('% 5d\n', 12)
prints 12 in 5 characters, padding the 3 unused leading characters with spaces.
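Applied to the matrix from the question, that might look like this (my sketch; note the transpose, since fprintf consumes its arguments column by column):

M = [1 20 3; 22 3 24; 100 150 2];
fprintf('%5d %5d %5d\n', M');   % each number right-aligned in a field of 5 characters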
You can use num2str (optionally with a format string such as '%f') and apply it to the whole matrix instead of row by row, so that you get the right padding:
disp(num2str(M));
returns
  1   20    3
 22    3   24
100  150    2
This is probably simple but here is my problem.
I have two vectors, starts and ends. Starts are the starting points of sequences of consecutive numbers and ends are the end points of sequences of consecutive numbers. I would like to create a vector which contains these runs.
So for example, say
starts = [2 7 10 18 24]
ends = [5 8 15 20 30]
I would like to create the following vector
ans = [2 3 4 5 7 8 10 11 12 13 14 15 18 19 20 24 25 26 27 28 29 30]
Using starts:ends only uses the first element of each vector.
I would also like to do this without using a (for) loop in order to keep it as fast as possible!
Thanks for reading
Chris
Assuming there's always the same number of start and end points, and they always match (e.g. the nth start corresponds to the nth end), then you can do
cell2mat(arrayfun(@(s,e) (s:e), starts, ends, 'UniformOutput', false))
For a bit more detailed explanation: the arrayfun(@(s,e) (s:e), starts, ends, 'UniformOutput', false) part generates a cell array with n cells, where n is the length of the starts and ends vectors, such that the ith cell contains the sequence starts(i):ends(i). The cell2mat function then fuses those cells into one larger row vector.
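For example, with the vectors from the question this gives (my quick check of the one-liner):

starts = [2 7 10 18 24];
ends   = [5 8 15 20 30];
out = cell2mat(arrayfun(@(s,e) (s:e), starts, ends, 'UniformOutput', false))
% out = 2 3 4 5 7 8 10 11 12 13 14 15 18 19 20 24 25 26 27 28 29 30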
When you're worried about making it fast, preallocate:
starts = [2 7 10 18 24]
ends = [5 8 15 20 30]
a = zeros(1,sum(ends)+numel(ends)-sum(starts));
% or a = zeros(1,sum(ends+1-starts))
j = 1;
for i = 1:numel(ends)
    j2 = j+ends(i)-starts(i);
    a(j:j2) = (starts(i):ends(i));
    j = j2+1;
end