Based on the comments on this answer https://stackoverflow.com/a/35791309/187650 I am asking a new question, as the issue is still not clear to me.
I have an RGB bitmap of 9 px (width) x 17 px (height).
The goal is to encode it as JPEG with 3 components, default quality (50), and the chroma subsampled 2:1 (halved) in the vertical direction.
For vertically halved chroma subsampling I should use an MCU of 8x16 (width x height).
Steps I do:
convert the RGB image to YCbCr (3 components: Y, Cb, Cr).
enlarge the 2-dimensional arrays of the Y, Cb and Cr components so that I can create full 8x16 MCU blocks.
split into MCU blocks of 8x16 --> get Y0, Y1, Cb, Cr 8x8 blocks
feed the Y0, Y1, Cb, Cr 8x8 blocks into the DCT
...
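A minimal NumPy sketch of steps 3-4 (toy data only; the real planes come from the padded components): one 8x16 MCU, stored as 16 rows x 8 columns, is cut into two 8x8 luma blocks, and the chroma is averaged 2:1 vertically into one 8x8 block each:

```python
import numpy as np

# one padded MCU: 16 rows x 8 cols (height x width) per component
Y  = np.arange(16 * 8, dtype=float).reshape(16, 8)
Cb = np.ones((16, 8))
Cr = np.ones((16, 8))

# two 8x8 luma blocks: top and bottom halves of the MCU
Y0, Y1 = Y[:8, :], Y[8:, :]

# vertical 2:1 subsampling: average each pair of rows -> one 8x8 block
Cb_sub = (Cb[0::2, :] + Cb[1::2, :]) / 2
Cr_sub = (Cr[0::2, :] + Cr[1::2, :]) / 2
```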
In step 2 I have, for each of Y, Cb and Cr, a 2-dimensional array of size [17, 9].
I calculate how many rows and columns I must add to get full 8x16 MCU blocks, and as a result I create a new array of size [32, 16] containing the original values of the smaller array.
Now the problem: how do I pad (fill) the extra values, since the new array is bigger?
padding with zeros (0s)?
padding with ones (1s)?
padding with the last row and last column? (pad the rows first or the columns first?)
padding with another value?
Original bitmap:
Result padding with zeros:
Result padding with ones:
As you can see, the result has a green color at the bottom when padding with 0 or 1.
Any advice is welcome.
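For what it's worth, replicating the last row and column (option 3) keeps the padded region's DC level close to the real image and avoids introducing an artificial color. A NumPy sketch, with a toy array standing in for the real Y/Cb/Cr planes:

```python
import numpy as np

# toy 17x9 (rows x cols) plane standing in for the Y, Cb or Cr component
Y = np.arange(17 * 9, dtype=np.uint8).reshape(17, 9)

# rows up to a multiple of 16, columns up to a multiple of 8
pad_rows = (-Y.shape[0]) % 16    # 17 -> 15 extra rows -> 32 total
pad_cols = (-Y.shape[1]) % 8     #  9 -> 7 extra cols  -> 16 total

# replicate the last row/column instead of writing zeros or ones
Y_padded = np.pad(Y, ((0, pad_rows), (0, pad_cols)), mode='edge')
```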
Related
Suppose we have a 5x5 image and a 3x3 kernel with
stride 2 and padding on. What is the size of the output image after passing through a convolution layer in a neural network?
The other answer is correct, but here is a drawing which visualizes why this formula holds:
I: Image size, K: Kernel size, P: Padding, S: Stride
I will explain the formula for a single direction only (shifting the filter to the right), since it's the same principle for the other direction.
Imagine, you place the kernel (the filter) in the upper left corner of the padded image.
Then there are I-K+2P pixels left over on the right hand side. If your stride is S, you will be able to place the kernel on this remaining part at floor( (I-K+2*P)/S ) positions. You can verify that you need "floor" for an image which has 4x4 pixels. You have to add one for the initial position of the kernel, to get the total number of kernel-positions.
Thus there are floor( (I-K+2*P)/S ) + 1 positions in total - which is the formula for your output size.
Hope that helps.
Let's consider a more general case:
The input is an image of size I*I, padded with P pixels on each side. The kernel has size K*K, and the stride is S. Then the output has size O*O, which can be computed with a simple formula:
O = [(I + 2*P - K)/S] + 1; where [] denotes the floor function.
So your answer is 3*3, since O = [(5 + 2*1 - 3)/2] + 1 = 3.
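The formula above translates directly into a one-line helper (integer // already floors for non-negative values):

```python
def conv_output_size(i, k, p, s):
    """O = floor((I + 2*P - K) / S) + 1 per spatial direction."""
    return (i + 2 * p - k) // s + 1

# the 5x5 image, 3x3 kernel, padding 1, stride 2 case from the question
out = conv_output_size(5, 3, 1, 2)
```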
I was working with this picture in MATLAB to detect the color of the circles. It is a 512 by 512 JPEG image.
I am finding the centers of the circles using imfindcircles, then taking the R, G, B components of some points near the center of each circle to detect its color.
But I am confused because, for both the red and white circles, I find that the R, G, B components are the same: [239 227 175].
I am new to image processing, so can anyone explain what's actually happening here?
The centers output of imfindcircles gives the coordinates of the centers in x/y coordinates, but you need to index into your image using row/column coordinates, so be sure to swap the two columns when indexing into the image:
centers = imfindcircles(IM);
% round to integer indices, note the swapped column order
center1 = IM(round(centers(1,2)), round(centers(1,1)), :);
center2 = IM(round(centers(2,2)), round(centers(2,1)), :);
Presumably you are not doing this, and are instead sampling pixels from the background, which would obviously give the same RGB values for the centroids.
Update
It appears that the actual issue is that you are casting the location of the centroids to uint8 to get an integer value you can use as an index. The maximum integer representable by uint8 is 255, and the number of rows and columns in your image is larger than 255 (and so are some centroid coordinates), so they saturate to 255, resulting in the wrong pixel being sampled.
Rather than using uint8, just use round to round the centroids to their nearest integers:
cX = round(centers(n_c,1));
cY = round(centers(n_c,2));
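The failure mode is easy to reproduce outside MATLAB. A sketch mimicking MATLAB's saturating uint8 cast in Python (the coordinate value is hypothetical):

```python
import numpy as np

def matlab_uint8(x):
    """Mimic MATLAB's uint8(): round, then saturate to [0, 255]."""
    return int(np.clip(np.round(x), 0, 255))

center_col = 300.7               # hypothetical centroid coordinate > 255
bad = matlab_uint8(center_col)   # saturates to 255 -> wrong pixel sampled
good = round(center_col)         # 301 -> correct index
```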
I have a black and white image. The image is 280x420 pixels, and each pixel is a value in the interval [0,1]. I want to divide the picture into blocks of size 10-by-20, which gives 28x21 (= 588) blocks.
((In order to create the input vectors, each block should be written
as a (200-by-1) vector, where each column of the block is appended
to the previous one. In this way we will have 588 input vectors of size
200.))
How can I do that, and what code should I use?
Thanks
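One way to sketch this (here in NumPy, with a random array standing in for the real image): reshape into a block grid, then flatten each block column by column, as the question asks:

```python
import numpy as np

img = np.random.rand(280, 420)   # stand-in for the 280x420 image
bh, bw = 10, 20                  # block height and width

# cut into 28 x 21 = 588 blocks of size 10x20
blocks = img.reshape(28, bh, 21, bw).swapaxes(1, 2).reshape(-1, bh, bw)

# flatten each block column by column (MATLAB column-major order),
# giving 588 vectors of length 200
vectors = blocks.transpose(0, 2, 1).reshape(588, bh * bw)
```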
Working on 2D rectangular nesting. I need to find the utilization percentage of the material. Assuming I have the length, breadth, and bottom-left position of each rectangle, what is the best way to determine the boundary-cut utilization?
Objective: to find the area under the red line.
Sample images are attached to depict what I have done and what I need.
What I have done
What I need
Another example image of rectangles packed with allowance
If you're interested in determining the total "area" underneath the red line, one suggestion I have is if you have access to the Image Processing Toolbox, simply create a binary image where we draw all of the rectangles on the image at once, fill all of the holes, then to determine the area, just determine the total sum of all of the binary "pixels" in the image. You said you have the (x,y) positions of the bottom-left corner of each rectangle, as well as the width and height of each rectangle. To make this compatible in an image context, the y axis is usually flipped so that the top-left corner of the space is the origin instead of the bottom-left. However, this shouldn't affect our analysis as we are simply reflecting the whole 2D space downwards.
Therefore, I would start with a blank image that is the same size as the grid you are dealing with, then writing a loop that simply sets a rectangular grid of coordinates to true for each rectangle you have. After, use imfill to fill in any of the holes in the image, then calculate the total sum of the pixels to get the area. The definition of a hole in an image processing context is any black pixels that are completely surrounded by white pixels. Therefore, should we have gaps that are surrounded by white pixels, these will get filled in with white.
Therefore, assuming that we have four separate variables of x, y, width and height that are N elements long, where N is the number of rectangles you have, do something like this:
N = numel(x); %// Determine total number of rectangles
rows = 100; cols = 200; %// Define dimensions of grid here
im = false(rows, cols); %// Declare blank image
%// For each rectangle we have...
for idx = 1 : N
%// Set interior of rectangle at location all to true
im(y(idx)+1:y(idx)+height(idx), x(idx)+1:x(idx)+width(idx)) = true;
end
%// Fill in the holes
im_filled = imfill(im, 'holes');
%// Determine total area
ar = sum(im_filled(:));
The indexing in the for loop:
im(y(idx)+1:y(idx)+height(idx), x(idx)+1:x(idx)+width(idx)) = true;
is a bit tricky. Bear in mind that I'm assuming y accesses the rows of the image and x accesses the columns. I'm also assuming that x and y are 0-based, so the origin is at (0,0). Because arrays and matrices in MATLAB are indexed starting at 1, we need to offset the coordinates by 1, so the row index begins at y(idx)+1. The row index ends at y(idx)+height(idx): starting from y(idx)+1 and spanning height(idx) rows would nominally end at y(idx)+height(idx)+1-1, and the +1 from MATLAB's 1-based indexing cancels the -1 from the 0-based coordinates. Take for example a horizontal line of width 20 starting at x = 0: it covers x = 0 up to x = 19, which in 1-based indexing is columns 1 through 20. The same reasoning applies to the x coordinate and the width.
Once we draw all of the rectangles in the image, we use imfill to fill up the holes, then we can sum up the total area by just unrolling the whole image into a single vector and invoking sum. This should (hopefully) get what you need.
Now, if you want to find the area without the filled-in holes (I suspect this is what you actually need), then you can skip the imfill step. Simply apply the sum to im instead of im_filled:
ar = sum(im(:));
This will sum up all of the "white" pixels in the image, which is effectively the area. I'm not sure what you're actually after, so use one or the other depending on your needs.
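The same draw-fill-sum idea can be sketched without any toolbox, e.g. in Python with a small border flood fill standing in for imfill (rectangle data here is hypothetical: four rectangles forming a ring around a 2x2 hole):

```python
import numpy as np
from collections import deque

def fill_holes(im):
    """Mimic imfill(im, 'holes'): flood-fill the background from the
    image border; any background not reached is a hole and is set True."""
    rows, cols = im.shape
    outside = np.zeros_like(im)
    q = deque()
    for r in range(rows):
        for c in range(cols):
            if (r in (0, rows - 1) or c in (0, cols - 1)) and not im[r, c]:
                outside[r, c] = True
                q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols \
                    and not im[rr, cc] and not outside[rr, cc]:
                outside[rr, cc] = True
                q.append((rr, cc))
    return im | ~outside

# hypothetical rectangles as (x, y, width, height), 0-based, forming a
# 4x4 ring that encloses a 2x2 hole
rects = [(0, 0, 4, 1), (0, 3, 4, 1), (0, 1, 1, 2), (3, 1, 1, 2)]
im = np.zeros((6, 6), dtype=bool)
for x, y, w, h in rects:
    im[y:y + h, x:x + w] = True   # draw rectangle interior

area_no_holes = int(im.sum())            # 12: rectangles only
area_filled = int(fill_holes(im).sum())  # 16: 2x2 hole filled in
```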
Boundary-cut area without using the Image Processing Toolbox.
The detailed question description and answer can be found here.
This solution is applicable only to rectangular parts.
I need to transform thermographic images into grayscale to detect temperature variation using MATLAB.
I tried the built-in function rgb2gray(), but it's inefficient for my image database, so I tried to derive a formula myself, using the correlation between colours and temperature and then interpolating. But will it be possible to get an efficient mathematical formula that way?
My formula:
i = 0.29*rgb_img(:,:,1) + 0.59*rgb_img(:,:,2) + 0.11*rgb_img(:,:,3);
and
i = 0.6889*rgb_img(:,:,1) + 0.5211*rgb_img(:,:,2) + 0.4449*rgb_img(:,:,3) + 72.8;
but neither of these helped.
Sample images:
http://www.google.com/imgres?imgurl=http://www.breastthermography.com/images/AP-Breast-Case-2.jpg&imgrefurl=http://www.breastthermography.com/case_studies.htm&h=199&w=300&tbnid=48Yto8Y8RtRPjM:&zoom=1&docid=PX1nchTaFQpa3M&ei=ftVdVIatNZCQuATptoLgCA&tbm=isch
There is a small vertical multicolor bar at the right edge of the images. It represents the palette that was used for the gray-to-color transformation. If you have no access to the exact formula for this transform, you can try an approximation: extract a 1-pixel-wide vertical line from this bar into an array, from bottom to top.
If the resulting array's length is not 256, adjust the range (interpolate if needed). Now you have a map: Color -> index of this color = 8-bit gray value. Use it to construct the grayscale image. If you need absolute temperature, you can adjust the range to 280-350 (in 0.1 degrees) or another suitable range.
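That lookup idea can be sketched as follows (NumPy; the palette here is a plain gray ramp standing in for the real thermal palette sampled from the color bar):

```python
import numpy as np

# hypothetical palette: a (256, 3) array sampled from the color bar,
# bottom to top, so that row index = 8-bit gray value
palette = np.stack([np.linspace(0, 255, 256)] * 3, axis=1)

def color_to_gray(pixel, palette):
    """Map one RGB pixel to the index of its nearest palette color."""
    d = np.sum((palette - np.asarray(pixel, dtype=float)) ** 2, axis=1)
    return int(np.argmin(d))
```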
The formula used in JFIF is
Y = 0.299R + 0.587G + 0.114B
That's fairly close to what you have, so I do not know what you mean when you say that what you were using did not work.
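For reference, that JFIF formula as a one-liner in plain Python, easy to compare against your first attempt:

```python
def jfif_luma(r, g, b):
    """JFIF / ITU-R BT.601 luma from RGB."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```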