LED Screen recognition in image using MATLAB

I'm trying to detect the screen border in the image (I need the 4 corners).
This is the image:
I used the Hough transform to detect lines and intersection points (the black circles), and this is the result:
Now I need to find the 4 corners or the 4 lines, or anything else that will help me crop the image. What can I do?
Maybe use the screen aspect ratio? But how?
I'm using Matlab.
Thanks.

A naive first approach that would do the trick if and only if you have the same image conditions (background and laptop):
Convert your image to HSV (note that, in HSV, the region inside the screen is the only portion of the image with high Saturation and Value).
Create a mask by hard-thresholding the Saturation and Value channels.
Dilate the mask to connect disconnected regions.
Compute the convex hull to get the mask boundaries.
See a quick result:
Here is the same mask with the original image portion that makes it through the mask:
Here is the code to do so:
img = imread('imagename.jpg');                   % change the image name
hsv = rgb2hsv(img);                              % convert to HSV
mask = hsv(:,:,2) > 0.25 & hsv(:,:,3) > 0.5;     % hard threshold on Saturation and Value
strel_size = round(0.025*max(size(mask)));       % structuring element scaled to the image size
dilated_mask = imdilate(mask, strel('square', strel_size));   % connect disconnected regions
s = regionprops(dilated_mask, 'BoundingBox', 'ConvexHull');
% BoundingBox gives the minimum/maximum white-pixel positions, but the screen is not
% actually an axis-aligned rectangle due to perspective, so the convex hull is used instead.
imshow(uint8(img .* repmat(dilated_mask, [1 1 3])));          % show the masked image
line(s.ConvexHull(:,1), s.ConvexHull(:,2), 'Color', 'red', 'LineWidth', 3);
You may, of course, apply more sophisticated processing to be more accurate and to reduce the convex hull to a rectangle with perspective, but this is just a five-minute attempt to showcase the approach...
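If you then need the four corner points themselves, here is a minimal sketch (my own rough addition, assuming the single region and the s.ConvexHull points produced by the code above): pick the hull vertices that are extremal along the two image diagonals.
hull  = s.ConvexHull;                      % N-by-2 list of [x y] hull vertices
sums  = hull(:,1) + hull(:,2);             % smallest at top-left, largest at bottom-right
diffs = hull(:,1) - hull(:,2);             % smallest at bottom-left, largest at top-right
[~, tl] = min(sums);   [~, br] = max(sums);
[~, bl] = min(diffs);  [~, tr] = max(diffs);
corners = hull([tl tr br bl], :);          % 4-by-2, clockwise from the top-left corner
line([corners(:,1); corners(1,1)], [corners(:,2); corners(1,2)], ...
     'Color', 'yellow', 'LineWidth', 2);   % draw the estimated screen quadrilateral
The four points could then be passed to fitgeotrans/imwarp for a projective rectification and crop, but that goes beyond this quick attempt.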

Related

Region of interest extraction in MATLAB

I am writing MATLAB code to implement a specific filter on an automatically selected grayscale ROI of a forearm image that contains veins. I also uploaded the forearm of a subject (after the foreground has been extracted).
Basically, I have NIR camera images of the forearms of different subjects with different orientations. I wrote code that extracts the foreground grayscale image of the arm, which gave me a white background with the forearm. I used Sobel edge detection to find the edges, and I found the nonzero indices using the find function, which gave me the row and column indices. I need an idea on how to extract the image within (almost 10 pixels of) the edges detected on both sides of the forearm (black-and-white edge image, also uploaded).
Sobel-edge:
Foreground image:
ROI image that I need to extract:
clear all
close all
clc
image = rgb2gray(imread('Subj1.jpg'));   % note: "image" shadows the built-in function of the same name
image1 = ~im2bw(image, 0.1);             % mask: 1 for the background, 0 for the arm
image1 = im2uint8(image1);
foreground = imadd(image1, image);       % arm on a white background
imshow(foreground);
edgesmooth = medfilt2(foreground);       % median filter before edge detection
sobeledge = edge(edgesmooth, 'sobel');
sobeledge = im2uint8(sobeledge);
figure
imshow(sobeledge);
[row, col] = find(sobeledge ~= 0);       % find returns row indices first, then column indices
Starting from the mask image that you make here:
image1=~im2bw(image,0.1);
but inverted, such that the mask is zero for the background and non-zero for the foreground:
image1 = im2bw(image,0.1);
you can use imdilate to expand it by a fixed distance:
se = strel('disk',10); % a disk of radius 10 extends the mask by about 10 pixels on each side
image2 = imdilate(image1,se);
image2 will be like image1, but expanded by 10 pixels in all directions.
imerode does the opposite, it shrinks regions.
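If what you ultimately want is the roughly 10-pixel band just inside the arm's outline (rather than the expanded mask), here is a minimal sketch of one way to get it, reusing the variable names from the question; the 0.1 threshold and the radius of 10 are assumptions to tune:
mask  = im2bw(image, 0.1);                  % non-zero for the forearm, zero for the background
inner = imerode(mask, strel('disk', 10));   % forearm shrunk by roughly 10 pixels
band  = mask & ~inner;                      % ~10-pixel-wide band just inside the outline
roi = image;                                % copy of the grayscale image
roi(~band) = 0;                             % keep only the band, black out everything else
figure, imshow(roi);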

How to create an inverse gray scale?

I have an image with dark blue spots on a black background. I want to convert this to inverse gray scale. By inverse, I mean I want the black background to be white.
When I convert it to grayscale, everything looks black, and it is very hard to differentiate.
Is there a way to do an inverse gray scale where the black background takes the lighter shades?
Or, another preferable option is to represent the blue as white and the black as black.
I am using img = rgb2gray(img); in MATLAB for now.
From the MathWorks site:
IM2 = imcomplement(IM)
Is there a way to do an inverse gray scale where the black background takes the lighter shades?
Based on your image description I created an image sample.png:
img1 = imread('sample.png'); % Read rgb image from graphics file.
imshow(img1); % Display image.
Then, I used the imcomplement function to obtain the complement of the original image (as suggested in this answer).
img2 = imcomplement(img1); % Complement image.
imshow(img2); % Display image.
This is the result:
Or, another preferable option is to represent the blue as white and the black as black.
In this case, the simplest option is to work with the blue channel. Now, depending on your needs, there are two approaches you can use:
Approach 1: Convert the blue channel to a binary image (B&W)
This comment suggests using the logical operation img(:,:,3) > 0, which will return a binary array of the blue channel, where every non-zero-valued pixel is mapped to 1 (white) and the rest of the pixels have a value of 0 (black).
While this approach is simple and valid, binary images have the big disadvantage of losing intensity information. This can alter the perceptual properties of your image. Have a look at the code:
img3 = img1(:, :, 3) > 0; % Convert blue channel to binary image.
imshow(img3); % Display image.
This is the result:
Notice that the round shaped spots in the original image have become octagon shaped in the binary image, due to the loss of intensity information.
Approach 2: Convert the blue channel to grayscale image
A better approach is to use a grayscale image, because the intensity information is preserved.
The imshow function offers the imshow(I,[low high]) overload, which adjusts the color axis scaling of the grayscale image through the DisplayRange parameter.
One very cool feature of this overload is that we can let imshow do the work for us.
From the documentation:
If you specify an empty matrix ([]), imshow uses [min(I(:)) max(I(:))]. In other words, use the minimum value in I as black, and the maximum value as white.
Have a look at the code:
img4 = img1(:, :, 3); % Extract blue channel.
imshow(img4, []); % Display image.
This is the result:
Notice that the round shape of the spots is preserved exactly as in the original image.
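Note that imshow(img4, []) only rescales the values for display; the underlying data is unchanged. If you also need the rescaled image as data (for example to save it), here is a small sketch using mat2gray (the output file name is just an example):
img5 = mat2gray(img1(:, :, 3));        % rescale the blue channel to a [0,1] grayscale image
imwrite(img5, 'blue_channel.png');     % save it; the file name is just a placeholder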

Matlab: ROI subtraction

I'm learning about statistical features of an image. A quote that I'm reading is:
For the first method, which is statistical features of texture, after the image is loaded it is converted to a grayscale image. Then the background is subtracted from the original image. This is done by subtracting any blue-intensity pixels from the image. Finally, the ROI is obtained by finding the pixels which have a non-zero value.
The implementation :
% PREPROCESSING segments the Region of Interest (ROI) for
% statistical features extraction.
% Convert RGB image to grayscale image
g=rgb2gray(I);
% Obtain blue layer from original image
b=I(:,:,3);
% Subtract blue background from grayscale image
r=g-b;
% Find the ROI by finding non-zero pixels.
x=find(r~=0);
f=g(x);
My interpretation:
Is the purpose of subtracting the blue channel here related to the fact that the ROI is on a non-blue background? Like:
But what about real-world imaging, for example an object surrounded by more than one color? What is the best way to extract the ROI in that case?
For example (assuming the bird consists of only two colors, green and black, and ignoring its geometric shape):
What would I do in that case? Also, the picture will be transformed to grayscale, right? Even though there is a black part in the ROI (the bird) itself.
I mean, in the bird case, how can I extract only the green and black parts and remove the remaining colors (which are considered background)?
Background removal in an image is a large and potentially complicated subject in the general case, but what I understand is that you want to take advantage of color information that you already have about your background (correct me if I'm wrong).
If you know the colour to remove, you can for instance:
switch from RGB to Lab color space (Wiki link).
after converting your image, compute the Euclidean distance from the background color (say orange) to every pixel in your image
define a threshold under which the pixels are background
In other words, if the coordinates of a pixel in Lab are close to the coordinates of orange in Lab, this pixel is background. The advantage of using Lab is that the Euclidean distance between points relates to the human perception of color differences.
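As a minimal sketch of that idea in MATLAB (my own illustration, not tested on your images; the orange reference color and the threshold of 25 are assumptions you would tune):
img    = imread('bird.jpg');                 % hypothetical file name
labImg = rgb2lab(img);                       % image in Lab space
labRef = rgb2lab([255 165 0] / 255);         % reference background color (orange), 1-by-3 Lab
labRef = reshape(labRef, 1, 1, 3);
dist   = sqrt(sum((labImg - labRef).^2, 3)); % per-pixel Euclidean distance to the reference
mask   = dist > 25;                          % far from the background color = foreground (ROI)
roi    = img;
roi(repmat(~mask, [1 1 3])) = 0;             % black out the background pixels
imshow(roi);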
I think this should work, please give it a shot or let me know if I misunderstood the question.

Creating intensity band across image border using matlab

I have this image (8 bit, pseudo-colored, gray-scale):
And I want to create an intensity band of a specific width around its border.
I tried erosion and other morphological operations, including filtering, to achieve the desired band, but the actual image intensity changes as soon as I use erosion to cut part of the border.
My code so far looks like:
clear all
clc
x = imread('8-BIT COPY OF EGFP001.tif');
imshow(x);
y  = imerode(x, strel('disk',2));    % slight erosion, to move just inside the border
y1 = imerode(y, strel('disk',7));    % stronger erosion
z  = y - y1;                         % the difference keeps only a band near the border
figure
z(z<30) = 0;                         % suppress low-intensity residue
imshow(z)
The main problem I am encountering with this is that it somewhat changes the intensity of the original image, as follows:
So my question is, how do I create such a band across image border without changing any other attribute of the original image?
Going with what beaker was talking about and what you would like done, I would personally convert your image into binary where false represents the background and true represents the foreground. When you're done, you then erode this image using a good structuring element that preserves the roundness of the contours of your objects (disk in your example).
The output of this would be the interior of the large object that is in the image. What you can do is use this mask and set these locations in the image to black so that you can preserve the outer band. As such, try doing something like this:
%// Read in image (directly from StackOverflow) and pseudo-colour the image
[im,map] = imread('http://i.stack.imgur.com/OxFwB.png');
out = ind2rgb(im, map);
%// Threshold the grayscale version
im_b = im > 10;
%// Create structuring element that removes border
se = strel('disk',7);
%// Erode thresholded image to get final mask
erode_b = imerode(im_b, se);
%// Duplicate mask in 3D
mask_3D = cat(3, erode_b, erode_b, erode_b);
%// Find indices that are true and black out result
final = out;
final(mask_3D) = 0;
figure;
imshow(final);
Let's go through the code slowly. The first two lines take your PNG image, which contains a grayscale image and a colour map and we read both of these into MATLAB. Next, we use ind2rgb to convert the image into its pseudo-coloured version. Once we do this, we use the grayscale image and threshold the image so that we capture all of the object pixels. I threshold the image with a value of 10 to escape some quantization noise that is seen in the image. This binary image is what we will operate on to determine those pixels we want to set to 0 to get the outer border.
Next, we declare a structuring element that is a disk of a radius of 7, then erode the mask. Once I'm done, I duplicate this mask in 3D so that it has the same number of channels as the pseudo-coloured image, then use the locations of the mask to set the values that are internal to the object to 0. The result would be the original image, but having the outer contours of all of the objects remain.
The result I get is:

How to identify boundaries of a binary image to crop in matlab?

How to identify boundaries of a binary image to crop in matlab?
I.e., the input binary image has no noise and contains only one black object on a white background.
You can use the edge command in MATLAB.
E = edge(I);
Here, I is the input grayscale or binary image. This will return a binary image containing only the edges.
This can provide further assistance:
http://www.mathworks.com/help/images/ref/edge.html
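For example, here is a minimal sketch of the cropping step (I is a placeholder for your image): take the bounding extents of the non-zero edge pixels and pass them to imcrop.
E = edge(I);                         % binary edge map of the input image I
[r, c] = find(E);                    % row/column indices of the edge pixels
rect = [min(c), min(r), max(c) - min(c), max(r) - min(r)];   % [x y width height]
cropped = imcrop(I, rect);           % crop the original image to the object's extents
imshow(cropped);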
If your image is just black-and-white and has a single object, you can likely make use of the Flood fill algorithm, for which Matlab has built-in support!
Try the imfill function (ref).
This should give you the extents of the object, which would allow you to crop at will.
You can also invert the image, then use regionprops to extract all of the properties for the separate objects. You need to invert the image because regionprops assumes that the objects are white while the background is black. A good thing about this approach is that it generalizes to multiple objects, and you only need a few lines of code to do it.
As an example, let's artificially create a circle in the centre of an image that is black on a white background as you have suggested. Let's assume this is also a binary image.
im = true(200, 200);
[X,Y] = meshgrid(1:200, 1:200);
ind = (X-100).^2 + (Y-100).^2 <= 1000;
im(ind) = false;
imshow(im);
This is what your circle will look like:
Now let's go ahead and invert this so that it's a white circle on black background:
imInvert = ~im;
imshow(imInvert);
This is what your inverted circle will look like:
Now, invoke regionprops to find properties of all of the objects in our image. In this case, there should only be one.
s = regionprops(imInvert, 'BoundingBox');
As such, s contains a structure that is 1 element long, and has a single field called BoundingBox. This field is a 4 element array that is structured in the following way:
[x y w h]
x denotes the column (horizontal) co-ordinate and y denotes the row (vertical) co-ordinate of the top-left corner of the bounding box, while w and h are the width and height of the rectangle. Our output of the above code is:
s =
BoundingBox: [68.5000 68.5000 63 63]
This means that the top-left corner of our bounding box is located at (x,y) = (68.5,68.5), and has a width and height of 63 each. Therefore, the span of our bounding box goes from rows (68.5,131.5) and columns (68.5,131.5). To make sure that we have the right bounding box, you can draw a rectangle around our shape by using the rectangle command.
imshow(im);
rectangle('Position', s.BoundingBox);
This is what your image will look like with a rectangle drawn around the object. As you can see, the bounding box given from regionprops is the minimum spanning bounding box required to fully encapsulate the object.
If you wish to crop the object, you can do the following:
imCrop = imcrop(imInvert, s.BoundingBox);
This should give you the cropped image that is defined by the bounding box that we talked about earlier.
Hope this is what you're looking for. Good luck!