Histogram normalization for normalizing background changes - matlab

I recorded a sequence of depth images using Kinect v2, but the background brightness is not constant: it keeps changing from dark to light and back again.
So I was thinking of using histogram normalization on each image in the sequence to normalize the background to the same level. Can anyone please tell me how I can do this?

MATLAB has a function for histogram matching, and the MathWorks site has some great examples too.
Just use any frame as the reference (I suggest using the first one, but there is no strong reason to prefer it), and keep it for all the remaining frames. If you want to decrease processing time you can also try lowering the number of bins. A uint8 image usually gets 256 bins, but as you'll see in the link, reducing the count still produces favorable results.
I don't know if Kinect images are RGB or grayscale; for this example I'm assuming they are grayscale.
kinect_images = Depth;
num_frames = size(kinect_images,3); % use dimension 4 here if the frames are RGB rather than grayscale
num_of_bins = 32;
% imhistmatch is a recent addition to MATLAB; use this variable to
% indicate whether or not you have it
I_have_imhistmatch = true;
% output variable
equalized_images = cast(zeros(size(kinect_images)), class(kinect_images));
% store the first frame as the reference
ref_image = kinect_images(:,:,1); % if RGB you may need (:,:,:,1)
ref_hist = imhist(ref_image);
% go through every frame and match its histogram to the reference
for ii = 1:num_frames
    if (I_have_imhistmatch)
        % use this with newer versions of MATLAB
        equalized_images(:,:,ii) = imhistmatch(kinect_images(:,:,ii), ref_image, num_of_bins);
    else
        % use this line with older versions that don't have imhistmatch
        equalized_images(:,:,ii) = histeq(kinect_images(:,:,ii), ref_hist);
    end
end
implay(equalized_images)


Plot true color Sentinel-2A imagery in Matlab

Through a combination of non-MATLAB/non-native tools (GDAL) as well as native tools (geoimread), I can ingest Sentinel-2A data either as individual bands or as an RGB image, having employed gdal_merge. I'm stuck at a point where using
imshow(I, [])
produces a black image, with apparently no signal. The range of intensity values in the image is 271 - 4349. I know that there is a good signal in the image because when I do:
bit_depth = 2^15;
I = swapbytes(I);
[I_indexed, color_map] = rgb2ind(I, bit_depth);
I_double = im2double(I_indexed, 'indexed');
ax1 = figure;
colormap(ax1, color_map);
image(I_double)
i.e. index the image, collect a colormap, set the colormap, and then call the image function, I get a likeness of the region I'm exploring (albeit very strangely colored).
I'm currently considering whether I should try to:
Find a low-level description of Sentinel-2A data and implement the scaling/correction myself
Use a toolbox, possibly this one.
Possibly adjust output settings in one of the earlier steps involving GDAL
Comments or suggestions are greatly appreciated.
A basic scaling scheme is:
% convert image to double
I_double = im2double(I);
% scale the intensity range to the full uint16 range
max_intensity = max(I_double(:));
min_intensity = min(I_double(:));
range_intensity = max_intensity - min_intensity;
I_scaled = (2^16 - 1) .* ((I_double - min_intensity) ./ range_intensity);
% display
imshow(uint16(I_scaled))
noting the importance of casting from double to uint16 for imshow.
A couple of points...
You mention that I is an RGB image (i.e. N-by-M-by-3 data). If this is the case, the [] argument to imshow will have no effect; that automatic scaling of the display only applies to grayscale images.
Given the range of intensity values you list (271 to 4349), I'm guessing you are dealing with a uint16 data type. Since this data type has a maximum value of 65535, your image data only covers about the lower 16th of this range. This is why your image looks practically black. It also explains why you can see the signal with your given code: you apply swapbytes to I before displaying it with image, which in this case will shift values into the higher intensity ranges (e.g. swapbytes(uint16(4349)) gives a value of 64784).
In order to better visualize your data, you'll need to scale it. As a simple test, you'll probably be able to see something appear by just scaling it by 8 (to cover a little more than half of your dynamic range):
imshow(8.*I);
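If the data is an RGB uint16 array, another option is a percentile-based contrast stretch per channel. A minimal sketch, assuming I is M-by-N-by-3 uint16 (by default stretchlim picks limits that saturate the darkest and brightest 1% of each channel):
% stretch each channel to full scale based on its histogram
I_stretched = imadjust(I, stretchlim(I));
imshow(I_stretched)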

Saving kinect depth frame (uint16) using MATLAB but why is it too dark?

Recently I have been working with the Kinect in MATLAB. I capture the depth frame, which is in uint16 format, but when I display or save it using MATLAB commands like imshow and imwrite respectively, the image appears far too dark. When I set the display range, or convert it to uint8, it becomes brighter. But I want to save it in a brighter form without converting to uint8, e.g. by scaling the range between 0 and 4500.
vid = videoinput('kinect',1);
vid2 = videoinput('kinect',2);
vid.FramesPerTrigger = 1;
vid2.FramesPerTrigger = 1;
% Set the trigger repeat for both devices to 200, in order to acquire
% 201 frames from both the color sensor and the depth sensor.
vid.TriggerRepeat = 200;
vid2.TriggerRepeat = 200;
% Configure manual triggering for both sensors.
triggerconfig([vid vid2],'manual');
% Start both video objects.
start([vid vid2]);
trigger([vid vid2])
[imgDepth, ts_depth, metaData_Depth] = getdata(vid2);
f = imgDepth;
figure, imshow(f);
figure, imshow(f,[0 4500]);
imwrite(f,'C:\Users\sufi\Desktop\matlab_kinect\Data_image\output\depth\fo.tiff');
stop([vid vid2]);
When I set the display range:
Without setting the display range:
The values in a 16-bit image range from 0 to 65535.
If we take a look at the histogram of your image:
We see that the max value is 7995, but that's just a few outliers; most information is somewhere between 700 and 4300.
So all our values sit in 5-10% of the available range. That makes the image look very dark.
In order to make it look better for humans we have to normalize it (some image viewers do this automatically).
So in order to get a nicer image into your PowerPoint presentation you have two options:
a) display it in an image viewer that can render it nicely and take a screenshot
b) normalize the image in MATLAB and save it to a file.
You can further improve the image by removing those outliers before normalization.
One simple way is to scale the image with the following formula:
pixel_value = pixel_value / 4500 * 65535
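In MATLAB that might look like the following sketch, assuming f is the uint16 depth frame from getdata and 4500 is the depth cap mentioned above:
% stretch the 0-4500 depth range over the full uint16 range before saving
f_scaled = uint16( double(f) ./ 4500 .* 65535 );
imwrite(f_scaled, 'fo_scaled.tiff');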
If you want to see the exact image that you get from uint8, I guess the following steps will work for you.
Probably while casting the image to uint8, MATLAB first clips the values above some threshold, let's say 4095 = 2^12 - 1 (I'm not sure about the value), and then right-shifts (4 shifts in our case) to bring it into the range 0-255.
So I guess casting the uint8 value to uint16 and multiplying it by 256 will give you back the same image:
pixel_uint16_value = uint16(pixel_uint8_value) * 256; % or bitshift(uint16(pixel_uint8_value), 8)
% note: cast to uint16 before multiplying, or the product saturates at 255

Techniques for detecting small blobs in noisy images

I am trying to write a program that uses computer vision techniques to detect (and track) tiny blobs in a stream of very noisy images. The image stream comes from a dual X-ray imaging setup, which outputs left and right views (of different sizes because of different collimation). My data is of two types: one set of images is not so noisy, which I am just using to try different techniques with, and the other set is noisier, and this is where the detection ultimately needs to work. The image stream is at 60 Hz. This is an example of a raw image from the X-ray imager:
Here are some cropped out samples of the regions of interest. The blobs that need to be detected are the small black spots near the center of the image.
Initially I started off with simple contour/blob detection techniques in OpenCV, which were not very helpful. Eventually I moved on to techniques such as "opening" the image using morphological operators and subsequently performing Laplacian of Gaussian (LoG) blob detection to find areas of interest. This gave me better results for the low-noise versions of the images, but it fails on the high-noise ones: it gives me too many false positives. Here is a result from a low-noise image (please note the input image was inverted).
The code for my current LoG based approach in MATLAB goes as below:
while ~isDone(videoReader)
    frame = step(videoReader);
    roi_frame = imcrop(frame, [660 410 120 110]);
    I_roi = rgb2gray(roi_frame);
    I_roi = imcomplement(I_roi);
    I_roi = wiener2(I_roi, [5 5]);
    background = imopen(I_roi, strel('disk',3));
    I2 = imadjust(I_roi - background);
    K = imgaussfilt(I2, 5);
    level = graythresh(K);
    bw = im2bw(K, level); % threshold the smoothed image with Otsu's level
    sigma = 3;
    % Filter image with LoG
    I = double(bw);
    h = fspecial('log', sigma*30, sigma);
    Ifilt = -imfilter(I, h);
    % Threshold for points of interest
    Ifilt(Ifilt < 0.001) = 0;
    % Dilate to obtain local maxima
    Idil = imdilate(Ifilt, strel('disk',50));
    % This is the final image
    P = (Ifilt == Idil) .* Ifilt;
end
Is there any way I can improve my current detection technique to make it work for images with a lot of background noise? Or are there techniques better suited for images like this?
The approach I would take (a sketch follows below):
-Average background subtraction
-Aggressive Gaussian smoothing (this filter should be shaped based on your target object; off the top of my head I think you want the sigma about half the smallest cross-section of your object, but you may want to fiddle with this). Basically the goal is to blur the noise as much as possible without completely losing your target objects (based on shape and size).
-Edge detection. Try to be specific to the object if possible (basically, look at what the object's edge looks like after Gaussian smoothing and set your edge detection to look for that width and contrast shift).
-Consider running a closing operation here.
-Search the whole image for islands (fully enclosed regions) and filter based on size and then on shape.
My hunch is that despite the incredibly low signal-to-noise ratio, your granularity of noise is hopefully significantly smaller than your object size. (If your noise has both equivalent contrast and roughly the same size as your object, you are sunk and need to re-evaluate your acquisition, IMO.)
Another note based on your speed needs: extreme processing savings can be made by remembering last known positions, searching locally, and knowing where new targets can enter the image from.
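A minimal MATLAB sketch of that pipeline. Everything here is an assumption to tune: frames is taken to be an H-by-W-by-N grayscale stack, and the sigma, radii, and area/shape thresholds are guesses based on blob size:
% average background subtraction over the stack
bg = mean(double(frames), 3);
I = mat2gray(double(frames(:,:,1)) - bg); % example: first frame
% aggressive Gaussian smoothing (sigma ~ half the smallest blob cross-section)
Is = imgaussfilt(I, 2);
% edge detection, then a closing to seal small gaps in the contours
E = imclose(edge(Is, 'canny'), strel('disk', 2));
% islands = fully enclosed regions; fill them, then filter by size and shape
L = imfill(E, 'holes');
stats = regionprops(L, 'Area', 'Eccentricity', 'Centroid');
keep = [stats.Area] > 10 & [stats.Area] < 200 & [stats.Eccentricity] < 0.9;
blobs = stats(keep); % centroids of candidate blobs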

comparing images pixel by pixel

I am working on some images. I am given an abc.tif image (a color image), which I read as follows:
Mat test_image=imread("abc.tif",IMREAD_UNCHANGED);
I perform some operations on it and convert it into a binary image (using a threshold) containing only the two values 0 and 255, stored in the image img, which is created as follows:
Mat img(584,565,CV_8UC1); // (so now img contains only 0 and 255)
I save this image using imwrite("myimage.jpg",img);
I want to compare the myimage.jpg image with another binary image, manual.gif, pixel by pixel to check whether one image is a duplicate of the other. The problem is that OpenCV does not support the .gif format, so I need to convert it to .jpg, and because of that the image changes; now the two images may be judged different even though they are the same. What should I do?
Actually I am working on retinal blood vessel segmentation and these images are found in the DRIVE database.
I am given these images. Original image:
I perform some operations on it and extract blood vessels, then create a binary image and store it in the Mat variable img as discussed earlier. Now I have another image (a .gif image) which I cannot load, as shown below:
Now I want to compare my img image (binary) with that given .gif image (above) which I cannot load.
Use ImageMagick to convert your .gif to .png in batch mode. You could also convert it on the fly using a system("convert img.gif img.png") call.
I'm not sure pixel comparison will give you a good result: an offset shift of the same image will result in a bad match.
EDIT: As an idea, maybe calculating the centers of gravity and shifting/rotating both images to the same origin may help here; a sketch of that idea follows below.
Consider using moments, Freeman chain codes, or other more robust shape comparison methods.
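A minimal sketch of the center-of-gravity idea in MATLAB, assuming bw1 and bw2 are same-size logical masks (e.g. your segmentation and the loaded ground truth):
% align the second mask to the first by centroid translation, then compare
[r1, c1] = find(bw1);
[r2, c2] = find(bw2);
shift = round([mean(r1) - mean(r2), mean(c1) - mean(c2)]);
bw2_aligned = circshift(bw2, shift);
match_ratio = nnz(bw1 == bw2_aligned) / numel(bw1);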
First off, you will want the images in the same format as each other. As @Adi mentioned in the comments, .jpg is lossy, so it shouldn't be used until after any processing is done. See: MATLAB - image conversion.
You will also want the images to be the same size. You can compare their sizes using the size function and then pad the smaller one to make the dimensions match. The padding can always be removed later; just watch how the padding is added so as not to affect your operations.
You will also need to look into rotations; consider putting the image into the frequency domain and rotating it to align the spectra.
Below is a simple pixel-comparison code. Pixel comparison is not particularly accurate for this: even the slightest misalignment will cause false negatives or false positives.
%read image
test_image1 = imread('C:\Users\Public\Pictures\Sample Pictures\Desert.jpg');
test_image2 = imread('C:\Users\Public\Pictures\Sample Pictures\Hydrangeas.jpg');
%convert to gray scale
gray_img1 = rgb2gray(test_image1);
gray_img2 = rgb2gray(test_image2);
% threshold image to put all values greater than 125 to 255 and all values
% below 125 to 0
binary_image1 = gray_img1 > 125;
binary_image2 = gray_img2 > 125;
% get the binary image size to allow pixel-by-pixel checking
[row, col] = size(binary_image1);
% initialize the counters for similar and different pixels to zero
similar = 0;
different = 0;
% two loops to scan through all rows and columns of the image
for kk = 1 : row
    for yy = 1 : col
        % use an if statement with the isequal function to compare corresponding
        % pixel values and count them depending on the logical output of isequal
        if isequal(binary_image1(kk,yy), binary_image2(kk,yy))
            similar = similar + 1;
        else
            different = different + 1;
        end
    end
end
% calculate the percentage difference between the images and print it
total_pixels = row*col;
difference_percentage = (different / total_pixels) * 100;
fprintf('%f%% difference between the compared images \n%d pixels being different to %d total pixels\n', difference_percentage, different, total_pixels )
% simple subtraction of the two images
diff_image = binary_image1 - binary_image2;
%generate figure to show the original gray and corresponding binary images
%as well as the subtraction
figure
subplot(2,3,1)
imshow(gray_img1);
title('gray img1');
subplot(2,3,2)
imshow(gray_img2);
title('gray img2');
subplot(2,3,4)
imshow(binary_image1);
title('binary image1');
subplot(2,3,5)
imshow(binary_image2);
title('binary image2');
subplot(2,3,[3,6])
imshow(diff_image);
title('diff image');
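As a side note, the double loop above can be replaced by a single vectorized comparison, which is shorter and much faster:
% vectorized equivalent of the pixel-counting loops
different = nnz(binary_image1 ~= binary_image2);
similar = numel(binary_image1) - different;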

How to improve image quality in Matlab

I'm building an "Optical Character Recognition" system.
So far the system is capable of identifying licence plates of good quality, without any noise.
What I want at the next level is to be able to identify licence plates of poor quality, degraded for various reasons.
For example, let's look at the next plate:
As you see, the numbers do not show clearly, because of light reflections or something else.
So my question: how can I improve the image quality, so that when I move to a binary image the numbers will not fade away?
Thanks in advance.
We can try to correct for the lighting effect by fitting a linear plane over the image intensities, which approximates the average level across the image. By subtracting this shading plane from the original image, we can attempt to normalize lighting conditions across the image.
For color RGB images, simply repeat the process on each channel separately, or even apply it in a different colorspace (HSV, L*a*b*, etc.).
Here is a sample implementation:
function img = correctLighting(img, method)
    if nargin<2, method = 'rgb'; end
    switch lower(method)
        case 'rgb'
            %# process R,G,B channels separately
            for i=1:size(img,3)
                img(:,:,i) = LinearShading( img(:,:,i) );
            end
        case 'hsv'
            %# process intensity component of HSV, then convert back to RGB
            HSV = rgb2hsv(img);
            HSV(:,:,3) = LinearShading( HSV(:,:,3) );
            img = hsv2rgb(HSV);
        case 'lab'
            %# process luminosity layer of L*a*b*, then convert back to RGB
            LAB = applycform(img, makecform('srgb2lab'));
            LAB(:,:,1) = LinearShading( LAB(:,:,1) ./ 100 ) * 100;
            img = applycform(LAB, makecform('lab2srgb'));
    end
end

function I = LinearShading(I)
    %# create X-/Y-coordinate values for each pixel
    [h,w] = size(I);
    [X,Y] = meshgrid(1:w, 1:h);
    %# fit a linear plane over the 3D points [X Y Z], where Z is the pixel intensity
    coeff = [X(:) Y(:) ones(w*h,1)] \ I(:);
    %# compute the shading plane
    shading = coeff(1).*X + coeff(2).*Y + coeff(3);
    %# subtract the shading from the image
    I = I - shading;
    %# normalize to the entire [0,1] range
    I = ( I - min(I(:)) ) ./ range(I(:));
end
Now let's test it on the given image:
img = im2double( imread('http://i.stack.imgur.com/JmHKJ.jpg') );
subplot(411), imshow(img)
subplot(412), imshow( correctLighting(img,'rgb') )
subplot(413), imshow( correctLighting(img,'hsv') )
subplot(414), imshow( correctLighting(img,'lab') )
The difference is subtle, but it might improve the results of further image processing and the OCR task.
EDIT: Here are some results I obtained by applying other contrast-enhancement techniques (IMADJUST, HISTEQ, ADAPTHISTEQ) to the different colorspaces in the same manner as above:
Remember you have to fine-tune the parameters to fit your image...
It looks like your question has been more or less answered already (see d00b's comment); however, here are a few basic image-processing tips that might help you.
First, you could try a simple imadjust. This simply maps the pixel intensities to "better" values, which often increases the contrast (making the image easier to view/read). I have had a lot of success with it in my work. It is easy to use too! I think it's worth a shot.
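A minimal sketch of that idea, assuming img holds the RGB plate image (with no limits given, imadjust saturates the darkest and brightest 1% of pixels):
% contrast-stretch the grayscale plate; tune the limits for your image
J = imadjust(rgb2gray(img));
imshow(J)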
Also, this looks promising if you simply want a higher-resolution image.
Enjoy the "pleasure" of image processing in MATLAB!
Good luck,
tylerthemiler
P.S. If you are flattening the image to binary, you are most likely ruining the image to start with, so don't do that if you can avoid it!
As you only want to find digits (of which there are only 10), you can use cross-correlation.
For this you Fourier transform the picture of the plate. You also Fourier transform a pattern you want to match, e.g. a good representation of a picture of the digit 1. Then you multiply in Fourier space and inverse Fourier transform the result.
In the final cross-correlation you will see pronounced peaks where the pattern overlaps nicely with your image.
You do this 10 times, once per digit, and know where each digit is. Note that you must correct the tilt before you do the cross-correlation.
This method has the advantage that you don't have to threshold your image.
There are certainly much more sophisticated algorithms in the literature for reading number plates. One could for example use Bayes' theorem to estimate which digit is most likely to occur (this helps a lot if you already have a database of possible numbers).
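A minimal sketch of the cross-correlation idea, assuming plate is a tilt-corrected color plate image on disk and digit1.png is a hypothetical template image of the digit 1 (the Image Processing Toolbox function normxcorr2 does a normalized version of the same thing):
% FFT-based cross-correlation of a digit template against the plate
plate = im2double(rgb2gray(imread('plate.jpg'))); % hypothetical input file
template = im2double(imread('digit1.png')); % hypothetical digit template
[h, w] = size(plate);
% multiply in Fourier space; the conjugate turns convolution into correlation
xc = real(ifft2( fft2(plate) .* conj(fft2(template, h, w)) ));
% pronounced peaks mark where the template overlaps the plate nicely
[peak, idx] = max(xc(:));
[py, px] = ind2sub(size(xc), idx); % location of the strongest match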