Plot true color Sentinel-2A imagery in MATLAB

Through a combination of non-MATLAB/non-native tools (GDAL) as well as native tools (geoimread), I can ingest Sentinel-2A data either as individual bands or as an RGB image produced with gdal_merge. I'm stuck at a point where using
imshow(I, [])
produces a black image, with apparently no signal. The range of intensity values in the image is 271-4349. I know that there is a good signal in the image because when I do:
bit_depth = 2^15;                               % number of colors for the indexed image
I = swapbytes(I);                               % swap byte order
[I_indexed, color_map] = rgb2ind(I, bit_depth); % index the image and collect a colormap
I_double = im2double(I_indexed, 'indexed');     % convert the indexed image to double
ax1 = figure;
colormap(ax1, color_map);                       % set the colormap
image(I_double)                                 % display
i.e. index the image, collect a colormap, set the colormap, and then call the image function. I get a likeness of the region I'm exploring (albeit very strangely colored).
I'm currently considering whether I should try:
- finding a low-level description of Sentinel-2A data and implementing the scaling/correction,
- using a toolbox, possibly this one, or
- adjusting output settings in one of the earlier steps involving GDAL.
Comments or suggestions are greatly appreciated.
A basic scaling scheme is:
% convert image to double
I_double = im2double(I);
% scaling
max_intensity = max(I_double(:));
min_intensity = min(I_double(:));
range_intensity = max_intensity - min_intensity;
I_scaled = (2^16 - 1) .* ((I_double - min_intensity) ./ range_intensity);
% display
imshow(uint16(I_scaled))
noting the importance of casting to uint16 from double for imshow.

A couple of points...
You mention that I is an RGB image (i.e. N-by-M-by-3 data). If this is the case, the [] argument to imshow will have no effect; that automatic display scaling only applies to grayscale images.
Given the range of intensity values you list (271 to 4349), I'm guessing you are dealing with a uint16 data type. Since this data type has a maximum value of 65535, your image data only covers about the lower 16th of this range. This is why your image looks practically black. It also explains why you can see the signal with your given code: you apply swapbytes to I before displaying it with image, which in this case will shift values into the higher intensity ranges (e.g. swapbytes(uint16(4349)) gives a value of 64784).
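You can verify the byte-swap effect at the command line (a quick check):
x = uint16(4349);   % 0x10FD
swapbytes(x)        % returns 64784 (0xFD10), i.e. the two bytes reversed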
In order to better visualize your data, you'll need to scale it. As a simple test, you'll probably be able to see something appear by just scaling it by 8 (to cover a little more than half of your dynamic range):
imshow(8.*I);
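For a more principled stretch than a fixed factor, a percentile-based contrast stretch is a common choice. A minimal sketch, assuming I is a uint16 RGB array (the 1%/99% percentiles are an arbitrary choice; values like 2%/98% are also common for satellite imagery):
limits = stretchlim(I, [0.01 0.99]);   % per-channel percentile limits (2-by-3)
I_stretched = imadjust(I, limits, []); % stretch each channel to the full output range
imshow(I_stretched)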

Related

Saving kinect depth frame (uint16) using MATLAB but why is it too dark?

Recently I have been working with the Kinect in MATLAB. I capture a depth frame, which is in uint16 format, but when I display it with imshow or save it with imwrite, the image is too dark. When I set the display range, or convert the frame to uint8, it becomes brighter. I want to save it in this brighter form without converting to uint8, e.g. by scaling the range between 0 and 4500.
vid = videoinput('kinect',1);
vid2 = videoinput('kinect',2);
vid.FramesPerTrigger = 1;
vid2.FramesPerTrigger = 1;
% Set the trigger repeat for both devices to 200, in order to acquire
% 201 frames from both the color sensor and the depth sensor.
vid.TriggerRepeat = 200;
vid2.TriggerRepeat = 200;
% Configure both sensors for manual triggering.
triggerconfig([vid vid2],'manual');
% Start both video objects.
start([vid vid2]);
trigger([vid vid2])
[imgDepth, ts_depth, metaData_Depth] = getdata(vid2);
f = imgDepth;
figure, imshow(f);
figure, imshow(f,[0 4500]);
imwrite(f,'C:\Users\sufi\Desktop\matlab_kinect\Data_image\output\depth\fo.tiff');
stop([vid vid2]);
With the display range set, the frame is visible; without setting the display range, the image appears nearly black (screenshots omitted).
The values in a 16-bit image range from 0 to 65535.
If we take a look at the histogram of your image (not shown here), we see that the max value is 7995, but that's just a few outliers; most of the information is somewhere between 700 and 4300.
So all the values sit in roughly the lowest 5-10% of the value range, which makes the image look very dark.
In order to make it look better for humans, we have to normalize it. (Some image viewers do this automatically.)
So in order to get a nicer image into your PowerPoint presentation you have two options:
a) display it in an image viewer that can render it nicely and take a screenshot, or
b) normalize the image in MATLAB and save it to a file.
You can further improve the image by removing the outliers before normalization.
One simple way is to scale the image according to the following formula:
pixel_value = pixel_value / 4500 * 65535
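In MATLAB this might look like the following sketch (the clipping threshold of 4500 comes from your stated range; the output filename is a placeholder):
f = min(f, 4500);                            % clip outliers above 4500
f_norm = uint16(double(f) ./ 4500 .* 65535); % stretch to the full uint16 range
imwrite(f_norm, 'depth_normalized.tiff');    % hypothetical output filename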
If you want to see the exact image that you get from uint8, I guess the following steps will work for you.
When casting the image to uint8, MATLAB probably first clips the values above some threshold, let's say 4095 = 2^12 - 1 (I'm not sure about the exact value), and then right-shifts (4 shifts in our case) to bring them into the 0-255 range.
So I guess multiplying the uint8 value by 256 and casting the result to uint16 will give you the same image:
pixel_uint16 = uint16(pixel_uint8) * 256   % or bitshift(uint16(pixel_uint8), 8)
(Note the cast to uint16 before the multiply; uint8 arithmetic would saturate at 255.)

Gaussian kernel isn't showing up [duplicate]

I have imported an image, converted it to double precision, and performed some filtering on it.
When I plot the result with imshow, the double image is too dark. But when I use imshowpair to plot the original and the final image, both images are correctly displayed.
I have tried uint8, im2uint8, and multiplying by 255 before calling those functions, but the only way to obtain the correct image is with imshowpair.
What can I do?
It sounds like a problem where the majority of your intensities / colour data are outside the dynamic range that imshow accepts when showing double data.
I also see that you're using im2double, but im2double simply converts the image to double, and if the image is already double, nothing happens. The culprit is probably the way you are filtering the images. Are you doing some sort of edge detection? The reason you're getting dark images is probably that the majority of your intensities are negative, or are hovering around 0. When displaying images of type double, imshow assumes that the dynamic range of intensities is [0,1].
Therefore, one way to resolve your problem is to do:
imshow(im,[]);
This rescales the display range so that the smallest value is mapped to 0 and the largest to 1.
If you'd like a more permanent solution, consider creating a new output variable that does this for you:
out = (im - min(im(:))) / (max(im(:)) - min(im(:)));
This will perform the same shifting that imshow does when displaying data for you. You can now just do:
imshow(out);
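As a side note, mat2gray performs the same min-max normalization in a single call:
out = mat2gray(im); % rescales so min(im) -> 0 and max(im) -> 1
imshow(out);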

Automatic detection of B/W image against colored background. What should I do when local thresholding doesn't work?

I have an image which consists of a black and white document against a heterogeneous colored background. I need to automatically detect the document in my image.
I have tried Otsu's method and a local thresholding method, but neither was successful. Edge detectors like Canny and Sobel didn't work either.
Can anyone suggest some method to automatically detect the document?
Here's an example starting image: (image omitted)
After using various threshold methods, I was able to get the following output: (image omitted)
What follows is an automated global method for isolating an area of low color saturation (e.g. a b/w page) against a colored background. This may work well as an alternative approach when other approaches based on adaptive thresholding of grayscale-converted images fail.
First, we load an RGB image I, convert it from RGB to HSV, and isolate the saturation channel:
I = imread('/path/to/image.jpg');
Ihsv = rgb2hsv(I); % convert image to HSV
Isat = Ihsv(:,:,2); % keep only saturation channel
In general, a good first step when deciding how to proceed with any object detection task is to examine the distribution of pixel values. In this case, these values represent the color saturation levels at each point in our image:
% Visualize saturation value distribution
imhist(Isat); box off
From this histogram, we can see that there appear to be at least 3 distinct peaks. Given that our target is a black and white sheet of paper, we’re looking to isolate saturation values at the lower end of the spectrum. This means we want to find a threshold that separates the lower 1-2 peaks from the higher values.
One way to do this in an automated way is through Gaussian Mixture Modeling (GMM). GMM can be slow, but since you’re processing images offline I assume this is not an issue. We’ll use Matlab’s fitgmdist function here and attempt to fit 3 Gaussians to the saturation image:
% Find threshold for calling ROI using GMM
n_gauss = 3; % number of Gaussians to fit
gmm_opt = statset('MaxIter', 1e3); % max iterations to converge
gmmf = fitgmdist(Isat(:), n_gauss, 'Options', gmm_opt);
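(If fitting on every pixel turns out to be too slow, one possible workaround, not part of the original approach, is to fit the model on a random subsample of pixels and then classify the full image:)
n_samp = min(1e5, numel(Isat));             % sample size; an arbitrary choice
samp = Isat(randperm(numel(Isat), n_samp)); % random subset of saturation values
gmmf = fitgmdist(samp(:), n_gauss, 'Options', gmm_opt);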
Next, we use the GMM fit to classify each pixel and visualize the results of our GMM classification:
% Classify pixels using GMM
gmm_class = cluster(gmmf, Isat(:));
% Plot histogram, colored by class
hold on
bin_edges = linspace(0,1,256);
for j=1:n_gauss, histogram(Isat(gmm_class==j), bin_edges); end
In this example, we can see that the GMM ended up grouping the 2 far left peaks together (blue class) and split the higher values into two classes (yellow and red). Note: your colors might be different, since GMM is sensitive to random initial conditions. For our use here, this is probably fine, but we can check that the blue class does in fact capture the object we’d like to isolate by visualizing the image, with pixels colored by class:
% Visualize classes as image
im_class = reshape(gmm_class, size(Isat));
imagesc(im_class); axis image off
So it seems like our GMM segmentation on saturation values gets us in the right ballpark - grouping the document pixels (blue) together. But notice that we still have two problems to fix. First, the big bar across the bottom is also included in the same class with the document. Second, the text printed on the page is not being included in the document class. But don't worry, we can fix these problems by applying some filters on the GMM-grouped image.
First, we’ll isolate the class we want, then do some morphological operations to low-pass filter and fill gaps in the objects.
Isat_bw = im_class == find(gmmf.mu == min(gmmf.mu)); % isolate the lowest-saturation class
opened = imopen(Isat_bw, strel('disk',3));  % morph open to remove small speckle
closed = imclose(opened, strel('disk',50)); % morph close (on the opened image) to fill gaps
imshow(closed)
Next, we’ll use a size filter to isolate the document ROI from the big object at the bottom. I’ll assume that your document will never fill the entire width of the image and that any solid objects bigger than the sheet of paper are not wanted. We can use the regionprops function to give us statistics about the objects we detect and, in this case, we’ll just return the objects’ major axis length and corresponding pixels:
% Size filtering
props = regionprops(closed,'PixelIdxList','MajorAxisLength');
[~,ridx] = min([props.MajorAxisLength]);
output_im = zeros(numel(closed),1);
output_im(props(ridx).PixelIdxList) = 1;
output_im = reshape(output_im, size(closed));
% Display final mask
imshow(output_im)
Finally, we are left with output_im - a binary mask for a single solid object corresponding to the document. If this particular size filtering rule doesn’t work well on your other images, it should be possible to find a set of values for other features reported by regionprops (e.g. total area, minor axis length, etc.) that give reliable results.
A side-by-side comparison of the original and the final masked image shows that this approach produces pretty good results for your sample image, but some of the parameters (like the size exclusion rules) may need to be tuned if results for other images aren't quite as nice.
% Display the original and masked image side by side
% (the elementwise mask multiply relies on implicit expansion, R2016b+)
image([I I.*uint8(output_im)]); axis image; axis off
One final note: be aware that the GMM algorithm is sensitive to random initial conditions, and therefore might randomly fail, or produce undesirable results. Because of this, it's important to have some kind of quality control measures in place to ensure that these random failures are detected. One possibility is to use the posterior probabilities of the GMM model to form some kind of criteria for rejecting a certain fit, but that’s beyond the scope of this answer.
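That said, a simple mitigation worth trying (my suggestion, not part of the answer above) is fitgmdist's built-in 'Replicates' option, which repeats the fit from several random initializations and keeps the best one:
gmmf = fitgmdist(Isat(:), n_gauss, 'Options', gmm_opt, ...
    'Replicates', 5); % keep the best of 5 random initializations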

Histogram normalization for normalizing background changes

I recorded a sequence of depth images using Kinect v2, but the background brightness is not constant; it keeps changing from dark to light and back again.
So I was thinking of using histogram normalization on each image in the sequence to normalize the background to the same level. Can anyone please tell me how I can do this?
MATLAB has a function for histogram matching, imhistmatch, and the documentation has some great examples too.
Just use any frame as the reference (I suggest the first one, but there is no real reason to prefer it), and keep it for all the remaining frames. If you want to decrease processing time you can also try lowering the number of bins. For a uint8 image there are usually 256 bins, but as the documentation shows, reducing the count still produces favorable results.
I don't know if Kinect images are RGB or grayscale; for this example I'm assuming they are grayscale.
kinect_images = Depth;
num_frames = size(kinect_images,3); % maybe size(...,4): I don't know if Kinect
                                    % images are grayscale (3-D) or RGB (4-D)
num_of_bins = 32;
% imhistmatch is a recent addition to MATLAB; use this variable to
% indicate whether or not you have it
I_have_imhistmatch = true;
% output variable
equalized_images = cast(zeros(size(kinect_images)), class(kinect_images));
% store the first frame as the reference
ref_image = kinect_images(:,:,1); % if RGB you may need (:,:,:,1)
ref_hist = imhist(ref_image);
% go through every frame and match its histogram to the reference
for ii = 1:num_frames
    if (I_have_imhistmatch)
        % use this with newer versions of MATLAB
        equalized_images(:,:,ii) = imhistmatch(kinect_images(:,:,ii), ref_image, num_of_bins);
    else
        % use this line with older versions that don't have imhistmatch
        equalized_images(:,:,ii) = histeq(kinect_images(:,:,ii), ref_hist);
    end
end
implay(equalized_images)

Matlab Grayscale Normalization

I am new to MATLAB and to image processing, and I am having some issues normalizing an image, but I am not sure why.
In my code I store the image as a black and white image in lim3, then:
minvalue = min(min(min(lim3)));
maxvalue = max(max(max(lim3)));
normimg = (lim3-minvalue)*255/(maxvalue-minvalue);
Unfortunately, this gives a new image that is exactly the same as lim3, but I am not sure why. Ideally, I don't want to use the histeq function, so if someone could explain how to fix this code to get it to work, I would appreciate it.
All of the people above in the comments have raised very good points, but if you want a tl;dr answer, here are the most important points:
If your minimum and maximum values are 0 and 255 respectively for all colour channels, then the minimum and maximum colour values are black and white respectively. The same situation applies if your image is a single-channel / grayscale image. In that case, normalizing your output image will leave it looking identical, because you would be subtracting 0 and multiplying by 255/255 = 1. However, just as a minor note, your above code will work if your image is grayscale. I would also get rid of the superfluous nested min/max calls.
You need to make sure that your image is cast to double before you do this scaling, as you will most likely generate floating point numbers. If a scaled value ends up < 1, it will inadvertently be truncated to 0 in an integer type. In general, you will lose precision when normalizing the intensities, as the type of the image is most likely uint8. You also need to remember to cast back to uint8 when you're done, as that was the original type of the image. You can do this casting yourself, or you can use im2double, which essentially does this under the hood, normalizing the image's intensities to the range [0,1].
As such, if you really really really really... really... want to use your code above, you'd have to do something like this:
lim3 = double(lim3); %// Cast to double
minvalue = min(lim3(:)); %// Note the change here
maxvalue = max(lim3(:)); %// Got rid of superfluous nested min/max calls
normimg = uint8((lim3-minvalue)*255/(maxvalue-minvalue)); %// Cast back to uint8
This code will work if the image you are reading in is grayscale.
Bonus - For Colour Images
However, if you want to apply the above to colour images, I don't recommend you use this approach. The reason is that you will only see a difference if the minimum and maximum values for each colour plane are the same, 0 and 255 respectively. What I would recommend is normalizing each colour plane separately, so that each colour plane is pushed to the range of [0,1] rather than being bound to the minimum and maximum of just one colour plane.
As such, I would recommend you do something like this:
lim3 = double(lim3); %// Cast to double
normimg = uint8(zeros(size(lim3))); %// Allocate output image
for idx = 1 : 3
    chan = lim3(:,:,idx);
    minvalue = min(chan(:));
    maxvalue = max(chan(:));
    normimg(:,:,idx) = uint8((chan-minvalue)*255/(maxvalue-minvalue)); %// Cast back to uint8
end
The above code accesses each colour plane individually, normalizes the plane, then puts the result in the output image normimg. I would recommend you use the above approach instead if you want to see any contrast differences for colour images.
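As a final aside, on newer MATLAB releases (R2016b and later, which support implicit expansion) the loop can be replaced with a vectorized version; a sketch:
lim3 = double(lim3);                              %// Cast to double
mn = min(min(lim3, [], 1), [], 2);                %// 1-by-1-by-3 per-channel minima
mx = max(max(lim3, [], 1), [], 2);                %// 1-by-1-by-3 per-channel maxima
normimg = uint8((lim3 - mn) .* 255 ./ (mx - mn)); %// Per-channel stretch, cast back to uint8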