Manually turn RGB image into Grayscale Matlab - matlab

Let me preface this by saying: I understand there are functions that would do this for me, but I wanted to do it manually so I could understand exactly what is happening here.
My goal is to read in an RGB image and turn it into a grayscale image. All of my image processing work up to this point has been solely grayscale-based, so I am a little bit lost.
I have tried to first read in the image by doing
fid = fopen('color.raw');
myimage = imread(fid, [512 384], 'uint8');
fclose(fid);
but myimage ends up as an empty 0 x 0 matrix. I think I need to assign an "R", "G", and "B" value to each pixel, thus giving each pixel three values for the three colors, but I am not sure if that's correct, or even how to attempt that.
My question is the following: how would I read in an RGB image and then turn it into a grayscale one?
EDIT: So I understand how I would go about turning the RGB into the grayscale after getting the R, G, and B values, but I cannot seem to make MATLAB read in the image. Can anyone offer any assistance? Using imread seems to make the most sense, but
[pathname] = ...
uigetfile({'*.raw';'*.mdl';'*.mat';'*.*'},'File Selector');
fid = fopen(pathname);
myimage = imread(fid);
fclose(fid);
is not working; I'm getting an "invalid filename" error from fopen, and I really do not understand why.

Conceptually, you just average the red, green, and blue components and then provide that average as the output for each of the three components. You can do this using a simple arithmetic mean:
(r+g+b)/3.
Or, for kicks, you can use a geometric mean to get a slightly different contrast:
(r*g*b)^(1/3).
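In Python/NumPy the same two formulas look like this (the pixel values below are made up for illustration):

```python
import numpy as np

# Hypothetical 2x2 RGB image with values in [0, 255].
rgb = np.array([[[120, 60, 30], [200, 100, 50]],
                [[10, 20, 30], [255, 255, 255]]], dtype=np.float64)

r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# Arithmetic mean: (r + g + b) / 3
gray_arith = (r + g + b) / 3.0

# Geometric mean: (r * g * b)^(1/3), slightly different contrast
gray_geom = (r * g * b) ** (1.0 / 3.0)
```

Note that the geometric mean sends any pixel with a zero channel to zero, which is part of why its contrast differs.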

imread - read image from graphics file (it's in the documentation)
http://www.mathworks.com/help/matlab/ref/imread.html
For RGB to gray scale use the luminosity method. http://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale/
The luminosity method is a more sophisticated version of the average
method. It also averages the values, but it forms a weighted average
to account for human perception. We’re more sensitive to green than
other colors, so green is weighted most heavily. The formula for
luminosity is 0.21 R + 0.71 G + 0.07 B.

The problem with averaging all three values, as other answers suggest, is that this is not how our eyes work at all.
When you average your images, you get the result below (on the built-in peppers.png image).
You are effectively doing grayImg = .33*R + .33*G + .33*B when you take the average. Now compare that to how MATLAB calculates grayscale values (with consideration for how humans view images):
0.2989 * R + 0.5870 * G + 0.1140 * B
See the stark contrast in values?
To get better looking data, stay closer to the coefficients MATLAB uses :)
The way MATLAB renders it is like this:
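To see the numeric difference, here is a quick Python sketch comparing the naive average with MATLAB's rgb2gray weighting on a made-up pixel:

```python
# Hypothetical pixel; the weighted coefficients are MATLAB's rgb2gray weights.
r, g, b = 100.0, 150.0, 50.0

gray_avg = (r + g + b) / 3.0                      # naive average
gray_lum = 0.2989 * r + 0.5870 * g + 0.1140 * b   # perceptual weighting
```

Because green dominates the weighting, the greenish pixel comes out noticeably brighter than the plain average suggests.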

Related

Plot true color Sentinel-2A imagery in Matlab

Through a combination of non-MATLAB/non-native tools (GDAL) as well as native tools (geoimread), I can ingest Sentinel-2A data either as individual bands or as an RGB image, having employed gdal_merge. I'm stuck at a point where using
imshow(I, [])
produces a black image, with apparently no signal. The range of intensity values in the image is 271 - 4349. I know that there is a good signal in the image, because when I do:
bit_depth = 2^15;
I = swapbytes(I);
[I_indexed, color_map] = rgb2ind(I, bit_depth);
I_double = im2double(I_indexed, 'indexed');
ax1 = figure;
colormap(ax1, color_map);
image(I_double)
i.e. index the image, collect a colormap, set the colormap and then call the image function, I get a likeness of the region I'm exploring (albeit very strangely colored)
I'm currently considering whether I should try:
Find a low-level description of Sentinel-2A data, implement the scaling/correction
Use a toolbox, possibly this one.
Possibly adjust output settings in one of the earlier steps involving GDAL
Comments or suggestions are greatly appreciated.
A basic scaling scheme is:
% convert image to double
I_double = im2double(I);
% scaling
max_intensity = max(I_double(:));
min_intensity = min(I_double(:));
range_intensity = max_intensity - min_intensity;
I_scaled = (2^16 - 1).*((I_double - min_intensity) ./ range_intensity);
% display
imshow(uint16(I_scaled))
noting the importance of casting to uint16 from double for imshow.
A couple points...
You mention that I is an RGB image (i.e. N-by-M-by-3 data). If this is the case, the [] argument to imshow will have no effect. That only applies automatic scaling of the display for grayscale images.
Given the range of intensity values you list (271 to 4349), I'm guessing you are dealing with a uint16 data type. Since this data type has a maximum value of 65535, your image data only covers about the lower 16th of this range. This is why your image looks practically black. It also explains why you can see the signal with your given code: you apply swapbytes to I before displaying it with image, which in this case will shift values into the higher intensity ranges (e.g. swapbytes(uint16(4349)) gives a value of 64784).
In order to better visualize your data, you'll need to scale it. As a simple test, you'll probably be able to see something appear by just scaling it by 8 (to cover a little more than half of your dynamic range):
imshow(8.*I);
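Both effects are easy to verify outside MATLAB; here is a small Python/NumPy sketch of the byte swap and a min-max rescale to the uint16 range (the sample intensities are made up):

```python
import numpy as np

# The byte-swap example from above: 4349 as uint16 is 0x10FD;
# swapping its two bytes gives 0xFD10 = 64784.
v = np.uint16(4349)
swapped = v.byteswap()

# Min-max rescale of a made-up intensity range [271, 4349] to the
# full uint16 range, mirroring the scaling scheme in the answer.
I = np.array([271.0, 2000.0, 4349.0])
I_scaled = (2**16 - 1) * (I - I.min()) / (I.max() - I.min())
```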

Scramble the pixels of black and white images in Matlab

I have a series of black and white images (not greyscale, black and white; 2D matrices in MATLAB), and I need to randomly scramble the pixels. I found this package on the MathWorks File Exchange (https://it.mathworks.com/matlabcentral/fileexchange/66472-image-shuffle); one of the functions, imScrambleRand, does exactly what I need, but it works on RGB images (3D matrices). Is there a way to transform b&w images into 3D matrices so that I can use that function? Or can anyone suggest any other script that does what I need? Keep in mind that I'm not familiar with MATLAB, but I'll do my best.
Thank you.
EDIT 1: When I import the BW image I get a 2D matrix of logical values (0 = black, 1 = white). I think the different data type (logical vs. integer) is what yields errors when using the function for RGB images.
EDIT 2: I adapted the demo code from the aforementioned package, used the suggestion by @Jonathan for transforming a 2D matrix into a 3D matrix, and added a loop to transform the logical values into RGB integer values, then used the imScrambleRand function. It works, but what I obtain is the following image: SCRAMBLED IMAGE. This is the BW picture I start with: BW IMAGE. So I checked the scrambled image, and the function from the FEX file actually scrambles within the RGB values, meaning that I found, for instance, a pixel with RGB 0,255,0. So I solved one problem, but there's actually a problem within the function: it doesn't scramble pixels, it scrambles values, generating colors that were not in the original picture.
EDIT 3: I used the code provided by @nhowe and I obtain exactly what I need, thanks!
EDIT 4: OK, it turns out it's not OK to scramble the pixels, since it makes the image too scattered and different from the starting image (you don't say?), but I need to scramble BLOCKS OF PIXELS so that you can't really recognize the image but the black pixels are not too scattered. Is there a way to do that using the code provided by @nhowe?
EDIT 5: It should be ok with this function: https://it.mathworks.com/matlabcentral/fileexchange/56160-hio-been-hb-imagescramble
A simple way to scramble matrix M:
r = rand(size(M));
[~,ri] = sort(r(:));
M(ri) = M;
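For comparison, the same sort-random-keys permutation trick in Python/NumPy (the 3-by-4 matrix stands in for a binary image):

```python
import numpy as np

# Same idea as the MATLAB snippet: draw random keys, sort them to get
# a random permutation, and reindex the flattened image with it.
rng = np.random.default_rng(0)

M = np.arange(12).reshape(3, 4)          # stand-in for a binary image
ri = np.argsort(rng.random(M.size))      # random permutation of indices
scrambled = M.flatten()[ri].reshape(M.shape)
```

The scrambled result contains exactly the original pixel values, just in a random order.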
The simplest solution to go from grayscale to RGB might be this:
rgbImage = cat(3, grayImage, grayImage, grayImage);
Then apply your function from FEX and extract one color channel, assuming that the FEX function will yield three identical color channels.
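The cat(3, ...) replication has a direct analogue in Python/NumPy, sketched here:

```python
import numpy as np

# Replicate a grayscale (2D) image into three identical channels,
# the NumPy equivalent of MATLAB's cat(3, g, g, g).
gray = np.array([[0, 1], [1, 0]])
rgb = np.stack([gray, gray, gray], axis=-1)
```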

Grayscale image and L*a*b space in MATLAB

I have a bunch of images, the vast majority of which are color (RGB) images. I need to apply some spatial features in the three different channels of their Lab color space. The conversion from RGB color space to Lab color space is straightforward through rgb2gray. However, this naturally fails when the image is grayscale (consists of one channel only, with the numerical representation being double, uint8, anything really).
I am familiar with the fact that the "luminance" (L) channel of the Lab color space is essentially the grayscaled original RGB image. This question, however, is of a different nature; what I'm asking is: given an image that is already grayscale, I trivially get the L channel in Lab color space. What should the a and b channels be? Should they be zero? The following example, using the built-in "peppers" image, shows the visual effect of doing so:
I = imread('peppers.png');
figure; imshow(I, []);
Lab = rgb2gray(I);
Lab(:, :, 2) = 0;
Lab(:, :, 3) = 0;
figure; imshow(Lab, []);
If you run this code, you will note that the second imshow outputs a reddish version of the first image, resembling an old darkroom photo. I admit to not being knowledgeable enough about what the a and b color channels represent to understand how I should deal with them in grayscale images, and was looking for some assistance.
An XY Type of Question.
IN CIELAB, a* and b* at 0 means no chroma: i.e. it's greyscale.
BUT WAIT THERE'S MORE! If you're having problems it's because that's not the actual question you need answered:
Incorrect Assumptions
First off, no: the L* of L*a*b* is NOT luminance, and it is not interchangeable with it.
Luminance (L or Y) is a linear measure of light.
L* (Lstar) is perceptual lightness; it follows human perception (more or less, depending on a bazillion contextual things).
rgb2grey
rgb2grey does not convert to L*a*b*
Also, unfortunately some of the MATLAB functions for colorspaces have errors in implementation.
If you want to convert an sRGB color/image/pixel to LAB, then you need to follow this flowchart:
sRGB -> linear RGB -> CIEXYZ -> CIELAB
If you only want the greyscale or lightness information, with no color you can:
sRGB -> linear RGB -> CIE Y (spectral weighting) -> L* (perceptual lightness)
I discuss this simple method of sRGB to L* in this post
And if you want to see more in depth details, I suggest Bruce Lindbloom.
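As a rough Python sketch of that flowchart, using the standard sRGB and CIE constants (treat this as an illustration, not a vetted colorimetry library):

```python
# sRGB -> linear RGB -> CIE Y (spectral weighting) -> L* (perceptual lightness)

def srgb_to_linear(c):
    # Invert the sRGB transfer curve (per channel, c in [0, 1]).
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def lightness(r, g, b):
    # Spectral weighting to relative luminance Y, then Y to CIE L*.
    y = (0.2126 * srgb_to_linear(r)
         + 0.7152 * srgb_to_linear(g)
         + 0.0722 * srgb_to_linear(b))
    # CIE L* has a linear segment near black and a cube root above it.
    return 116 * y ** (1 / 3) - 16 if y > (6 / 29) ** 3 else (29 / 3) ** 3 * y
```

Note how middle grey (sRGB 0.5) lands near L* = 53, not 50: the transfer curve and the cube root do not cancel exactly.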

Image similarity by Euclidean distance in hsv color space in MATLAB

The code included below calculates the Euclidean distance between two images in hsv color space and if the result is under a Threshold (here set to 0.5) the two images are similar and it will group them in one cluster.
This will be done for a group of images (video frames actually).
It worked well on a group of sample images, but when I change the sample it starts to behave oddly, e.g. the result is low for two different images and high (like 1.2) for two similar images.
For example the result for these two very similar images is relatively high: first pic and second pic when it actually should be under 0.5.
What is wrong?
In the code below, f is divided by 100 to allow comparison to values near 0.5.
Im1 = imread('1.jpeg');
Im2 = imread('2.jpeg');
hsv = rgb2hsv(Im1);
hn1 = hsv(:,:,1);
hn1=hist(hn1,16);
hn1=norm(hn1);
hsv = rgb2hsv(Im2);
hn2 = hsv(:,:,1);
hn2=hist(hn2,16);
hn2=norm(hn2);
f = norm(hn1-hn2,1)
f=f/100
These two lines:
hn1=hist(hn1,16);
hn1=norm(hn1);
convert your 2D image into a single scalar. I suspect that is not what you're interested in doing.
EDIT:
Probably a better approach would be:
hn1 = hist(hn1(:),16) / numel(hn1(:));
but, you haven't really given us much on the math, so this is just a guess.
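To make the suggestion concrete, here is a rough Python/NumPy sketch of comparing two hue channels via normalized 16-bin histograms; the "images" are random stand-ins:

```python
import numpy as np

# Compare two hue channels via normalized 16-bin histograms and an
# L1 distance, along the lines suggested in the answer above.
rng = np.random.default_rng(1)
hue1 = rng.random((64, 64))   # stand-in hue channel, values in [0, 1)
hue2 = hue1.copy()            # identical image for the sanity check

def hue_hist(h, bins=16):
    counts, _ = np.histogram(h.ravel(), bins=bins, range=(0.0, 1.0))
    return counts / h.size    # normalize so the bins sum to 1

f = np.linalg.norm(hue_hist(hue1) - hue_hist(hue2), 1)  # L1 distance
```

Normalizing by the pixel count matters: without it, two images of different sizes would get a large distance even when their hue distributions match.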

How to improve image quality in Matlab

I'm building an "Optical Character Recognition" system.
So far the system is capable of identifying licence plates of good quality, without any noise.
What I want in the next level is to be able to identify licence plates of poor quality, due to various reasons.
For example, let's look at the next plate:
As you see, the numbers do not look clear, because of light reflections or something else.
My question: how can I improve the image quality, so that when I move to a binary image the numbers will not fade away?
Thanks in advance.
We can try to correct for the lighting effect by fitting a linear plane over the image intensities, which approximates the average level across the image. By subtracting this shading plane from the original image, we can attempt to normalize lighting conditions across the image.
For color RGB images, simply repeat the process on each channel separately, or even apply it in a different colorspace (HSV, L*a*b*, etc.).
Here is a sample implementation:
function img = correctLighting(img, method)
    if nargin < 2, method = 'rgb'; end
    switch lower(method)
        case 'rgb'
            %# process R,G,B channels separately
            for i=1:size(img,3)
                img(:,:,i) = LinearShading( img(:,:,i) );
            end
        case 'hsv'
            %# process intensity component of HSV, then convert back to RGB
            HSV = rgb2hsv(img);
            HSV(:,:,3) = LinearShading( HSV(:,:,3) );
            img = hsv2rgb(HSV);
        case 'lab'
            %# process luminosity layer of L*a*b*, then convert back to RGB
            LAB = applycform(img, makecform('srgb2lab'));
            LAB(:,:,1) = LinearShading( LAB(:,:,1) ./ 100 ) * 100;
            img = applycform(LAB, makecform('lab2srgb'));
    end
end

function I = LinearShading(I)
    %# create X-/Y-coordinate values for each pixel
    [h,w] = size(I);
    [X,Y] = meshgrid(1:w, 1:h);
    %# fit a linear plane over the 3D points [X Y Z], Z being the pixel intensities
    coeff = [X(:) Y(:) ones(w*h,1)] \ I(:);
    %# compute the shading plane
    shading = coeff(1).*X + coeff(2).*Y + coeff(3);
    %# subtract the shading from the image
    I = I - shading;
    %# normalize to the entire [0,1] range
    I = ( I - min(I(:)) ) ./ range(I(:));
end
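The plane fit at the heart of LinearShading can be sanity-checked in isolation; here is a Python/NumPy sketch on a synthetic image that is an exact plane (np.linalg.lstsq plays the role of MATLAB's backslash):

```python
import numpy as np

# Solve [X Y 1] * coeff = Z by least squares, the NumPy analogue of
# the backslash step in LinearShading above.
h, w = 4, 5
Y, X = np.mgrid[1:h + 1, 1:w + 1]                 # pixel coordinates

# Synthetic "image" that is an exact plane: Z = 2*X + 3*Y + 1
Z = 2 * X + 3 * Y + 1.0

A = np.column_stack([X.ravel(), Y.ravel(), np.ones(h * w)])
coeff, *_ = np.linalg.lstsq(A, Z.ravel(), rcond=None)
shading = coeff[0] * X + coeff[1] * Y + coeff[2]  # reconstructed plane
```

On a real image the residual Z - shading is what survives the subtraction: the detail left after the smooth lighting gradient is removed.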
Now let's test it on the given image:
img = im2double( imread('http://i.stack.imgur.com/JmHKJ.jpg') );
subplot(411), imshow(img)
subplot(412), imshow( correctLighting(img,'rgb') )
subplot(413), imshow( correctLighting(img,'hsv') )
subplot(414), imshow( correctLighting(img,'lab') )
The difference is subtle, but it might improve the results of further image processing and the OCR task.
EDIT: Here are some results I obtained by applying other contrast-enhancement techniques (IMADJUST, HISTEQ, ADAPTHISTEQ) on the different colorspaces in the same manner as above:
Remember you have to fine-tune any parameters to fit your image...
It looks like your question has been more or less answered already (see d00b's comment); however, here are a few basic image processing tips that might help you here.
First, you could try a simple imadjust. This simply maps the pixel intensities to "better" values, which often increases the contrast (making it easier to view/read). I have had a lot of success with it in my work. It is easy to use, too! I think it's worth a shot.
Also, this looks promising if you simply want a higher resolution image.
Enjoy the "pleasure" of image-processing in MATLAB!
Good luck,
tylerthemiler
P.S. If you are flattening the image to binary, though, you are most likely ruining the image to start with, so don't do that if you can avoid it!
As you only want to find digits (of which there are only 10), you can use cross-correlation.
For this you Fourier transform the picture of the plate. You also Fourier transform a pattern you want to match, e.g. a good representation of a picture of the digit 1. Then you multiply in Fourier space and inverse Fourier transform the result.
In the resulting cross-correlation, you will see pronounced peaks where the pattern overlaps nicely with your image.
You do this 10 times, once per digit, and know where each digit is. Note that you must correct the tilt before you do the cross-correlation.
This method has the advantage that you don't have to threshold your image.
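A minimal Python/NumPy sketch of the FFT-based cross-correlation described above, with a synthetic image and template standing in for the plate and digit:

```python
import numpy as np

# The peak of ifft2( F(image) * conj(F(template)) ) marks where the
# template sits in the image (circular cross-correlation).
img = np.zeros((32, 32))
template = np.zeros((32, 32))

digit = np.ones((5, 3))          # crude "digit" blob
img[10:15, 20:23] = digit        # place it at row 10, col 20
template[0:5, 0:3] = digit       # template anchored at the origin

corr = np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(template))))
peak = np.unravel_index(np.argmax(corr), corr.shape)
```

The peak index recovers the offset at which the template was placed; with a real plate you would zero-pad the digit template to the plate's size in the same way.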
There are certainly much more sophisticated algorithms in the literature for reading number plates. One could, for example, use Bayes' theorem to estimate which digit is most likely to occur (this helps a lot if you already have a database of possible numbers).