I have an image with 4476x9058 pixels. I'm trying to use mat2cell to split it into subimages with 100x300 pixels each. However, I'm getting the error:
Input arguments, D1 through D2, must sum to
each dimension of the input matrix size,
[4476 9058].
The code is shown below:
image = rand(4476,9058);
blockSizeRow = 100;
blockSizeCol = 300;
[nrows, ncols] = size(image);
nBlocksRow = floor(nrows / blockSizeRow);
nBlocksCol = floor(ncols / blockSizeCol);
rowDist = [blockSizeRow * ones(1, nBlocksRow), mod(nrows, nBlocksRow)];
colDist = [blockSizeCol * ones(1, nBlocksCol), mod(ncols, nBlocksCol)];
blockImages = mat2cell(image, rowDist, colDist,1);
Change mod(nrows, nBlocksRow) to mod(nrows, blockSizeRow), and mod(ncols, nBlocksCol) to mod(ncols, blockSizeCol). The leftover block is the remainder with respect to the block size, not the number of blocks; with that change rowDist sums to 4476 and colDist sums to 9058, which is exactly what mat2cell requires.
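For reference, the corrected lines would be (same variable names as above; 44 full row blocks of 100 plus a 76-pixel remainder, and 30 full column blocks of 300 plus a 58-pixel remainder):

rowDist = [blockSizeRow * ones(1, nBlocksRow), mod(nrows, blockSizeRow)]; % sums to 4476
colDist = [blockSizeCol * ones(1, nBlocksCol), mod(ncols, blockSizeCol)]; % sums to 9058
blockImages = mat2cell(image, rowDist, colDist, 1);

As a side note, the variable name image shadows the built-in function of the same name, so a different name is safer.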
I am reading this paper on enhancing image quality.
My problem is that when I calculate the weighted CDF, I always get 1 as the output.
Here is the sequence of formulas. Gamma is defined as

gamma = 1 - Cw(i)

where Cw is the weighted CDF of Pw, and I think my main problem is here. But to be clearer I will add the rest of the formulas too. The corresponding weighted histogram distribution function is

Pw(i) = Pmax * ((P(i) - Pmin) / (Pmax - Pmin))^Alpha

with Alpha = C(i), the CDF of the PDF P(i) = hc(i) / M, where hc(i) is the clipped histogram and M is the total intensity levels (I am not sure what M actually is; since the sum of P(i) should be 1, I assumed it must be the sum of hc(i)). The histogram is clipped as

hc(i) = min(h(i), Tc)

and the clipping limit Tc is computed from the mean value of the histogram.
Here is my code:
sample_img = imread('my image path');
sample_img = im2double(sample_img);
L = 256; % The magnitude of each color channel is confined within the range [0 , L-1]
redChannel = sample_img(:,:,1);
greenChannel = sample_img(:,:,2);
blueChannel = sample_img(:,:,3);
max_blue = max(max(blueChannel));
max_green = max(max(greenChannel));
max_red = max(max(redChannel));
min_blue = min(min(blueChannel));
min_green = min(min(greenChannel));
min_red = min(min(redChannel));
bn = blueChannel - min_blue;
rn = redChannel - min_red;
gn = greenChannel - min_green;
max_bn = max(max(bn));
max_rn = max(max(rn));
max_gn = max(max(gn));
b_stretched = bn/max_bn;
r_stretched = rn/max_rn;
g_stretched = gn/max_gn;
% Recombine separate color channels into an RGB image.
rgb_stretched_Image = cat(3, r_stretched, g_stretched, b_stretched);
% Convert RGB to HSI
hsi_image = rgb2hsi(rgb_stretched_Image);
intensity = hsi_image(:, :, 3);
figure()
[counts , binLocations] = imhist(intensity);
imhist(intensity);
hist = counts;
% the clipping limit is computed based on the mean value of the histogram
Tc = mean(hist);
% histogram clipping
length_hist = length(hist);
clipped_hist = zeros(1,length_hist);
for hist_id = 1:length_hist
if hist(hist_id)<Tc
disp('<Tc')
disp(hist(hist_id));
clipped_hist(hist_id) = hist(hist_id);
continue
end
disp('>Tc')
clipped_hist(hist_id) = Tc;
end
% the corresponding PDF (p(i))
% this is where I just used sum(clipped_hist) instead of M
Pi = clipped_hist / sum(clipped_hist);
%CDF
Ci = sum(Pi(1:L));
Alpha = Ci;
%Weighted Histogram Distribution function
Pmin = min(Pi);
Pmax = max(Pi);
Pwi = Pmax * power((Pi-Pmin)/(Pmax-Pmin),Alpha);
%weighted PDF sum
intensity_max = max(max(intensity*L));
Sum_Pwi = sum(Pwi(1:intensity_max));
% weighted CDF
Cwi = sum(Pwi(1:intensity_max)/Sum_Pwi);
%gamma
gamma = 1 - Cwi;
%Transformed pixel intensity
tpi = round(power(intensity/intensity_max,gamma));
Since, as I said, the CDF output is always 1, the enhanced image is always a completely white image. And as I read this formula, it must always produce 1 as its output. Am I missing something here?
And am I right about the value of M?
I need to take the histogram of each split image, and I want to calculate the mean and variance of each split image. I am getting an error while calculating the mean value. Please guide me.
[h w c] = size(x);
numSplits = 3; % number of splits
sw = floor(w/numSplits); % width of each split
widths = repmat(sw, 1, numSplits-1);
widths(numSplits) = w - sum(widths);
splits = mat2cell(x, h, widths, c);
% show the splits
for ii=1:numSplits
subplot(1,numSplits,ii);
imshow(splits{ii});
g(ii)=(splits{ii});
figure, imhist(g(ii));
end
%mean
im1=g(ii);
su=mean2(im1);
mean=ceil(su);
disp('mean Value');
disp(mean)
%variance
sv=double(im1);
v = var(sv);
disp(v)
I need to get the histogram of each separate image, and I need to calculate the mean for those split images.
I assume that x is the image you want to split and analyze and that it is the image you have linked:
First, I load the image (you have already done this, I guess, so you do not need to copy this):
x = imread('https://i.stack.imgur.com/4PAaI.png');
The following code solves your coding errors:
[h, w, c] = size(x);
numSplits = 3; % number of splits
sw = floor(w/numSplits); % width of each split
widths = repmat(sw, 1, numSplits-1);
widths(numSplits) = w - sum(widths);
splits = mat2cell(x, h, widths, c);
results = repmat(struct('mean',[], 'variance',[]),numSplits, 1);
% show and analyze the splits
for ii=1:numSplits
subplot(2,numSplits,ii);
imshow(splits{ii});
subplot(2,numSplits, ii+numSplits);
imhist(splits{ii});
results(ii).mean = mean2(splits{ii});
results(ii).variance = (std2(splits{ii})).^2;
end
The mean and variance are stored in results:
>> results(1)
ans =
struct with fields:
mean: 118.0233
variance: 1.3693e+03
>> results(2)
ans =
struct with fields:
mean: 126.1719
variance: 1.9608e+03
>> results(3)
ans =
struct with fields:
mean: 121.9004
variance: 958.3740
However, please double-check that you really want to compute the stats for the combined color channels of the images and not only for a single channel, for example the red one.
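If you do want per-channel statistics, a minimal sketch would look like this (it reuses splits and results from the code above and picks the red channel; the field names meanRed and varianceRed are just examples):

for ii = 1:numSplits
    redPart = splits{ii}(:,:,1);              % channel 1 = red
    results(ii).meanRed = mean2(redPart);
    results(ii).varianceRed = std2(redPart)^2;
end

The same pattern works for the green (:,:,2) and blue (:,:,3) channels.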
I have the following code. Instead of loading one image at a time, I'd like to run it on every image in a folder (the defective folder in this code), and I'd like the output to be an array containing the values of 'G' for each of the input images. I'm not too sure how to go about this, so any pointers are appreciated. Many thanks!
%PCA code,
img = imread('C:\users\m7-miller\desktop\250images\defective\inkblob01.png');
img_gray = rgb2gray(img);
img_gray_double = im2double(img_gray);
figure,
set(gcf,'numbertitle','off','name','Grayscale Image'),
imshow(img_gray_double)
%find mean of the image
img_mean = mean(img_gray_double);
[m n] = size(img_gray);
%Replicate the row of column means to match the image size
new_mean = repmat(img_mean,m,1);
%Mean corrected image
Corrected_img = img_gray_double - new_mean;
%Covariance matrix of corrected image
cov_img = cov(Corrected_img);
%Eigenvalues of covariance matrix - columns of V are e-vectors,
%diagonals of D e-values
[V, D] = eig(cov_img);
V_T = transpose(V);
Corrected_image_T = transpose(Corrected_img);
FinalData = V_T * Corrected_image_T;
% Image approximation by choosing only a selection of principal components
PCs = 3;
PCs = n - PCs;
Reduced_V = V;
for i = 1:PCs,
Reduced_V(:,1) =[];
end
Y=Reduced_V'* Corrected_image_T;
Compressed_img = Reduced_V*Y;
Compressed_img = Compressed_img' + new_mean;
figure,
set(gcf,'numbertitle','off','name','Compressed Image'),
imshow(Compressed_img)
% End of image compression
% Difference of original image and compressed
S = (img_gray_double - Compressed_img);
figure,
set(gcf,'numbertitle','off','name','Difference'),
imshow(S)
% Sum of the differences
F = sum(S);
G = sum(F)
Are you looking for the dir command?
files = dir('*.png');
for n = 1:numel(files)
    filename = files(n).name;
    img = imread(filename);
    % ... your processing from the question goes here ...
    G(n) = sum(F); % collect one value of G per image
end
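If the PNG files are in the defective folder from your question rather than in the current directory, you can build the paths with fullfile; here is a sketch (the folder path is copied from your code, and the PCA steps are left out):

folder = 'C:\users\m7-miller\desktop\250images\defective';
files = dir(fullfile(folder, '*.png'));
G = zeros(1, numel(files)); % preallocate: one value of G per image
for n = 1:numel(files)
    img = imread(fullfile(folder, files(n).name));
    % ... run the PCA / compression steps from the question on img ...
    % then store the result for this image, e.g. G(n) = sum(F);
end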
I am estimating the ridge orientation of a fingerprint image by dividing it into blocks of 41*41. The image is of size 240*320. Here is my code; the problem is that I am getting an output image size different from the input image size.
% matlab code for orientation
im =imread('D:\project\116_2_5.jpg');
im = double(im);
[m,n] = size(im);
% to normalise image
nor = im - mean(im(:));
im = nor/std(nor(:));
w = 41;
% To calculate x and y gradient component using 3*3 sobel mask
[delx,dely] = gradient(im);
% Ridge orientation
for i=21:w:240-41
for j=21:w:320-41
A = delx(i-20:i+20,j-20:j+20);
B = dely(i-20:i+20,j-20:j+20);
Gxy = sum(sum(A.*B));
Gxx = sum(sum(A.*A));
Gyy = sum(sum(B.*B));
diff = Gxx-Gyy;
theta(i-20:i+20,j-20:j+20) = (pi/2) + 0.5*atan2(2*Gxy,diff);
end;
end;
But in this process I am losing the pixels at the boundaries: to avoid the "index exceeds matrix dimensions" error, the size of theta becomes m = 240-41 = 199 and n = 320-41 = 279. Thus my input image is 240*320 while the output image is 199*279. How can I get an output image of the same size as the input image?
One more thing: I don't have to use the "blockproc" function. Thanks in advance.
You can use padarray to add zeros onto your matrix:
A1 = padarray(A,[6 8],'post'); % 240+6=41*6, 320+8=41*8
B1 = padarray(B,[6 8],'post');
then generate Gxx, Gyy, and Gxy with A1 and B1.
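For example, applied to the gradient images from the question (just a sketch: it assumes the 240*320 size and the delx/dely arrays from the code above, with delx_p/dely_p as the padded copies):

% Pad the gradients so both dimensions are exact multiples of the 41-pixel block.
delx_p = padarray(delx, [6 8], 'post'); % 246 = 41*6 rows
dely_p = padarray(dely, [6 8], 'post'); % 328 = 41*8 columns
theta = zeros(246, 328);
for i = 21:41:246-20
    for j = 21:41:328-20
        A = delx_p(i-20:i+20, j-20:j+20);
        B = dely_p(i-20:i+20, j-20:j+20);
        Gxy = sum(sum(A.*B));
        Gxx = sum(sum(A.*A));
        Gyy = sum(sum(B.*B));
        theta(i-20:i+20, j-20:j+20) = pi/2 + 0.5*atan2(2*Gxy, Gxx-Gyy);
    end
end
theta = theta(1:240, 1:320); % crop back to the original image size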
Method 2:
In addition, I tried to simplify your code a little by removing the loops, for your reference:
% Ridge orientation
Gxy = delx .* dely;
Gxx = delx .* delx;
Gyy = dely .* dely;
fun = @(block) sum(block.data(:)) * ones(size(block.data)); % blockproc passes a block struct
theta_Gxy = blockproc(Gxy,[41 41],fun, 'PadPartialBlocks', true);
theta_diff = blockproc(Gxx-Gyy,[41 41],fun, 'PadPartialBlocks', true);
theta0 = pi/2 + 0.5 * atan2(2 * theta_Gxy, theta_diff);
theta = theta0(1:240, 1:320);
You may check blockproc for more details.
I have implemented a Gabor filter but don't know how to convolve it with the input image so as to get the desired result. My input image is of size 240*320 and I am dividing it into blocks of 17*17, so there are 14*18 = 252 blocks in total. I am also not sure what the size of the Gabor filter should be, and whether I need to convolve a separate filter (14*18 = 252 filters) with each block or with the whole image.
Here is my code:
% compute Gabor filter
sigma_x = 4;
sigma_y = 4;
w = 17; % size of block
wg = 5; % size of filter
ww = floor(w/2); % half block size (8 for a 17x17 block)
Gabor = zeros(size(im));
for i = 9:w:240-w
for j = 9:w:320-w
pim = im(i-ww:i+ww,j-ww:j+ww); % block of input image
f = F(i,j); % each block contains a single value of frequency and theta
O = theta(i,j);
[x,y] = meshgrid(-floor(wg/2):floor(wg/2));
x_phi = x.*cos(O) + y.*sin(O);
y_phi = -x.*sin(O) + y.*cos(O);
x_val = ((x_phi).^2)./(sigma_x.^2);
y_val = ((y_phi).^2)./(sigma_y.^2);
h = exp(-0.5*(x_val+y_val)).*cos((2*pi*f).*(x_phi));
Gabor(i-ww:i+ww,j-ww:j+ww) = conv2(pim,h,'same');
end;
end;
The output image should be the fingerprint, and it should not contain white dots on the black fingerprint lines.