I recently implemented a box average filter in MATLAB, which was a fun exercise. I wanted to do the same thing with a Gaussian blur filter, with the eventual goal of solving some super-resolution problems, but I have found it much more challenging. There are several questions on here about this topic, and the answers in those threads have been very helpful. However, I have not been able to create the kernel successfully, and currently my code simply implements an averaging filter. Wiki page for Gaussian blur: https://en.wikipedia.org/wiki/Gaussian_blur
Here is my code:
function out_im = GaussianBlur(in_im, sigma, N) % input image, sigma, number of iterations
[rows, cols] = size(in_im);
orig_im = in_im;
Filter = zeros(rows, cols);
w2 = 0;
for k = 1 : N
    for LR_ROWS = 2 : 1 : rows - 1
        for LR_COLS = 2 : 1 : cols - 1
            for i = -1 : 1
                for j = -1 : 1
                    w1 = exp(-(orig_im(LR_ROWS, LR_COLS).^2/(2*sigma^2)));
                    w2 = w2 + w1;
                    Filter(LR_ROWS, LR_COLS) = Filter(LR_ROWS, LR_COLS) + ...
                        w1*orig_im(LR_ROWS+i, LR_COLS+j);
                end
            end
            Filter(LR_ROWS, LR_COLS) = Filter(LR_ROWS, LR_COLS)./w2;
            w2 = 0;
        end
    end
    orig_im = Filter;
    Filter = zeros(rows, cols);
end
out_im = orig_im;
end
Any suggestions or advice would be greatly appreciated! Thank you.
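For reference, here is a minimal sketch of the usual construction (GaussianBlurSketch is just an illustrative name, not a drop-in for the code above). The weight w1 should depend on the spatial offsets i and j, not on the centre pixel's intensity; because the code above gives the same w1 to all nine taps of a neighbourhood, normalization turns it into a 1/9 box average, which matches what you observed.
function out_im = GaussianBlurSketch(in_im, sigma, N)
% Sketch: the Gaussian weight depends only on the spatial offsets (i, j),
% never on the pixel values, so the 3x3 kernel is precomputed once.
[j, i] = meshgrid(-1:1, -1:1);            % neighbourhood offsets
kernel = exp(-(i.^2 + j.^2) ./ (2*sigma^2));
kernel = kernel ./ sum(kernel(:));        % normalize so the weights sum to 1
[rows, cols] = size(in_im);
out_im = in_im;
for k = 1:N
    prev = out_im;
    for r = 2:rows-1
        for c = 2:cols-1
            patch = prev(r-1:r+1, c-1:c+1);
            out_im(r, c) = sum(sum(kernel .* patch)); % weighted average of the 3x3 patch
        end
    end
end
end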
Related
I am trying to solve the fifth-order differential equation F'''''(eta) = (1 - F)/F^3 on the interval [-eta0, eta0].
In order to do so, there is a mathematical part consisting of determining the boundary conditions of this problem.
More precisely, the first BC is formulated differently, since we need an extra condition in order to "close" the system.
We choose to linearize about F = 1 and solve the resulting problem f''''' = -f asymptotically. Its characteristic equation has 5 roots; 2 of these roots are growing exponentials that the extra condition allows us to eliminate.
We deduce two coupled BC at eta = eta0, built from the complex roots alpha = exp(3i*pi/5) and beta = exp(7i*pi/5) that appear in the code below.
In addition to the BC: F'''(-eta0) = F''''(-eta0) = 0
And: F(0) = 1
I have tried to implement this problem using MATLAB's bvp5c, but I haven't succeeded. Here is my code:
eta0 = 20;
etamesh = [linspace(-eta0,0,100), linspace(0,eta0,100)]; % repeated 0 marks the interface
Finit = [1; 0.5; 1; 0; 0];
solinit = bvpinit(etamesh, Finit);
sol = bvp5c(@fun_ode, @bc, solinit);
figure()
plot(sol.x, sol.y(1,:), 'linewidth', 1.5) % plotting F
ylim([-2 2]);
hold on
plot(sol.x, sol.y(3,:), 'linewidth', 1.5) % plotting F''
plot(sol.x, sol.y(5,:), 'linewidth', 1.5) % plotting F''''
grid on
legend("F", "F''", "F''''", 'location', 'northwest')
function dFdeta = fun_ode(eta, F, region)
dFdeta = [F(2); F(3); F(4); F(5); (1-F(1))/F(1)^3]; % F''''' = (1-F)/F^3
end
% Boundary conditions:
% F'''(-eta0) = F''''(-eta0) = 0
% F(0) = 1
% plus the two coupled BC at eta = eta0 that eliminate the growing exponentials
function res = bc(FL, FR)
alpha = cos(3*pi/5) + 1i*sin(3*pi/5); % complex roots of the characteristic equation
beta = cos(7*pi/5) + 1i*sin(7*pi/5);
gamma = abs(alpha);
res = [FL(4,1); FL(5,1); ...           % F'''(-eta0) = F''''(-eta0) = 0
    FR(1,1) - 1; ...                   % F(0) = 1
    FR(1,1)-FL(1,2); FR(2,1)-FL(2,2); FR(3,1)-FL(3,2); FR(4,1)-FL(4,2); FR(5,1)-FL(5,2); ... % continuity at eta = 0
    (1-(beta+alpha))*FR(3,2) + (-(alpha+beta)+gamma^2)*FR(2,2); (-(beta+alpha)+gamma^2)*FR(3,2) + (FR(2,2))*gamma^2]; % coupled BC at +eta0
end
I am mainly struggling with the implementation of the two coupled BC written above and the continuity condition for F(0)...
I have commented my code, and I would be grateful if someone could take some time to look for potential errors or provide some advice.
Thank you very much.
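In case it helps with the multipoint indexing: with a mesh that repeats the interface point (as etamesh does at 0), bvp5c, like bvp4c, passes FL and FR as 5-by-2 arrays whose column k holds the solution at the left and right end of region k, so FR(:,1) and FL(:,2) both refer to eta = 0, and the residual must contain 5 x 2 = 10 entries. A minimal sketch of that layout follows; the two conditions at +eta0 are generic placeholders here, not your coupled BC.
function res = bc_sketch(FL, FR)
% Column 1 = region [-eta0, 0], column 2 = region [0, eta0].
res = [FL(4,1); FL(5,1); ...  % F'''(-eta0) = 0 and F''''(-eta0) = 0
    FR(1,1) - 1; ...          % F(0) = 1, imposed at the right end of region 1
    FR(:,1) - FL(:,2); ...    % continuity of F ... F'''' across eta = 0 (5 residuals)
    FR(2,2); FR(3,2)];        % placeholders for the two coupled BC at +eta0
end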
I have an implementation of a convolution neural network in MATLAB (from the open source DeepLearnToolbox). The following code finds the convolution of different weights and parameters:
z = z + convn(net.layers{l - 1}.a{i}, net.layers{l}.k{i}{j}, 'valid');
To update the tool, I have implemented my own fixed-point convolution using the following code:
function result = convolution(image, kernal)
% find dimensions of output
row = size(image,1) - size(kernal,1) + 1;
col = size(image,2) - size(kernal,2) + 1;
zdim = size(image,3);
% create output matrix
output = zeros(row, col, zdim);
% flip the kernal
kernal_flipped = fliplr(flipud(kernal));
% find rows and cols of kernal for loop iteration
row_ker = size(kernal_flipped,1);
col_ker = size(kernal_flipped,2);
for k = 1 : zdim
    for i = 0 : row-1
        for j = 0 : col-1
            sum = fi(0,1,8,7); % fixed-point accumulator
            for k_row = 1 : row_ker
                for k_col = 1 : col_ker
                    a = image(k_row+i, k_col+j, k);
                    b = kernal_flipped(k_row, k_col);
                    prod = a * b;
                    % convert to fixed point
                    prod = fi((prod/16384), 1, 8, 7);
                    sum = fi((sum + prod), 1, 8, 7);
                end
            end
            output(i+1, j+1, k) = sum;
        end
    end
end
result = output;
end
The problem is that when I use my convolution implementation in the bigger application, it is super slow.
Any suggestions how to improve its execution time?
MATLAB doesn't support fixed-point 2D convolution, but knowing that convolution can be written as matrix multiplication, and that MATLAB does support fixed-point matrix multiplication, you can use im2col to convert the image into column format and multiply it by the kernel to convolve them.
row = size(image,1) - size(kernal,1) + 1;
col = size(image,2) - size(kernal,2) + 1;
zdim = size(image,3);
output = zeros(row, col, zdim);
kernal_flipped = fliplr(flipud(kernal));
fi_kernel = fi(kernal_flipped(:).', 1, 8, 7) / 16384;
sz = size(kernal_flipped);
sz_img = size(image);
% Use the generated indexes to convert the image into column format
idx_col = im2col(reshape(1:numel(image)/zdim, sz_img(1:2)), sz, 'sliding');
image = reshape(image, [], zdim);
for k = 1:zdim
    output(:,:,k) = reshape(double(fi_kernel * reshape(image(idx_col,k), size(idx_col))), row, col);
end
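To see the trick in isolation, here is a small double-precision sketch (no fixed point; A and K are arbitrary test inputs, and im2col ships with the Image Processing Toolbox): each sliding window becomes one column, so a single vector-matrix product evaluates every window at once.
A = magic(5);            % any small test image
K = [1 2; 3 4];          % any small test kernel
Kf = fliplr(flipud(K));  % flip so the dot product performs convolution
cols = im2col(A, size(Kf), 'sliding');               % each sliding window as a column
C = reshape(Kf(:).' * cols, size(A) - size(Kf) + 1); % windows are ordered column-major
isequal(C, conv2(A, K, 'valid'))                     % returns true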
I've been struggling with this for quite some time now. I can't seem to figure out why I have a percentage error in the thousands. I'm trying to train a perceptron separating X1 and X2, which are Gaussian-distributed data sets with distinct means and identical covariances. My code:
N=200;
X = [X1; X2];
X = [X ones(N,1)]; %bias
y = [-1*ones(N/2,1); ones(N/2,1)]; %classification
%Split data into training and test
ii = randperm(N);
Xtr = X(ii(1:N/2),:);
ytr = X(ii(1:N/2),:);
Xts = X(ii(N/2+1:N),:);
yts = y(ii(N/2+1:N),:);
w = randn(3,1);
eta = 0.001;
%learn from training set
for iter = 1:500
    j = ceil(rand*N/2);
    if( ytr(j)*Xtr(j,:)*w < 0)
        w = w + eta*Xtr(j,:)';
    end
end
%apply what you have learnt to test set
yhts = Xts * w;
disp([yts yhts])
PercentageError = 100*sum(find(yts .*yhts < 0))/Nts;
Any help would be appreciated. Thank you
You have a bug in your error calculation.
On this line:
PercentageError = 100*sum(find(yts .*yhts < 0))/Nts;
The find is returning indices of the matching items. For your accuracy measure you don't want those, you just want the count:
PercentageError = 100*sum( yts .*yhts < 0 )/Nts;
If I generate X1 = randn(100,2); X2 = randn(100,2); and assume Nts=100, I get 2808% for your code, and the expected 50% error (no better than guessing, because my test data cannot be separated) for the corrected version.
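A toy example makes the difference clear (agree here is a stand-in for yts .* yhts):
agree = [-1 2 -3 4 -5];   % negative entries mark misclassified points
sum(find(agree < 0))      % sums the indices 1, 3 and 5 -> 9
sum(agree < 0)            % counts the misclassifications -> 3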
Update - the perceptron model had a more subtle bug too, see: https://datascience.stackexchange.com/questions/2353/matlab-perceptron
I am trying to implement image compression using the Burrows-Wheeler transform. Consider a 1D matrix obtained from path scanning:
p = [2 5 4 2 3 1 5];
and then apply the Burrows-Wheeler transform:
function output = bwtenc(p)
n = numel(p);
% rows of x are the n left cyclic shifts of p
left_cyclic = mod(bsxfun(@plus, 1:n, (0:n-1).') - 1, n) + 1;
x = p(left_cyclic);
lex = sortrows(x);   % sort the rotations lexicographically
output = lex(:,end); % BWT output is the last column
output = uint8(output(:)');
end
And it works! But the problem is that when I try it on the 1D matrix from Lena.bmp, whose size is 512*512, an error message shows that bsxfun is out of memory. Can anyone please help me?
See if this works for you -
function output = bwtenc(p)
np = numel(p);
ord = (1:np).'; % starting index of each cyclic rotation
% Stable radix-style sort of the rotations, last symbol first, so the
% full np-by-np rotation matrix is never built (O(np) memory).
for k = np:-1:1
    [~, ix] = sort(p(mod(ord + k - 2, np) + 1)); % k-th symbol of each rotation; sort is stable
    ord = ord(ix);
end
output = p(mod(ord - 2, np) + 1); % last symbol of the rotation starting at s is p(s-1)
output = uint8(output(:)');
end
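Applied to the example from the question, both the sortrows version and this one return the last column of the lexicographically sorted rotations:
p = [2 5 4 2 3 1 5];
bwtenc(p) % -> uint8 row vector [3 4 5 2 5 1 2]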
I implemented a method for removing shadows based on invariant color features, found in the paper Entropy Minimization for Shadow Removal. My implementation sometimes yields computational results similar to theirs, but the results are always off, and my grayscale image is blocky, possibly as a result of incorrectly taking the geometric mean.
Here is an example plot of the information potential from the horse image in the paper, as well as my invariant image. Multiply the x-axis by 3 to get theta (which goes from 0 to 180):
And here is the grayscale image my code outputs for the correct maximum theta (mine is off by 10):
You can see the blockiness that their image doesn't have:
Here is their information potential:
When dividing by the geometric mean, I have tried using NaN and thresholding the image so the smallest possible value is .01, but neither seems to change my output.
Here is my code:
I = im2double(imread(strname));
[m,n,d] = size(I);
I = max(I, .01);
chrom = zeros(m, n, 3, 'double');
for i = 1:m
    for j = 1:n
        % if ((I(i,j,1)*I(i,j,2)*I(i,j,3)) ~= 0)
        chrom(i,j,1) = I(i,j,1)/((I(i,j,1)*I(i,j,2)*I(i,j,3))^(1/3));
        chrom(i,j,2) = I(i,j,2)/((I(i,j,1)*I(i,j,2)*I(i,j,3))^(1/3));
        chrom(i,j,3) = I(i,j,3)/((I(i,j,1)*I(i,j,2)*I(i,j,3))^(1/3));
        % else
        %     chrom(i,j,1) = 1;
        %     chrom(i,j,2) = 1;
        %     chrom(i,j,3) = 1;
        % end
    end
end
p1 = mat2gray(log(chrom(:,:,1)));
p2 = mat2gray(log(chrom(:,:,2)));
p3 = mat2gray(log(chrom(:,:,3)));
X1 = mat2gray(p1*1/(sqrt(2)) - p2*1/(sqrt(2)));
X2 = mat2gray(p1*1/(sqrt(6)) + p2*1/(sqrt(6)) - p3*2/(sqrt(6)));
maxinf = 0;
maxtheta = 0;
data2 = zeros(1, 61);
for theta = 0:3:180
    M = X1*cos(theta*pi/180) - X2*sin(theta*pi/180);
    s = sqrt(std2(X1)^(2)*cos(theta*pi/180) + std2(X2)^(2)*sin(theta*pi/180));
    s = abs(1.06*s*((m*n)^(-1/5)));
    [m, n] = size(M);
    length = m*n;
    sources = zeros(1, length, 'double');
    count = 1;
    for x = 1:m
        for y = 1:n
            sources(1, count) = M(x, y);
            count = count + 1;
        end
    end
    weights = ones(1, length);
    sigma = 2*s;
    [xc, Ak] = fgt_model(sources, weights, sigma, 10, sqrt(length), 6);
    sum1 = sum(fgt_predict(sources, xc, Ak, sigma, 10));
    sum1 = sum1/sqrt(2*pi*2*s*s);
    data2(theta/3 + 1) = sum1;
    if (sum1 > maxinf)
        maxinf = sum1;
        maxtheta = theta;
    end
end
InvariantImage2 = cos(maxtheta*pi/180)*X1 + sin(maxtheta*pi/180)*X2;
Assume the Fast Gauss Transform is correct.
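As an aside, the per-pixel chromaticity loop at the top can be replaced by a vectorized equivalent; a two-line sketch, assuming the same max(I, .01) floor so the geometric mean stays positive:
gm = (I(:,:,1) .* I(:,:,2) .* I(:,:,3)).^(1/3); % per-pixel geometric mean
chrom = bsxfun(@rdivide, I, gm);                % divide every channel by it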
I don't know whether this makes any difference now that more than a month has passed, but the blockiness and the different information potential plot are simply caused by compression of the image you used. You can't expect to get the same results with this image as they did, because they used a raw, high-resolution, uncompressed version of it. I have to say I am fairly impressed with your results, especially with implementing the information potential. That thing went over my head a little.
John.