Why does filter2D in OpenCV give different results than imfilter in Matlab?

I have an original image:
I then read it, create a PSF, and blur it in Matlab:
lenawords1=imread('lenawords.bmp');
%create PSF
sigma=6;
PSFgauss=fspecial('gaussian', 8*sigma+1, sigma);
%blur it
lenablur1=imfilter(lenawords1, PSFgauss, 'conv');
lenablurgray1=mat2gray(lenablur1);
PSFgauss1 = PSFgauss/max(PSFgauss(:));
and I saved the blurred image and the PSF:
imwrite(lenablurgray1, 'lenablur.bmp');
imwrite(PSFgauss1, 'PSFgauss.bmp');
Their values in Matlab and OpenCV match.
Matlab:
disp(lenablurgray1(91:93, 71:75)*256)
142.2222 147.9111 153.6000 159.2889 164.9778
153.6000 164.9778 170.6667 176.3556 176.3556
164.9778 176.3556 182.0444 187.7333 187.7333
disp(PSFgauss1(24:26, 24:26)*256)
248.9867 252.4690 248.9867
252.4690 256.0000 252.4690
248.9867 252.4690 248.9867
OpenCV:
Mat img = imread("lenablur.bmp");
cvtColor(img, img, cv::COLOR_BGR2GRAY);
cv::Mat kernel = imread("PSFgauss.bmp");
cvtColor(kernel, kernel, cv::COLOR_BGR2GRAY);
for (int r = 90; r < 93; r++) {
    for (int c = 70; c < 75; c++) {
        cout << (int)img.at<uchar>(r, c) << " ";
    }
    cout << endl;
}
142 147 153 159 164
153 164 ...
164 ...
cout << "PSF" << endl;
for (int r = 23; r < 26; r++) {
for (int c = 23; c < 26; c++) {
cout << (int)kernel.at<uchar>(r, c) << " ";
}
cout << endl;
}
248 251 248
251 255 251
248 251 248
However, the values from filter2D in OpenCV and imfilter in Matlab do not match:
Matlab:
conv1=imfilter(lenablurgray1, PSFgauss1, 'conv');
disp(conv1(91:93, 71:75))
91.8094 96.1109 99.8904 103.1280 105.8210
97.3049 101.7757 105.6828 109.0073 111.7486
102.0122 106.5953 110.5755 113.9353 116.6769
OpenCV:
Mat conv1;
filter2D(img, conv1, img.depth(), kernel, Point(-1, -1), 0, BORDER_REFLECT);
for (int r = 90; r < 93; r++) {
    for (int c = 70; c < 75; c++) {
        cout << (int)conv1.at<uchar>(r, c) << " ";
    }
    cout << endl;
}
255 255 255 255 255
255 255 255 255 255
255 255 255 255 255
Why are the filter2D values wrong?
EDIT2:
cv::Mat kernel = imread("PSFgauss.bmp");
cvtColor(kernel, kernel, cv::COLOR_BGR2GRAY);
kernel.convertTo(kernel, CV_64F);
cv::Scalar kernelsum = cv::sum(kernel);
divide(kernel, kernelsum, kernel);
filter2D(img, conv1, img.depth(), kernel, Point(-1, -1), 0, BORDER_REFLECT);
for (int r = 90; r < 93; r++) {
    for (int c = 70; c < 75; c++) {
        cout << (int)conv1.at<uchar>(r, c) << " ";
    }
    cout << endl;
}
gives
103 108 112 116 119
109 ..
115 ..
which matches the Matlab values of conv1 when multiplied by the factor 1.133
disp(conv1(91:93, 71:75) * 1.133)
104.0201 108.8937 113.1758 116.8441 119.8952
110.2464 115.3118 119.7386 123.5053 126.6112
115.5798 120.7725 125.2820 129.0887 132.1950
However, the values differ when I divide img by conv1:
Matlab:
conv2 = lenablurgray1./conv1
disp(conv2(91:93, 71:75))
0.0061 0.0060 0.0060 0.0060 0.0061
0.0062 0.0063 0.0063 0.0063 0.0062
0.0063 0.0065 0.0064 0.0064 0.0063
OpenCV:
Mat conv2;
divide(img, conv1, conv2);
for (int r = 90; r < 93; r++) {
    for (int c = 70; c < 75; c++) {
        cout << (int)conv2.at<uchar>(r, c) << " ";
    }
    cout << endl;
}
1 1 1 1 1
1 1 ...
1 ...
Why is this?

When you do
lenablur1 = imfilter(lenawords1, PSFgauss, 'conv');
in MATLAB, PSFgauss is normalized. That means that its values sum up to 1:
sum(PSFgauss(:)) == 1.0 % or at least it should be very close
Next, you scale it so its maximum value is 1, so that you can save it as a BMP file. This additionally causes rounding of the values to 256 distinct integers.
Then, in OpenCV, you read in the kernel using imread("PSFgauss.bmp"), and convert back to a grey-value image. This results in a kernel that has integer values in the range [0,255]. In particular, it is not normalized.
What happens then in the convolution is that you multiply each kernel element by an image pixel, and sum up all the values to produce one output value. If the kernel is normalized, this amounts to a weighted averaging. If the kernel is not normalized, the mean image intensity will not be preserved. Since the kernel here has values much larger than it originally had, the output values will be much larger than those of the input image. Because the input image is an 8-bit unsigned integer, and OpenCV uses saturated addition, the operation results in the value 255 for every pixel.
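A quick MATLAB check, reusing the variables from the question, makes the scale difference visible:
sum(PSFgauss(:))                % ~1.0 : the kernel imfilter used is normalized
sum(round(PSFgauss1(:)*255))    % tens of thousands: the kernel OpenCV reads back from the BMP
uint8(1000)                     % ans = 255 : 8-bit unsigned arithmetic saturates, just like OpenCV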
In mathematical notation, in MATLAB you do
g = f * k
(* is convolution, f is the image, k is the kernel). In OpenCV you do
g' = f * Ck
(where C is a constant approximately equal to 255/max(PSFgauss(:)), which is the factor by which the kernel was multiplied in the transition from MATLAB to OpenCV).
Thus, dividing the kernel by C should bring it back to the state it was in when you used it for convolution in MATLAB. Note, however, that the rounding introduced by saving to BMP cannot be undone.
Since the original kernel sums to 1, the sum of the kernel you load in OpenCV is approximately C. The simplest fix is therefore to divide the kernel by its own sum:
kernel.convertTo(kernel, CV_64F);
kernel /= cv::sum(kernel)[0];

Related

2-D DCT image compression in Matlab

Problem:
I tried implementing Discrete Cosine Transform (DCT) compression in Matlab.
The input image is a JPG image (Lena) of size 512 x 512.
There are two stages, namely compression and decompression.
Compression and Quantization:
The input image is converted to YCbCr and the Y component is taken up for compression. The DCT coefficients are then quantized.
Dequantization and Decompression:
The quantized image undergoes dequantization for decompression.
Issues:
Rectangular boxes are spotted in the decompressed version of the image. Is anything wrong with the code?
For reference, below are the sample input and output images, followed by the Matlab code.
Input image:
Y Component in YCbCr:
Output image:
Code:
clc;
clear all;
close all;
I = imread('lena512.jpg');
figure, imshow(I);
% Y = I;
YCbCr = rgb2ycbcr(I);
figure, imshow(YCbCr);
Y = YCbCr(:,:, 1);
figure, imshow(Y);
[h, w] = size(Y);
r = h/8;
c = w/8;
s = 1;
q50 = [16 11 10 16 24 40 51 61;
12 12 14 19 26 58 60 55;
14 13 16 24 40 57 69 56;
14 17 22 29 51 87 80 62;
18 22 37 56 68 109 103 77;
24 35 55 64 81 104 113 92;
49 64 78 87 103 121 120 101;
72 92 95 98 112 100 103 99];
% COMPRESSION
for i=1:r
    e = 1;
    for j=1:c
        block = Y(s:s+7,e:e+7);
        cent = double(block) - 128;
        for m=1:8
            for n=1:8
                if m == 1
                    u = 1/sqrt(8);
                else
                    u = sqrt(2/8);
                end
                if n == 1
                    v = 1/sqrt(8);
                else
                    v = sqrt(2/8);
                end
                comp = 0;
                for x=1:8
                    for y=1:8
                        comp = comp + cent(x, y)*(cos((((2*(x-1))+1)*(m-1)*pi)/16))*(cos((((2*(y-1))+1)*(n-1)*pi)/16));
                    end
                end
                F(m, n) = v*u*comp;
            end
        end
        for x=1:8
            for y=1:8
                cq(x, y) = round(F(x, y)/q50(x, y));
            end
        end
        Q(s:s+7,e:e+7) = cq;
        e = e + 8;
    end
    s = s + 8;
end
% % % % % % % % % % % % % % %
% % DECOMPRESSION
% % % % % % %
s = 1;
for i=1:r
    e = 1;
    for j=1:c
        cq = Q(s:s+7,e:e+7);
        for x=1:8
            for y=1:8
                DQ(x, y) = q50(x, y)*cq(x, y);
            end
        end
        for m=1:8
            for n=1:8
                if m == 1
                    u = 1/sqrt(8);
                else
                    u = sqrt(2/8);
                end
                if n == 1
                    v = 1/sqrt(8);
                else
                    v = sqrt(2/8);
                end
                comp = 0;
                for x=1:8
                    for y=1:8
                        comp = comp + u*v*DQ(x, y)*(cos((((2*(x-1))+1)*(m-1)*pi)/16))*(cos((((2*(y-1))+1)*(n-1)*pi)/16));
                    end
                end
                bf(m, n) = round(comp)+128;
            end
        end
        Org(s:s+7,e:e+7) = bf;
        e = e + 8;
    end
    s = s + 8;
end
imwrite(Y, 'F:\workouts\phd\jpeg\input.jpg');
imwrite(uint8(Org), 'F:\workouts\phd\jpeg\output.jpg');
return;
Can you suggest where the error is? It would be very helpful.
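For reference, the per-block computation above corresponds to Matlab's built-in dct2/idct2; here is a minimal sketch (assuming the Image Processing Toolbox) for cross-checking a single 8x8 block against the loop version:
block  = double(Y(1:8, 1:8)) - 128;   % same level shift as in the loop
F_ref  = dct2(block);                 % forward 2-D DCT (orthonormal, like the u/v factors above)
cq_ref = round(F_ref ./ q50);         % quantization
DQ_ref = q50 .* cq_ref;               % dequantization
rec    = round(idct2(DQ_ref)) + 128;  % inverse transform and level shift back to [0,255]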

CUDA fft 2d different results from MATLAB fft on 2d

I have tried to do a simple FFT and compare the results between MATLAB and CUDA on 2D arrays.
MATLAB:
an array of the 9 numbers 1-9:
I = [1 2 3
4 5 6
7 8 9];
and use this code:
fft(I)
gives the results:
12.0000 + 0.0000i 15.0000 + 0.0000i 18.0000 + 0.0000i
-4.5000 + 2.5981i -4.5000 + 2.5981i -4.5000 + 2.5981i
-4.5000 - 2.5981i -4.5000 - 2.5981i -4.5000 - 2.5981i
And CUDA code:
int FFT_Test_Function() {
    int width = 3;
    int height = 3;
    int n = width * height;
    double in[width][height];
    Complex out[width][height];
    for (int i = 0; i < width; i++)
    {
        for (int j = 0; j < height; j++)
        {
            in[i][j] = (i * width) + j + 1;
        }
    }
    // Allocate the buffer
    cufftDoubleReal *d_in;
    cufftDoubleComplex *d_out;
    unsigned int out_mem_size = sizeof(cufftDoubleComplex)*n;
    unsigned int in_mem_size = sizeof(cufftDoubleReal)*n;
    cudaMalloc((void **)&d_in, in_mem_size);
    cudaMalloc((void **)&d_out, out_mem_size);
    // Save time stamp
    milliseconds timeStart = getCurrentTimeStamp();
    cufftHandle plan;
    cufftResult res = cufftPlan2d(&plan, width, height, CUFFT_D2Z);
    if (res != CUFFT_SUCCESS) { cout << "cufft plan error: " << res << endl; return 1; }
    cudaCheckErrors("cuda malloc fail");
    for (int i = 0; i < width; i++)
    {
        cudaMemcpy(d_in + (i * width), &in[i], height * sizeof(double), cudaMemcpyHostToDevice);
        cudaCheckErrors("cuda memcpy H2D fail");
    }
    cudaCheckErrors("cuda memcpy H2D fail");
    res = cufftExecD2Z(plan, d_in, d_out);
    if (res != CUFFT_SUCCESS) { cout << "cufft exec error: " << res << endl; return 1; }
    for (int i = 0; i < width; i++)
    {
        cudaMemcpy(&out[i], d_out + (i * width), height * sizeof(Complex), cudaMemcpyDeviceToHost);
        cudaCheckErrors("cuda memcpy H2D fail");
    }
    cudaCheckErrors("cuda memcpy D2H fail");
    milliseconds timeEnd = getCurrentTimeStamp();
    milliseconds totalTime = timeEnd - timeStart;
    std::cout << "Total time: " << totalTime.count() << std::endl;
    return 0;
}
With this CUDA code I got the result:
You can see that CUDA gives different results.
What am I missing?
Thank you very much for your attention!
The cuFFT result looks correct, but your MATLAB code computes the wrong transform: fft(I) applies a 1-D FFT to each column of I. For a 2-D transform you need fft2:
octave:1> I = [ 1 2 3; 4 5 6; 7 8 9 ]
I =
1 2 3
4 5 6
7 8 9
octave:2> fft2(I)
ans =
45.00000 + 0.00000i -4.50000 + 2.59808i -4.50000 - 2.59808i
-13.50000 + 7.79423i 0.00000 + 0.00000i 0.00000 + 0.00000i
-13.50000 - 7.79423i 0.00000 - 0.00000i 0.00000 - 0.00000i
Note the use of fft2.
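fft2 is simply the 1-D transform applied along both dimensions, which is why plain fft(I) (a column-wise 1-D FFT) only reproduces part of it. A minimal MATLAB check:
I = [1 2 3; 4 5 6; 7 8 9];
F_cols = fft(I);                     % what the question computed: 1-D FFT of each column
F_2d   = fft(fft(I, [], 1), [], 2);  % 1-D FFT along columns, then along rows
max(abs(F_2d(:) - reshape(fft2(I), [], 1)))  % ~0: identical to fft2 up to round-off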

openCV bicubic imresize creates negative values

I noticed that when downsampling matrices in OpenCV using bicubic interpolation, I get negative values even though the original matrix was all positive.
I attach the following code as an example:
// Declaration of variables
cv::Mat M, MLinear, MCubic;
double minVal, maxVal;
cv::Point minLoc, maxLoc;
// Create random values in M matrix
M = cv::Mat::ones(1000, 1000, CV_64F);
cv::randu(M, cv::Scalar(0), cv::Scalar(1));
minMaxLoc(M, &minVal, &maxVal, &minLoc, &maxLoc);
// Printout smallest value in M
std::cout << "smallest value in M = "<< minVal << std::endl;
// Resize M to quarter area with bicubic interpolation and store in MCubic
cv::resize(M, MCubic, cv::Size(0, 0), 0.5, 0.5, cv::INTER_CUBIC);
// Printout smallest value in MCubic
minMaxLoc(MCubic, &minVal, &maxVal, &minLoc, &maxLoc);
std::cout << "smallest value in MCubic = " << minVal << std::endl;
// Resize M to quarter area with linear interpolation and store in MLinear
cv::resize(M, MLinear, cv::Size(0, 0), 0.5, 0.5, cv::INTER_LINEAR);
// Printout smallest value in MLinear
minMaxLoc(MLinear, &minVal, &maxVal, &minLoc, &maxLoc);
std::cout << "smallest value in MLinear = " << minVal << std::endl;
I don't understand why this happens. I noticed that if I choose random values in [0,100], the smallest value after resizing is typically ~-24, vs. ~-0.24 for the range [0,1] as in the code above.
As a comparison, in Matlab this doesn't occur (I am aware of a slight difference in weighting schemes as appears here: imresize comparison - Matlab/openCV).
Here's a short Matlab code snippet that finds the smallest value over 1000 random downsized matrices (original dimensions of each matrix 1000x1000):
currentMinVal = 1e6;
for k = 1:1000
    x = rand(1000);
    x = imresize(x, 0.5);
    currentMinVal = min(currentMinVal, min(x(:)));
end
As you can see in this answer, the bicubic kernel is not non-negative; therefore, in some cases the negative coefficients may dominate and produce negative outputs.
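To see this concretely, here is a sketch of the Keys bicubic kernel with a = -0.5 (Matlab's choice in imresize; OpenCV's INTER_CUBIC uses a = -0.75), showing the negative lobe:
a = -0.5;   % Keys parameter; OpenCV uses -0.75, which makes the negative lobe deeper
bicubicKernel = @(x) (abs(x) <= 1) .* ((a+2)*abs(x).^3 - (a+3)*abs(x).^2 + 1) + ...
                     (abs(x) > 1 & abs(x) < 2) .* (a*abs(x).^3 - 5*a*abs(x).^2 + 8*a*abs(x) - 4*a);
bicubicKernel(1.5)   % = -0.0625: samples 1 to 2 pixels away get a negative weight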
You should also note that Matlab is using 'Antialiasing' by default, which has an effect on the result:
I = zeros(9);I(5,5)=1;
imresize(I,[5 5],'bicubic') %// with antialiasing
ans =
0 0 0 0 0
0 0.0000 -0.0000 -0.0000 0
0 -0.0000 0.3055 0.0000 0
0 -0.0000 0.0000 0.0000 0
0 0 0 0 0
imresize(I,[5 5],'bicubic','Antialiasing',false) %// without antialiasing
ans =
0 0 0 0 0
0 0.0003 -0.0160 0.0003 0
0 -0.0160 1.0000 -0.0160 0
0 0.0003 -0.0160 0.0003 0
0 0 0 0 0

16-bit CRC-CCITT in Matlab

Given the calcCRC() C function shown below, what is the equivalent Matlab function?
16-bit CRC-CCITT in C:
/*
 * FUNCTION:  calcCRC calculates a 2-byte CRC on serial data using the
 *            CRC-CCITT 16-bit standard maintained by the ITU
 * ARGUMENTS: queue_ptr is a pointer to the queue holding the area to be CRCed
 *            queue_size is the offset into the buffer where to stop the CRC calculation
 * RETURNS:   2-byte CRC
 */
unsigned short calcCRC(QUEUE_TYPE *queue_ptr, unsigned int queue_size) {
    unsigned int i = 0, j = 0;
    unsigned short crc = 0x1D0F; // non-augmented initial value equivalent to augmented initial value 0xFFFF
    for (i = 0; i < queue_size; i += 1) {
        crc ^= peekByte(queue_ptr, i) << 8;
        for (j = 0; j < 8; j += 1) {
            if (crc & 0x8000) crc = (crc << 1) ^ 0x1021;
            else crc = crc << 1;
        }
    }
    return crc;
}
Below is the Matlab code I have come up with that seems to be equivalent but does not output the same results:
(Incorrect) 16-bit CRC-CCITT in Matlab:
function crc_val = crc_ccitt_matlab (message)
    crc = uint16(hex2dec('1D0F'));
    for i = 1:length(message)
        crc = bitxor(crc, bitshift(message(i), 8));
        for j = 1:8
            if (bitand(crc, hex2dec('8000')) > 0)
                crc = bitxor(bitshift(crc, 1), hex2dec('1021'));
            else
                crc = bitshift(crc, 1);
            end
        end
    end
    crc_val = crc;
end
Here is a sample byte array, represented as an integer array:
78 48 32 0 251 0 215 166 201 0 1 255 252 0 1 2 166 255 118 255 19 0 0 0 0 0 0 0 0 0 0 0 0 3 0
The expected output is two bytes base10(44 219) which is base2(00101100 11011011) or base10(11483).
My Matlab function gives base10(85) which is base2(00000000 01010101).
Any ideas on what is causing the output to not be the expected?
You should try bitsll() instead of bitshift(). The former is guaranteed to do the logical shift you want, whereas the behavior of the latter depends on the type of its arguments.
You will also need to AND the result with 0xffff at the end.
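Putting both suggestions together, a sketch of the adjusted function (keeping the CRC in uint16 so the shifts behave like the C unsigned short):
function crc_val = crc_ccitt_matlab(message)
    % message is a vector of byte values (0..255)
    crc = uint16(hex2dec('1D0F'));
    for i = 1:length(message)
        crc = bitxor(crc, bitsll(uint16(message(i)), 8));
        for j = 1:8
            if bitand(crc, uint16(hex2dec('8000'))) > 0
                crc = bitxor(bitsll(crc, 1), uint16(hex2dec('1021')));
            else
                crc = bitsll(crc, 1);
            end
        end
    end
    crc_val = bitand(crc, uint16(hex2dec('ffff')));  % mask to 16 bits, as suggested
end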

Matlab: How can I create a Reed-Solomon generator polynomial for a QR code

I have to make a Matlab program which should create a QR code.
My problem is the Reed-Solomon error correction.
The user enters the word he wants. [...] I get a string of numbers that I then need to run through a Reed-Solomon generator polynomial (I found some sites that do this very well: http://www.pclviewer.com/rs2/calculator.html)
Here is what I would like to happen: for example, I input: 32 91 11 120 209 114 220 77 67 64 236 17 236
[Reed Solomon generator polynomial]
and I want to get out: 168 72 22 82 217 54 156 0 46 15 180 122 16
I found the functions rsenc, comm.RSEncoder, gf ... but I cannot understand how these functions work. They are documented here: http://www.mathworks.fr/fr/help/comm...n.html#fp12225
I tried code of this type:
n = 255; k = 13; % Codeword length and message length
m = 8; % Number of bits in each symbol
msg = [32 91 11 120 209 114 220 77 67 64 236 17 236]; % Message is a Galois array.
obj = comm.RSEncoder(n, k);
c1 = step(obj, msg(1,:)');
c = [c1].';
It produces a string of 255 symbols, while I want 13 output symbols.
Thank you for your help.
I think you are making a mistake.
'n' is the length of the final message including the parity symbols.
'k' is the length of the message (number of data symbols).
I think this will help you:
clc, clear all;
M = 16;                          % modulation order = alphabet size; in your case it would be 256 = 2^m
hEnc = comm.RSEncoder;
hEnc.CodewordLength = M - 1;     % max = M-1, min = 4, must be greater than MessageLength
hEnc.MessageLength = 13;         % experiment with this value (use an odd number)
hEnc.BitInput = false;
hEnc
t = hEnc.CodewordLength - hEnc.MessageLength;
frame = 2*hEnc.MessageLength;    % multiple of MessageLength
fprintf('\tError Detection (in Symbols): %d\n', t);
fprintf('\tError Correction: %.2f\n', t/2);
data = randi([0 M-1], frame, 1); % create a frame of symbols in the range 0 to M-1
encodedData = step(hEnc, data);  % encode the frame
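To check the result, a matching decode step can be added (a sketch using comm.RSDecoder with the same parameters; it should reproduce the original frame when no errors are introduced):
hDec = comm.RSDecoder;
hDec.CodewordLength = hEnc.CodewordLength;
hDec.MessageLength  = hEnc.MessageLength;
decodedData = step(hDec, encodedData);  % decode the codewords back to the data symbols
isequal(decodedData, data)              % expected: true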