PSNR for intra predicted frame vs encoded frame - matlab

I have to perform intra-predicted coding on a video frame and calculate its PSNR. I am now asked to take the same original frame and encode it, which consists of performing DCT, quantization, dequantization and inverse DCT. I then have to calculate the PSNR of the encoded frame and compare it with that of the intra-predicted frame.
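For concreteness, that encode/decode path might look something like this minimal sketch (the 8x8 block size and the uniform quantization step q are placeholder assumptions, not my actual settings):
q = 16;                                       % hypothetical quantization step
f = @(block) round(dct2(block) / q);          % DCT + uniform quantization
g = @(coef) idct2(coef * q);                  % dequantization + inverse DCT
reconstFrame = blockproc(double(orgFrame), [8 8], @(b) g(f(b.data)));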
I got values of 53.37 dB for the intra-predicted frame and 32.64 dB for the encoded frame. I am supposed to analyze the probability distribution of the encoded image using its histogram. The histograms for both frames look extremely similar, so what am I actually supposed to look for?
EDIT
The way I am calculating the PSNR is by taking the difference between the original frame and the reconstructed frame and then applying the PSNR formula. Code snippet shown below:
errorFrame = double(orgFrame) - double(reconstFrame); % cast to double to avoid uint8 saturation in the subtraction
y = 10*log10(255*255 / mean(mean(errorFrame.^2)));    % PSNR in dB for 8-bit data
Should the PSNR of the intra-predicted frame and the reconstructed frame be the same value? I have uploaded the histograms of the reconstructed frame with intra prediction and the reconstructed frame without intra prediction.
The histograms look extremely similar, so why are the PSNR values so different?

The PSNR does a point-by-point comparison between two images. The histograms capture the entire distribution of intensities as a whole. For example, if you had an image that was:
A = [0 255;
255 0];
... and another that was:
B = [255 0;
0 255];
... and let's say the original image was:
C = [0 128;
128 0];
Even though the histograms of A and B are identical, their PSNRs with respect to C are 9.0650 dB and 2.0344 dB respectively. As such, I wouldn't rely on the histograms themselves, as they only capture global information; look at the images locally, and you can obviously see that one has higher quality than the other. Though most of the bins in your histograms look equal, histograms are not spatially aware. In other words, the spatial relationships between pixels are not captured in histograms, as the example above shows. You could have, say, 15 pixels of intensity 80 in both images, but they could be in completely different locations in each image. As such, two images can look completely different, yet as long as the counts per intensity are equal, their histograms will be equal.
You can see that A and C are similar in that one is simply a grayer version of the other. However, B is way off: it has white pixels where C has dark pixels, and dark pixels where C has gray pixels. Though the histograms of A and B are the same, their actual content differs considerably with respect to C.
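If you want to verify these numbers yourself, here's a quick sketch (assuming the standard 8-bit PSNR formula with a peak value of 255):
A = [0 255; 255 0];
B = [255 0; 0 255];
C = [0 128; 128 0];
psnr_fun = @(X, Y) 10*log10(255^2 / mean((X(:) - Y(:)).^2)); % point-by-point comparison
psnr_fun(A, C) % 9.0650 dB
psnr_fun(B, C) % 2.0344 dB
Note that imhist(uint8(A)) and imhist(uint8(B)) are nonetheless identical.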
I do realize that you need to compare the histograms / probability distributions between the two images, but this question may have been posed deliberately. Though the distribution of intensities is relatively the same, if you analyze local image patches between the two, you can definitely see that one is of lower quality than the other. To be honest, and speaking from personal experience, you should take PSNR with a grain of salt. Just because one image has a higher PSNR than another doesn't necessarily mean it is of better quality; in fact, I have seen images with lower PSNR that I considered to be better quality than ones with higher PSNR.
As such, when you answer your question, make sure you reference everything that I've said here.
tl;dr: Though the histograms look equal, histograms are not spatially aware: the spatial relationships between pixels are not captured. You could have a completely different looking image in comparison to another, but as long as the counts per intensity are equal, the histograms will be equal. PSNR, on the other hand, does a point-by-point difference, which (sort of) captures the spatial relationships of pixels and thus explains why the PSNRs are quite different.


Why does iradon return negative pixel values?

I took 200 projections at a step angle of 1.8 degrees using LabVIEW software. Each image is 2748 x 2748 pixels, uint16. Then, using MATLAB, I load the projection images, do the flat-field correction, resize the images by 1/3 and save them to a .mat file. Then I run the code below for the filtered back-projection.
interp = 'linear'; % set interpolation: nearest, linear, spline, pchip, v5cubic
filter = 'Hann';   % set filter: Ram-Lak, Shepp-Logan, Cosine, Hamming, Hann, None
for s = 1:916
    for i = 1:200
        a(i,:) = proj065(:,s,i); % gather the sinogram for slice s
    end
    a = a'; % iradon expects one projection per column
    %figure(3), imagesc(a)
    b = iradon(a, 1.8, interp, filter);
    imagesc(b);
    recon(:,:,s) = b;
    s % display progress
    clear a
end
If I use a filter in this code, I get negative pixel values, but if I run the code without the filter I get only positive values.
Any idea why iradon returns negative pixel values in filtered back-projection?
Thank you.
Nurul
Yes, the FBP (filtered back-projection) algorithm will do that. It can wrongly reconstruct voxels as having negative values, due to noise and discretization of the data. Generally there is nothing you can do about it other than clamping those values.
As my PhD is about tomographic reconstruction algorithms, I feel contractually obligated (joking) to suggest the use of iterative algorithms to possibly obtain better images (never worse, often considerably better). Check out SART/SIRT or CGLS for this problem.
However, you are calling the function incorrectly! In tomography, a step size is not enough to reconstruct an image; you generally need the exact angles, so iradon doesn't accept a step size as an input, it accepts an array of angles.
In your case, theta should be theta = linspace(0, 360-360/200, 200), i.e. 0:1.8:358.2, and you should call iradon(a,theta,...).
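Putting that together, the corrected call would be something like this (the clamp is optional, per the first answer above):
theta = (0:199) * 1.8;                % explicit angles: 0, 1.8, ..., 358.2 degrees
b = iradon(a, theta, interp, filter); % pass the angle vector, not the step size
b(b < 0) = 0;                         % optionally clamp the negative values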

Scale correction for IFFT of smaller frequency space created by FFT

This might be considered a repost of this question; however, I am seeking a much deeper explanation of this matter and how to properly solve the problem.
I want to study the PSF/SRF of a voxel in a 44x44 matrix. For that I create a matrix 100x bigger in each dimension (4400x4400), so that one voxel in the smaller matrix corresponds to 100x100 voxels in the bigger one, and I set those 100^2 voxels to 1.
Now I do an FFT of the big matrix and an IFFT of only the center portion (44x44) of the frequency space. This is the code:
A = zeros(4400,4400);
A(2201:2300,2201:2300) = 1; % central 100x100 block set to 1
B = fftshift(fft2(A)); % centered spectrum
C = ifft2(ifftshift(B(2179:2222,2179:2222))); % inverse DFT of the central 44x44 window
D = numel(C)/numel(B) * C; % size-ratio rescaling
figure, subplot(1,3,1), imshow(A), subplot(1,3,2), imshow(real(C)), subplot(1,3,3), imshow(real(D));
The problem is the following: I would expect the voxel values in the new 44x44 matrix to be 1. However, with this numel factor correction they decrease to 0.35, and if I don't apply the correction they blow up to huge values.
For starters, let me try to clarify the scaling issue: For the DFT/IDFT there are various scaling conventions regarding the input size. You either need a factor of 1/N in the DFT or a factor of 1/N in the IDFT or a factor of 1/sqrt(N) in both. All have pros and cons and all are equally valid.
Matlab uses the 1/N in the IDFT convention, as you can see in the documentation.
In your example, the forward DFT has size 4400 and the backward IDFT size 44. Therefore the IDFT scaling is a factor of 100 smaller than it should be to match the forward transformation, and your values come out a factor of 100 too large. Since you're doing a 2-D DFT/IDFT, the factor of 100 is missing twice, so your rescaling should be 100^2. Your numel(C)/numel(B) does exactly that; I've just tried to give you the explanation for it.
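As a sanity check, here's a 1-D toy version of that size-ratio rescaling (using a pure-DC signal so that the truncation itself loses nothing):
x = ones(1, 400);                 % "big" signal, pure DC
X = fftshift(fft(x));             % unscaled forward DFT of size 400
w = X(199:202);                   % central 4 samples of the centered spectrum
y = (4/400) * ifft(ifftshift(w)); % rescale by the size ratio; y is all ones again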
A reason why you might not see the 1 is that you're plotting only the real part of the inverse DFT. Since you did some fftshifting you might have introduced a phase so that part of your signal is in the imaginary part.
Edit: another reason is that you truncate B to the central 44-by-44 window before transforming back. Since A is not bandlimited, B also has energy outside this window, and by truncating you are losing part of it. Therefore it is not surprising that the resulting amplitude is lower.
Here is a zoom on the image of B to show this phenomenon:
The red square is what you keep; everything else is truncated. By Parseval's theorem, the total energy in the image and Fourier domains is equal, so by truncating you must also reduce the energy of your signal in the image domain.
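You can check Parseval's relation in MATLAB's convention directly; since the 1/N factor sits in the inverse transform, the Fourier-domain energy carries a factor of numel(A):
energy_image   = sum(abs(A(:)).^2);            % energy in the image domain
energy_fourier = sum(abs(B(:)).^2) / numel(A); % Parseval with MATLAB's scaling
The two values agree, and truncating B can only remove Fourier-domain energy.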

adjust the gray level of an image according to its statistical distribution [duplicate]

Please explain what happens to an image when we use the histeq function in MATLAB. A mathematical explanation would be really helpful.
Histogram equalization seeks to flatten your image histogram. Basically, it models the image as a probability density function (or in simpler terms, a histogram where you normalize each bin count by the total number of pixels in the image) and tries to ensure that every intensity is equally probable.
The premise behind histogram equalization is to help images that have poor contrast. Images that look too dark, too washed out, or too bright are good candidates: if you plot their histogram, the spread of the pixels is limited to a very narrow range. Histogram equalization flattens the histogram, stretching its dynamic range and giving you an image with better contrast.
In terms of the mathematical definition, I won't bore you with the details. I would love to have some LaTeX to do it here, but it isn't supported, so I refer you to this link that explains it in more detail: http://www.math.uci.edu/icamp/courses/math77c/demos/hist_eq.pdf
However, the final equation that you get for performing histogram equalization is essentially a 1-to-1 mapping. For each pixel in your image, you extract its intensity, then run it through this function. It then gives you an output intensity to be placed in your output image.
Suppose p_n is the probability of encountering a pixel with intensity n in your image (take the histogram bin count for intensity n and divide by the total number of pixels in your image). Given that you have L intensities in your image, the output intensity for a pixel with input intensity i is dictated by:
g_i = floor( (L-1) * sum_{n=0}^{i} p_n )
You add up all of the probabilities from pixel intensity 0 up to intensity i. This running sum is familiarly known as the Cumulative Distribution Function (CDF).
MATLAB essentially performs histogram equalization using this approach. However, if you want to implement this yourself, it's actually pretty simple. Assume that you have an input image im that is of an unsigned 8-bit integer type.
function out = hist_eq(im, L)
    if (~exist('L', 'var'))
        L = 256; % default number of output levels for 8-bit images
    end
    h = imhist(im) / numel(im);             % probability of each intensity
    cdf = cumsum(h);                        % cumulative distribution function
    out = floor((L-1) * cdf(double(im)+1)); % map each pixel through the CDF
    out = uint8(out);                       % convert back to unsigned 8-bit
end
This function takes in an image that is assumed to be of unsigned 8-bit integer type. You can optionally specify the number of output levels; usually L = 256 for an 8-bit image, so if you omit the second parameter, L is assumed as such. The first line inside the function computes the probabilities, the next computes the Cumulative Distribution Function (CDF), and the two lines after that map the input through the CDF and convert back to unsigned 8-bit integer. Note that floor is applied explicitly, since the uint8 cast rounds to the nearest integer rather than flooring. Also note the offset of 1 when accessing the CDF: MATLAB starts indexing at 1, while the intensities in your image start at 0.
The MATLAB command histeq pretty much does the same thing, except that if you call histeq(im), it assumes 64 discrete gray levels in your image by default. You can override this by specifying an additional parameter for how many intensity values the image has, just like we did above: histeq(im, 256);. Calling this in MATLAB and using the function I wrote above should give you identical results.
As a bit of an exercise, let's use an image that is part of the MATLAB distribution called pout.tif. Let's also show its histogram.
im = imread('pout.tif');
figure;
subplot(2,1,1);
imshow(im);
subplot(2,1,2);
imhist(im);
As you can see, the image has poor contrast because most of the intensity values fall in a narrow range. Histogram equalization will flatten the histogram and thus increase the contrast of the image. Try doing this:
out = histeq(im, 256); % or you can use my function: out = hist_eq(im);
figure;
subplot(2,1,1);
imshow(out);
subplot(2,1,2);
imhist(out);
This is what we get:
As you can see, the contrast is better: darker pixels tend to move towards the darker end, while lighter pixels get pushed towards the lighter end. A successful result, I think! Bear in mind that not all images will give you a good result when you do histogram equalization. Image processing is mostly trial and error: you put a mishmash of different techniques together until you get a good result.
This should hopefully get you started. Good luck!

Image Parameters (Standard Deviation, Mean and Entropy) of an RGB Image

I couldn't find an answer for RGB images.
How can I get the SD, mean and entropy of an RGB image using MATLAB?
From http://airccse.org/journal/ijdms/papers/4612ijdms05.pdf (Table 3), it seems the author got a single value for each measure, so did he take the average of the RGB values?
Really in need of any help.
After reading the paper: because you are dealing with colour images, you have three channels of information to access, and altering any one of the channels can affect the information the image is trying to portray. The author wasn't very clear on how they obtained a single value to represent the overall mean and standard deviation. Quite frankly, because this paper was published in a no-name journal, I'm not surprised they managed to get away with it. Had this been submitted to a better-known venue (IEEE, ACM, etc.), it would probably have been rejected outright due to that very ambiguity.
As for how I interpret this procedure: averaging the three channels doesn't make sense, because you want to capture the differences over all channels, and averaging smears that information so those differences get lost. Practically speaking, if one channel changed its intensity by just 1, the change in the averaged result would be so small that it probably would not register as a meaningful difference.
In my opinion, what you should perhaps do is treat the entire RGB image as a 1D signal, then compute the mean, standard deviation and entropy of that signal. As such, given an RGB image stored in image_rgb, you can unroll the entire image into a 1D array like so:
image_1D = double(image_rgb(:));
The double casting is important because you want to maintain floating-point precision when calculating the mean and standard deviation. The images will probably be of an unsigned integer type, so without the cast, calculations could saturate or get clamped at the limits of that data type and you wouldn't get the right answer. You can then calculate the mean, standard deviation and entropy like so:
m = mean(image_1D);
s = std(image_1D);
e = entropy(image_rgb(:)); % pass the original integer data: for double inputs, entropy assumes values in [0,1]
entropy is a function in MATLAB that calculates the entropy of images, so you should be fine here. As noted by @CitizenInsane in his answer, entropy unrolls a grayscale image into a 1D vector and applies the Shannon definition of entropy to that vector. You can do the same with an RGB image by unrolling it, as above; just note that entropy internally uses imhist, which for double inputs assumes the range [0,1], so pass it the original integer data rather than the double-cast vector.
I have no idea how the author actually did it, but what you could do is treat the image as a 1D array of size W x H x 3 and then simply calculate the mean and standard deviation of that.
I don't know if Table 3 was obtained in the same way, but looking at the entropy routine in MATLAB's Image Processing Toolbox, the RGB values are vectorized into a single vector:
I = imread('rgb'); % read RGB values
I = I(:);          % vectorize the RGB values
p = imhist(I);     % histogram
p(p == 0) = [];    % remove zero entries in p
p = p ./ numel(I); % normalize p so that sum(p) is one
E = -sum(p.*log2(p));

Normalization of intensity, matlab

I have real-world 3D points which I want to project onto a plane. Most of the intensity values [0-1] fall in the lower region (near zero).
Please see the 'before' image attached below.
I tried to normalize the values:
Col_ = Intensity; % before: max(Col_) is 0.46, min(Col_) is 0.06
Col = (Col_ - min(Col_)) / (max(Col_) - min(Col_)); % after: max(Col) is 1, min(Col) is 0
But most values still fall in the lower region (near zero). Please see the second figure, after normalization.
The result is still mostly black. Any suggestions on how I can stretch my intensity information?
Regards!
It looks like you have already normalized as much as you can with linear scaling. If you want more contrast, you will have to give up preserving the original scaling and use a non-linear equalization.
For example: http://en.wikipedia.org/wiki/Histogram_equalization
If you have the image processing toolbox, matlab will do it for you:
http://www.mathworks.com/help/toolbox/images/ref/histeq.html
It looks like you have very few values outside the first bin. If you don't need to preserve the uniqueness of the intensities, you could just scale by a larger amount and clip the few values that exceed 1.
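For example, a sketch of that clip-and-scale idea (the gain of 2 is arbitrary; tune it to your data):
k = 2;                           % arbitrary gain
Col_stretched = min(k * Col, 1); % scale up, then clip the few values above 1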
When I normalize intensities I do something like this:
Col = Col - min(Col(:));
Col = Col/max(Col(:));
This will normalize your data points to the range [0,1].
Now, since you have many small values, you might be able to make out small changes better through log scaling:
Col_scaled = log(1+Col);
Linear scaling rarely works for me with such data. Using the log function is akin to tweaking gamma for visualization purposes.
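Note that log(1 + Col) maps [0, 1] to [0, log(2)], so if you want the result back on [0, 1] for display, renormalize:
Col_scaled = log(1 + Col) / log(2); % log-compress, rescaled back to [0,1]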
I think the only other thing you can do here is reduce the range. After normalization, do the following:
t = 0.1;
Col(Col > t) = t;
This simply truncates the range of the data, which may be sufficient for what you are doing. Then you can re-normalize again if you wish.
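For instance, truncating and re-normalizing in one go:
t = 0.1;
Col(Col > t) = t; % clip everything above the threshold
Col = Col / t;    % re-normalize the reduced range back to [0,1]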