Optimal way to multiply 3D matrices in GNU/Octave

We have a 50x50x3 matrix i, loaded with imread from a PNG file and made of 0s and 1s (background/foreground). The image is to be inked with a given color, a 1x3 vector c of numbers in [0,1] (RGB), so that c(k) replaces the 1s in i(:,:,k) (k in 1...3) while the 0s are left unchanged.
What expression performs this operation (a sort of element-wise multiplication) on these two variables in minimal computing time?

Start by reshaping c to be a 1x1x3 array:
c = reshape(c,1,1,3);
Next, do a point-wise multiplication. Octave (like recent MATLAB versions, R2016b and later) performs implicit singleton expansion (a.k.a. broadcasting):
i = i .* c;
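For context, here is a minimal end-to-end sketch of the whole operation (the file name and color values are assumptions, not from the question):
i = imread('mask.png');        % hypothetical file name; 50x50x3 array of 0s and 1s
i = double(i);                 % work in double so fractional colors survive
c = [0.2 0.6 0.9];             % example RGB color, values in [0,1]
c = reshape(c, 1, 1, 3);       % 1x1x3 so it broadcasts over rows and columns
inked = i .* c;                % c(k) replaces the 1s in i(:,:,k); 0s stay 0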

Related

Indexing 3d matrix with 2d Matrix plus vector

I have an m*n*k matrix called M which I want to index to get the mean of some data.
I have a logical m*n matrix called EZG and want to apply it to every slice along the third dimension from 1:(end-1) (call this vector of slice indices V).
Is there any chance to write it without a loop, like this:
M=rand(3,3,3)
EZG=logical([1,1,1;0,1,0;0,0,1])
V=1:size(M,3)-1
mean(mean(M(EZG,V),1),2)
The result should be a one-dimensional vector with the same length as V.
Thank you
I think this is what you want:
M=rand(3,3,3);
EZG=logical([1,1,1;0,1,0;0,0,1]);
% repeat EZG K-1 times, and add zeros to the Kth slice
V=cat(3,repmat(EZG,1,1,size(M,3)-1),false(size(M,1),size(M,2)));
% logical index and mean
m=mean(M(V));
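The question asks for one value per slice in V; if that per-slice vector is what is wanted, here is a sketch under that assumption, reshaping each slice into a column first:
M = rand(3,3,3);
EZG = logical([1,1,1;0,1,0;0,0,1]);
V = 1:size(M,3)-1;
Mcols = reshape(M, [], size(M,3));         % each slice becomes one column
m_per_slice = mean(Mcols(EZG(:), V), 1);   % 1-by-length(V), one mean per slice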

Difference between MATLAB normrnd() and mvnrnd()

What is the difference in Matlab if I do:
R = mvnrnd(MU,SIGMA)
vs.
R = normrnd(mu,sigma)
R = normrnd(mu, sigma) outputs normal random numbers from a 1-D normal distribution; i.e., for each element of the inputs a single output is generated: R(i) will be a random scalar from the normal distribution defined by mu(i) and sigma(i).
If sigma is a scalar rather than a vector, the same value is used for each element of R.
R = mvnrnd(MU,SIGMA) outputs random numbers from a multivariate normal distribution; i.e., for each row of MU and each 2-D covariance matrix in SIGMA, a single row of R is generated. The dimension d (the number of columns) of R is the same as the dimension of MU and SIGMA. R(i,:) will be a random vector from the multivariate normal distribution defined by MU(i,:) and SIGMA(:,:,i).
If SIGMA is a d-by-d 2-D matrix rather than a d-by-d-by-n 3-D array, the same matrix is used for each row of R.
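For illustration, a small made-up example (the numbers are mine; both functions are part of the Statistics Toolbox):
% univariate: one scalar draw per element of mu/sigma
mu    = [0 5 10];
sigma = [1 2 3];
r1 = normrnd(mu, sigma);        % 1x3, r1(i) drawn from N(mu(i), sigma(i)^2)
% multivariate: one d-dimensional row per row of MU (here d = 2)
MU    = [0 0; 5 5];
SIGMA = [1 0.5; 0.5 2];         % shared 2x2 covariance matrix
r2 = mvnrnd(MU, SIGMA);         % 2x2, r2(i,:) drawn from N(MU(i,:), SIGMA)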

How to normalize a matrix of 3-D vectors

I have a 512x512x3 matrix that stores 512x512 three-dimensional vectors. What is the best way to normalize all those vectors, so that my result is 512x512 vectors each with length equal to 1?
At the moment I use for loops, but I don't think that is the best way in MATLAB.
If the vectors are Euclidean, the length of each is the square root of the sum of the squares of its coordinates. To normalize each vector individually so that it has unit length, you need to divide its coordinates by its norm. For that purpose you can use bsxfun:
norm_A = sqrt(sum(A .^ 2, 3));     %// Calculate Euclidean length
norm_A(norm_A < eps) = 1;          %// Avoid division by zero
B = bsxfun(@rdivide, A, norm_A);   %// Normalize
where A is your original 3-D vector matrix.
EDIT: Following Shai's comment, added a fix to avoid possible division by zero for null vectors.
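If you are on a release with implicit expansion (MATLAB R2016b or newer, or Octave), the bsxfun call can be replaced by a plain element-wise division; a sketch, assuming the same A:
norm_A = sqrt(sum(A.^2, 3));   % 512x512 map of vector lengths
norm_A(norm_A < eps) = 1;      % avoid dividing null vectors by zero
B = A ./ norm_A;               % broadcasts along the 3rd dimension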

Using SVD to compress an image in MATLAB

I am brand new to MATLAB but am trying to do some image compression code for grayscale images.
Questions
How can I use SVD to trim off the low-valued singular values and reconstruct a compressed image?
Work/Attempts so far
My code so far is:
B=imread('images1.jpeg');
B=rgb2gray(B);
doubleB=double(B);
%read the image and store it as matrix B, convert the image to a grayscale
%photo and convert the matrix to a class 'double' for values 0-255
[U,S,V]=svd(doubleB);
This allows me to successfully decompose the image matrix, with the singular values stored in the variable S.
How do I truncate S (which is 167x301, class double)? Let's say of the 167 singular values I want to take only the top 100 (or any n, really); how do I do that and reconstruct the compressed image?
Updated code/thoughts
Instead of putting a bunch of code in the comments section, this is the current draft I have. I have been able to successfully create the compressed image by manually changing N, but I would like to do 2 additional things:
1- Show a panel of images for various compressions (i.e., run a loop for N = 5, 10, 25, etc.)
2- Somehow calculate the difference (error) between each image and the original and graph it.
I am horrible with understanding loops and output, but this is what I have tried:
B=imread('images1.jpeg');
B=rgb2gray(B);
doubleB=im2double(B);%
%read the image and store it as matrix B, convert the image to a grayscale
%photo and convert the image to a class 'double'
[U,S,V]=svd(doubleB);
C=S;
for N=[5,10,25,50,100]
C(N+1:end,:)=0;
C(:,N+1:end)=0;
D=U*C*V';
%Use singular value decomposition on the image doubleB, create a new matrix
%C (for Compression diagonal) and zero out all entries above N, (which in
%this case is 100). Then construct a new image, D, by using the new
%diagonal matrix C.
imshow(D);
error=C-D;
end
Obviously there are some errors because I don't get multiple pictures or know how to "graph" the error matrix
Although this question is old, it has helped me a lot to understand SVD. I have modified the code you have written in your question to make it work.
I believe you might have solved the problem already; however, for future reference for anyone visiting this page, I am including the complete code here with the output images and the graph.
Below is the code:
close all
clear all
clc
%reading and converting the image
inImage=imread('fruits.jpg');
inImage=rgb2gray(inImage);
inImageD=double(inImage);
% decomposing the image using singular value decomposition
[U,S,V]=svd(inImageD);
% Using different number of singular values (diagonal of S) to compress and
% reconstruct the image
dispEr = [];
numSVals = [];
for N=5:25:300
% store the singular values in a temporary var
C = S;
% discard the diagonal values not required for compression
C(N+1:end,:)=0;
C(:,N+1:end)=0;
% Construct an Image using the selected singular values
D=U*C*V';
% display and compute error
figure;
buffer = sprintf('Image output using %d singular values', N);
imshow(uint8(D));
title(buffer);
error=sum(sum((inImageD-D).^2));
% store vals for display
dispEr = [dispEr; error];
numSVals = [numSVals; N];
end
% display the error graph
figure;
plot(numSVals, dispEr);
title('Error in compression');
grid on
xlabel('Number of Singular Values used');
ylabel('Error between compressed and original image');
Applying this to the following image:
Gives the following result with only the first 5 singular values,
with the first 30 singular values,
and with the first 55 singular values.
The change in error with an increasing number of singular values can be seen in the graph below.
Notice that the graph shows that using approximately the first 200 singular values yields an error close to zero.
Just to start, I assume you're aware that the SVD is really not the best tool to decorrelate the pixels in a single image. But it is good practice.
OK, so we know that B = U*S*V'. And we know S is diagonal, and sorted by magnitude. So by using only the top few values of S, you'll get an approximation of your image. Let's say C=U*S2*V', where S2 is your modified S. The sizes of U and V haven't changed, so the easiest thing to do for now is to zero the elements of S that you don't want to use, and run the reconstruction. (Easiest way to do this: S2=S; S2(N+1:end, :) = 0; S2(:, N+1:end) = 0;).
Now for the compression part. U is full, and so is V, so no matter what happens to S2, your data volume doesn't change. But look at what happens to U*S2. (Plot the image.) If you kept N singular values in S2, then only the first N columns of S2 are nonzero, and so only the first N columns of U*S2 are nonzero. Compression! Except you still have to deal with V. You can't use the same trick after you've already formed U*S2, since more of U*S2 is nonzero than S2 was by itself. How can we use S2 on both sides? Well, it's diagonal, so use D=sqrt(S2), and now C=U*D*D*V'. Now U*D has only N nonzero columns, and D*V' has only N nonzero rows. Transmit only those quantities, and you can reconstruct C, which is approximately like B.
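A short sketch of the trick just described, reusing U, S, V from the svd call above (N is an assumed value; in practice you would store or transmit only the nonzero parts of U*D and D*V'):
N  = 100;                       % number of singular values to keep (assumed)
S2 = S; S2(N+1:end,:) = 0; S2(:,N+1:end) = 0;
D  = sqrt(S2);                  % split S2 between the two factors
UD = U*D;                       % only the first N columns are nonzero
DV = D*V';                      % only the first N rows are nonzero
C  = UD(:,1:N) * DV(1:N,:);     % reconstruct from the stored parts; C approximates B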
For example, here's a 512 x 512 B&W image of Lena:
We compute the SVD of Lena. Choosing the singular values above 1% of the maximum singular value, we are left with just 53 singular values. Reconstructing Lena with these singular values and the corresponding (left and right) singular vectors, we obtain a low-rank approximation of Lena:
Instead of storing 512 * 512 = 262144 values (each taking 8 bits), we can store 2 x (512 x 53) + 53 = 54325 values, which is approximately 20% of the original size. This is one example of how SVD can be used to do lossy image compression.
Here's the MATLAB code:
% open Lena image and convert from uint8 to double
Lena = double(imread('LenaBW.bmp'));
% perform SVD on Lena
[U,S,V] = svd(Lena);
% extract singular values
singvals = diag(S);
% find out where to truncate the U, S, V matrices
indices = find(singvals >= 0.01 * singvals(1));
% reduce SVD matrices
U_red = U(:,indices);
S_red = S(indices,indices);
V_red = V(:,indices);
% construct low-rank approximation of Lena
Lena_red = U_red * S_red * V_red';
% print results to command window
r = num2str(length(indices));
m = num2str(length(singvals));
disp(['Low-rank approximation used ',r,' of ',m,' singular values']);
% save reduced Lena
imwrite(uint8(Lena_red),'Reduced Lena.bmp');
Taking the first n largest eigenvalues and their corresponding eigenvectors may solve your problem. For PCA, multiplying the original data by the leading eigenvectors (those with the largest eigenvalues) constructs your image as n x d, where d represents the number of eigenvectors kept.
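A rough sketch of that PCA idea (my own illustrative code, assuming B is the grayscale image from the question and n is the number of components kept):
X  = double(B);                             % grayscale image as double
mu = mean(X, 1);
Xc = X - repmat(mu, size(X,1), 1);          % center the columns
[Vec, Lam] = eig(Xc' * Xc);                 % eigen-decomposition of the scatter matrix
[~, order] = sort(diag(Lam), 'descend');    % largest eigenvalues first
n  = 50;                                    % number of components kept (assumed)
W  = Vec(:, order(1:n));                    % leading eigenvectors
Xapprox = (Xc * W) * W' + repmat(mu, size(X,1), 1);  % rank-n approximation
imshow(uint8(Xapprox));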

Vector to Matrix syntax in MATLAB

Is there a way to combine 2 vectors in MATLAB such that:
mat = zeros(length(C),length(S));
for j=1:length(C)
mat(j,:)=C(j)*S;
end
Using normal MATLAB syntax similar to:
mat = C * S(1:length(S))
This gives a "Inner matrix dimensions must agree error" because it's trying to do normal matrix operations. This is not a standard Linear Algebra operation so I'm not sure how to correctly express it in MATLAB, but it seems like it should be possible without requiring a loop, which is excessively slow in MATLAB.
From your description, it sounds like a simple matrix operation. You just have to make sure you have the right dimensions for C and S. C should be a column vector (length(C)-by-1) and S should be a row vector (1-by-length(S)). Assuming they are the right dimensions, just do the following:
mat = C*S;
If you're not sure of their dimensions, this should work:
mat = (C(:))*(S(:)');
EDIT: Actually, I went a little crazy with the parentheses. Some of them are unnecessary, since there are no order-of-operation concerns. Here's a cleaner version:
mat = C(:)*S(:)';
EXPLANATION:
The matrix multiplication operator in MATLAB will produce either an inner product (resulting in a scalar value) or an outer product (resulting in a matrix) depending on the dimensions of the vectors it is applied to.
The last equation above produces an outer product because of the use of the colon operator to reshape the dimensions of the vector arguments. The syntax C(:) reshapes the contents of C into a single column vector. The syntax S(:)' reshapes the contents of S into a column vector, then transposes it into a row vector. When multiplied, this results in a matrix of size (length(C)-by-length(S)).
Note: This use of the colon operator is applicable to vectors and matrices of any dimension, allowing you to reshape their contents into a single column vector (which makes some operations easier, as shown by this other SO question).
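A quick usage example with made-up numbers:
C = [1; 2; 3];        % column vector, length(C) = 3
S = [4 5];            % row vector, length(S) = 2
mat = C(:) * S(:)';   % 3x2 outer product: mat(j,i) = C(j)*S(i)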
Try executing this in MATLAB:
mat = C*S'
As In:
C = [1; 2; 3];
S = [2; 2; 9; 1];
mat = zeros(length(C),length(S));
for j=1:length(C)
mat(j,:)=C(j)*S;
end
% Equivalent code:
mat2 = C*S';
myDiff = mat - mat2
Do you mean the following?
mat = zeros(length(C),length(S));
for j=1:length(C)
mat(j,:)=C(j)*S;
end
If so, it's simply matrix multiplication:
C' * S % if C and S are row vectors
C * S' % if C and S are column vectors
If you don't know whether C and S are row vectors or column vectors, you can use a trick to turn them into column vectors, then transpose S before multiplying them:
C(:) * S(:)'
I'm not entirely clear on what you're doing - it looks like your resulting matrix will consist of length(C) rows, where the ith row is the vector S scaled by the ith entry of C (since subscripting a vector gives a scalar). In this case, you can do something like
mat = repmat(C,[1 length(S)]) .* repmat(S, [length(C) 1])
where you tile C across columns, and S down rows.
Try this:
C = 1:3
S = 1:5
mat1 = C'*S
mat2 = bsxfun(@times, C', S)
(especially useful when the operation you need doesn't have simpler MATLAB notation)
--Loren
Try using meshgrid:
[Sm, Cm] = meshgrid(S, C);
mat = Cm .* Sm;
Edit: never mind, matrix multiplication will do too. You just need one column vector C and one row vector S. Then do C * S.