Why is Matlab's unifrnd(a,b) so slow?

I was doing a simple speed comparison between Matlab and Julia (timed with tic and toc), running a Gibbs sampler with rejection sampling. For Matlab I ran two different versions of the code: one used the built-in unifrnd(a,b), while the other drew uniform random numbers in the interval (a,b) by calling the following function:
% Draw a uniform random sample in the interval (a,b)
function f = rand_uniform(a,b)
f = a + rand*(b - a);
end
I used the same function as above in the Julia code.
The results from running the code with 1000000 iterations were the following:
Case 1: Matlab using unifrnd(a,b):
x_bar = 1.0944
y_bar = 1.1426
Elapsed time is 255.201619 seconds.
Case 2: Matlab calling rand_uniform(a,b) (function above):
x_bar = 1.0947
y_bar = 1.1429
Elapsed time is 38.704601 seconds.
Case 3: Julia calling rand_uniform(a,b) (function above):
x_bar = 1.0951446303536603
y_bar = 1.142634615899686
Elapsed time: 3.563854193 seconds
Clearly, using unifrnd(a,b) slowed the Matlab code down a lot, but the question is why? The density is taken from one of the examples in Introduction to Applied Bayesian Statistics and Estimation for Social Scientists; if anyone is interested, the theoretical means are mu_x = 1.095 and mu_y = 1.143.
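One way to isolate the per-call overhead of unifrnd outside the sampler is a micro-benchmark like the sketch below (just illustrative: timeit needs a reasonably recent Matlab release, the numbers are machine-dependent, and rand_uniform is the helper function above):
% Micro-benchmark sketch: per-call cost only, outside the Gibbs sampler
f1 = @() unifrnd(0,2);       % built-in scalar draw (Statistics Toolbox)
f2 = @() rand_uniform(0,2);  % thin wrapper around rand (function above)
f3 = @() 0 + rand*(2 - 0);   % inlined draw, no extra function call
fprintf('unifrnd: %g s, rand_uniform: %g s, inlined: %g s\n', ...
        timeit(f1), timeit(f2), timeit(f3));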
Case 1: The Matlab code using the built-in unifrnd(a,b) is:
clear;
clc;
tic
% == Setting up matrices and preliminaries == %
d = 1000000; % No. of iterations
b = d*0.25; % Burn-in length. We discard 25% of the sample (not used here)
x = zeros(1,d);
x(1) = -1;
y = zeros(1,d);
y(1) = -1;
m = 25; % The constant multiplied on g(x)
count = 0; % Counting no. of iterations
% == The Gibbs sampler using rejection sampling == %
for i = 2:d
    % Sample from x|y
    z = 0;
    while z == 0
        u = unifrnd(0,2);
        if ((2*u+3*y(i-1)+2) > unifrnd(0,2)*m) % Height is (1/(b-a))*m = 0.5*25 = 12.5
            x(i) = u;
            z = 1;
        end
    end
    % Sample from y|x
    z = 0;
    while z == 0
        u = unifrnd(0,2);
        if ((2*x(i)+3*u+2) > unifrnd(0,2)*m)
            y(i) = u;
            z = 1;
        end
    end
    %count = count+1 % For counting no. of total draws from m*g(x)
end
x_bar = mean(x)
y_bar = mean(y)
toc
Case 2: The Matlab code calling rand_uniform(a,b) (function above):
clear;
clc;
tic
% == Setting up matrices and preliminaries == %
d = 1000000; % No. of iterations
b = d*0.25; % Burn-in length. We discard 25% of the sample (not used here)
x = zeros(1,d);
x(1) = -1;
y = zeros(1,d);
y(1) = -1;
m = 25; % The constant multiplied on g(x)
count = 0; % Counting no. of iterations
% == The Gibbs sampler using rejection sampling == %
for i = 2:d
    % Sample from x|y
    z = 0;
    while z == 0
        u = rand_uniform(0,2);
        if ((2*u+3*y(i-1)+2) > rand_uniform(0,2)*m) % Height is (1/(b-a))*m = 0.5*25 = 12.5
            x(i) = u;
            z = 1;
        end
    end
    % Sample from y|x
    z = 0;
    while z == 0
        u = rand_uniform(0,2);
        if ((2*x(i)+3*u+2) > rand_uniform(0,2)*m)
            y(i) = u;
            z = 1;
        end
    end
    %count = count+1 % For counting no. of total draws from m*g(x)
end
x_bar = mean(x)
y_bar = mean(y)
toc
Case 3: The Julia code calling rand_uniform(a,b) (function above):
# Gibbs sampling with rejection sampling
tic()
# == Return a uniform random sample from the interval (a, b) == #
function rand_uniform(a, b)
    a + rand()*(b - a)
end
# == Setup and preliminaries == #
d = 1000000 # No. of iterations
b = d*0.25 # Burn-in length. We discard 25% of the sample (not used here)
x = zeros(d)
x[1] = -1
y = zeros(d)
y[1] = -1
m = 25
# == The Gibbs sampler using rejection sampling == #
for i in 2:d
    # Sample from x|y
    z = 0
    while z == 0
        u = rand_uniform(0,2)
        if ((2*u+3*y[i-1]+2) > rand_uniform(0,2)*m) # Height is (1/(b-a))*m = 0.5*25 = 12.5
            x[i] = u
            z = 1
        end
    end
    # Sample from y|x
    z = 0
    while z == 0
        u = rand_uniform(0,2)
        if ((2*x[i]+3*u+2) > rand_uniform(0,2)*m)
            y[i] = u
            z = 1
        end
    end
end
println("x_bar = ", mean(x))
println("y_bar = ", mean(y))
toc()

Related

Fast approach in matlab to estimate linear regression with AR terms

I am trying to estimate regression and AR parameters for (loads of) linear regressions with AR error terms. (You could also think of this as an MA process with exogenous variables.) The model is
y_t = X_t*beta + u_t, where
u_t = rho_1*u_{t-1} + ... + rho_p*u_{t-p} + e_t, with lags of length p
I am following the official Matlab recommendations and use regARIMA to set up a number of regressions and extract the regression and AR parameters (see the reproducible example below).
The problem: regARIMA is slow! For 5 regressions, Matlab needs 14.24 seconds, and I intend to run a large number of different regression models. Is there any quicker method around?
y = rand(100,1);
r2 = rand(100,1);
r3 = rand(100,1);
r4 = rand(100,1);
r5 = rand(100,1);
exo = [r2 r3 r4 r5];
tic
for p = 0:4
Mdl = regARIMA(3,0,0);
[EstMdl, ~, LogL] = estimate(Mdl,y,'X',exo,'Display','off');
end
toc
Unlike the regARIMA function, which uses maximum likelihood, the Cochrane-Orcutt procedure relies on iterated OLS regressions. There are a few more particularities about when this approach is valid (refer to the link posted), but for the aim of this question the approach is valid, and fast!
I modified James LeSage's code, which covers only AR lags of order 1, to cover lags of order p.
function result = olsc(y,x,arterms)
% PURPOSE: computes Cochrane-Orcutt OLS regression for AR(p) errors
%---------------------------------------------------
% USAGE: results = olsc(y,x,arterms)
% where: y = dependent variable vector (nobs x 1)
% x = independent variables matrix (nobs x nvar)
% arterms = order p of the AR error process
%---------------------------------------------------
% RETURNS: a structure
% results.meth = 'olsc'
% results.beta = bhat estimates
% results.rho = rho estimate
% results.tstat = t-stats
% results.trho = t-statistic for rho estimate
% results.yhat = yhat
% results.resid = residuals
% results.sige = e'*e/(n-k)
% results.rsqr = rsquared
% results.rbar = rbar-squared
% results.iter = niter x 3 matrix of [rho converg iteration#]
% results.nobs = nobs
% results.nvar = nvars
% results.y = y data vector
% --------------------------------------------------
% SEE ALSO: prt_reg(results), plt_reg(results)
%---------------------------------------------------
% written by:
% James P. LeSage, Dept of Economics
% University of Toledo
% 2801 W. Bancroft St,
% Toledo, OH 43606
% jpl@jpl.econ.utoledo.edu
% do error checking on inputs
if (nargin ~= 3); error('Wrong # of arguments to olsc'); end;
[nobs nvar] = size(x);
[nobs2 junk] = size(y);
if (nobs ~= nobs2); error('x and y must have same # obs in olsc'); end;
% ----- setup parameters
ITERMAX = 100;
converg = 1.0;
rho = zeros(arterms,1);
iter = 1;
% xtmp = lag(x,1);
% ytmp = lag(y,1);
% truncate 1st observation to feed the lag
% xlag = x(1:nobs-1,:);
% ylag = y(1:nobs-1,1);
yt = y(1+arterms:nobs,1);
xt = x(1+arterms:nobs,:);
xlag = zeros(nobs-arterms,nvar*arterms);
for tt = 1 : arterms
    xlag(:,nvar*(tt-1)+1:nvar*(tt-1)+nvar) = x(arterms-tt+1:nobs-tt,:);
end
ylag = zeros(nobs-arterms,arterms);
for tt = 1 : arterms
    ylag(:,tt) = y(arterms-tt+1:nobs-tt,:);
end
% setup storage for iteration results
iterout = zeros(ITERMAX,3);
while (converg > 0.0001) & (iter < ITERMAX),
    % step 1, using initial rho = 0, do OLS to get bhat
    ystar = yt - ylag*rho;
    xstar = zeros(nobs-arterms,nvar);
    for ii = 1 : nvar
        tmp = zeros(1,arterms);
        for tt = 1:arterms
            tmp(1,tt) = ii+nvar*(tt-1);
        end
        xstar(:,ii) = xt(:,ii) - xlag(:,tmp)*rho;
    end
    beta = (xstar'*xstar)\xstar' * ystar;
    e = y - x*beta;
    % truncate 1st observation to account for the lag
    et = e(1+arterms:nobs,1);
    elagt = zeros(nobs-arterms,arterms);
    for tt = 1 : arterms
        elagt(:,tt) = e(arterms-tt+1:nobs-tt,:);
    end
    % step 2, update estimate of rho using residuals
    % from step 1
    res_rho = (elagt'*elagt)\elagt' * et;
    rho_last = rho;
    rho = res_rho;
    converg = sum(abs(rho - rho_last));
    % iterout(iter,1) = rho;
    iterout(iter,2) = converg;
    iterout(iter,3) = iter;
    iter = iter + 1;
end; % end of while loop
if iter == ITERMAX
    % error('ols_corc did not converge in 100 iterations');
    warning('ols_corc did not converge in 100 iterations');
end;
result.iter= iterout(1:iter-1,:);
% after convergence produce a final set of estimates using rho-value
ystar = yt - ylag*rho;
xstar = zeros(nobs-arterms,nvar);
for ii = 1 : nvar
    tmp = zeros(1,arterms);
    for tt = 1:arterms
        tmp(1,tt) = ii+nvar*(tt-1);
    end
    xstar(:,ii) = xt(:,ii) - xlag(:,tmp)*rho;
end
result.beta = (xstar'*xstar)\xstar' * ystar;
e = y - x*result.beta;
et = e(1+arterms:nobs,1);
elagt = zeros(nobs-arterms,arterms);
for tt = 1 : arterms
elagt(:,tt) = e(arterms-tt+1:nobs-tt,:);
end
u = et - elagt*rho;
result.vare = std(u)^2;
result.meth = 'olsc';
result.rho = rho;
result.iter = iterout(1:iter-1,:);
% % compute t-statistic for rho
% varrho = (1-rho*rho)/(nobs-2);
% result.trho = rho/sqrt(varrho);
(I did not adapt the t-test in the last two commented lines for rho vectors of length p, but this should be straightforward to do.)
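For reference, a minimal usage sketch against data like the reproducible example in the question (the intercept column and the AR order of 3 are my own illustrative choices, not something prescribed by the code above):
% Illustrative call of olsc above; intercept column and AR(3) order are arbitrary choices
y   = rand(100,1);
x   = [ones(100,1) rand(100,4)];   % constant plus four exogenous regressors
res = olsc(y, x, 3);               % Cochrane-Orcutt estimation with AR(3) errors
disp(res.beta)                     % regression coefficients
disp(res.rho)                      % AR coefficient estimates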

Matlab: Optimization of reducing vector size

I have developed a function to reduce the size of an initial vector X = [x,y]. But for an X of 500,000 points and points_limit = 10000, Matlab needs 16 seconds to complete this function.
Are there any ways to optimize this, maybe by removing the loop using matrix operations (vectorisation)?
function X = reduce_vector_size(X,points_limit)
while length(X) > points_limit
    k = 1;
    X2 = zeros(round(length(X(:,1))/2),2);
    X = sortrows(X);
    for i=1:2:length(X(:,1))-1
        X2(k,1) = mean([X(i,1), X(i+1,1)]);
        X2(k,2) = mean([X(i,2), X(i+1,2)]);
        k = k + 1;
    end
    X = X2;
end
Another idea is to take a new approach:
Ratio = ceil(length(X(:,1))/points_limit);
X = ceil(X);
X = sortrows(X,1);
X = sortrows(X,2);
X1 = [];
for i=1:points_limit - 1
    X1 = [X1; mean(X(i*Ratio:(i+1)*Ratio,1)), mean(X(i*Ratio:(i+1)*Ratio,2))];
end
X = X1;
The objective is to reduce the number of points in a vector: a form of compression function for 2D vectors.
Do you know if I can do this new method without a loop?
What do you think of my compression algorithm?
You can easily vectorize the inner for loop:
k = 1;
X = rand(5e5,2);
X2 = zeros(round(length(X(:,1))/2),2);
tic
for i=1:2:length(X(:,1))-1
    X2(k,1) = mean([X(i,1), X(i+1,1)]);
    X2(k,2) = mean([X(i,2), X(i+1,2)]);
    k = k + 1;
end
toc % Elapsed time is 1.988739 seconds.
tic
X3 = (X(1:2:length(X(:,1))-1,:) + X(2:2:length(X(:,1)),:))/2;
toc % Elapsed time is 0.014575 seconds.
isequal(X2,X3) % true
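The second (bucket-mean) approach in the question can be vectorized in the same spirit; a rough sketch, under the simplifying assumption that the last partial block is simply dropped rather than averaged:
% Rough sketch of a vectorized bucket mean (assumes dropping the last partial block)
Ratio = ceil(size(X,1)/points_limit);
Xs = sortrows(X);
n  = floor(size(Xs,1)/Ratio)*Ratio;     % largest multiple of Ratio that fits
Xb = reshape(Xs(1:n,:), Ratio, [], 2);  % Ratio-by-nBlocks-by-2 blocks of rows
Xr = squeeze(mean(Xb, 1));              % one averaged point per block (nBlocks-by-2)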

MATLAB: Not Enough Input Arguments

I've attempted to run this code multiple times and have had zero luck since I added the last for loop. Before the error, the vector k wouldn't update, so the vector L was the same number repeated.
I can't figure out why I am getting the 'Not enough input arguments' error when it was working fine beforehand.
Any help would be much appreciated!
% Set up parameters of the functions
omega = 2*pi/10; % 1/s
g = 9.81; % m/s^2
h = 20; % m
parms = [omega, g, h];
% Set up the root finding variables
etol = 1e-6; % convergence criteria
iter = 100; % maximum number of iterations
f = @my_fun; % function handle to my_fun
fp = @my_fprime; % function handle to my_fprime
k0 = kguess(parms); % initial guess for root
% Find the root
[k, error, n_iterations] = newtraph(f, fp, k0, etol, iter, parms);
% Get the wavelength
if n_iterations < iter
    % Converged correctly
    L = 2 * pi / k;
else
    % Did not converge
    disp('ERROR: Maximum number of iterations exceeded')
    return
end
wave = load('wavedata.dat');
dt = 0.04; %s
%dh = 0.234; %water depth in meters
wave = wave*.01; % converts from cm to meters
nw = wave([926:25501],1);
a = length(nw);
t = 0;
spot = 1;
points = zeros(1,100);
for i = 1:a-1
    t = t + dt;
    if nw(i) < 0
        if nw(i+1) > 0
            points(spot) = t;
            spot = spot + 1;
            t = 0;
        end
    end
end
omega = 2*pi./points; %w
l = length(points);
L = zeros(1,509);
k = zeros(1,509);
for j = 1:l
    g = 9.81; % m/s^2
    h = 0.234; % m
    parms = [omega(j), g, h];
    % Set up the root finding variables
    etol = 1e-6; % convergence criteria
    iter = 100; % maximum number of iterations
    f = @my_fun; % function handle to my_fun
    fp = @my_fprime; % function handle to my_fprime
    k0(j) = kguess(parms); % initial guess for root
    % Find the root
    [k(j), error, n_iterations] = newtraph(f, fp, k0(j), etol, iter, parms);
    % Get the wavelength
    if n_iterations < iter
        % Converged correctly
        L(j) = 2 * pi / k(j);
    else
        % Did not converge
        disp('ERROR: Maximum number of iterations exceeded')
        return
    end
end
function [ f ] = my_fun(k,parms)
%MY_FUN creates a function handle for linear dispersion
% Detailed explanation goes here
w = parms(1) ;
g = parms(2);
h = parms(3);
f = g*k*tanh(k*h)-(w^2);
end
function [ fp ] = my_fprime(k,parms)
%MY_FPRIME creates a function handle for first derivative of linear
% dispersion.
g = parms(2);
h = parms(3);
% w = 2*pi/10; % 1/s
% g = 9.81; % m/s^2
% h = 20; % m
fp = g*(k*h*((sech(k*h)).^2) + tanh(k*h));
end
function [ k, error, n_iterations ] = newtraph( f, fp, k0, etol, iterA, parms )
%NEWTRAPH Estimates the value of k using the Newton-Raphson method.
if nargin<3, error('at least 3 input arguments required'), end
if nargin<4 | isempty(etol), es = etol; end
if nargin<5 | isempty(iterA), maxit = iterA; end
iter = 0;
k = k0;
%func = @f;
%dfunc = @fp;
while (1)
    xrold = k;
    k = k - f(k)/fp(k);
    iter = iter + 1;
    if k ~= 0, ea = abs((k - xrold)/k) * 100; end
    if ea <= etol | iter >= iterA, break, end
end
error = ea;
n_iterations = iter;
end
In function newtraph at line 106 (the second line inside the while(1) loop), you forgot to pass parms to the calls to f and fp:
k = k - f(k)/fp(k);
should become
k = k - f(k,parms)/fp(k,parms);
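Alternatively (this is just another common pattern, not what the posted code does), parms can be bound into the handles with anonymous functions, so newtraph can keep calling f(k) and fp(k) with a single argument:
% Alternative sketch: bind parms into the handles instead of editing newtraph
f  = @(k) my_fun(k, parms);
fp = @(k) my_fprime(k, parms);
[k, error, n_iterations] = newtraph(f, fp, k0, etol, iter, parms);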

I get this code for LARS but the variable seems undefined?

I got this code for LARS, but when I run it, it says X is undefined. I can't understand what X is. Why is there an error?
function [beta, A, mu, C, c, gamma] = lars(X, Y, option, t, standardize)
% Least Angle Regression (LAR) algorithm.
% Ref: Efron et. al. (2004) Least angle regression. Annals of Statistics.
% option = 'lar' implements the vanilla LAR algorithm (default);
% option = 'lasso' solves the lasso path with a modified LAR algorithm.
% t -- a vector of increasing positive real numbers. If given, LARS
% returns the solution at t.
%
% Output:
% A -- a sequence of indices that indicate the order of variable inclusion;
% beta -- history of estimated LARS coefficients;
% mu -- history of estimated mean vector;
% C -- history of maximal current absolute correlations;
% c -- history of current correlations;
% gamma -- history of LARS step sizes.
% Note: history is traced by rows. If t is given, beta is just the
% estimated coefficient vector at the constraint ||beta||_1 = t.
%
% Remarks:
% 1. LARS is originally proposed to estimate a sparse coefficient vector in
% a noisy over-determined linear system. LARS outputs estimates for all
% shrinkage/constraint parameters (homotopy).
%
% 2. LARS is well suited for Basis Pursuit (BP) purposes in the real
% domain. It automatically terminates when the current correlations for the
% inactive set are all zeros. The recovered coefficient vector is the last
% column of beta with the *lasso* option. Hence, this function provides a
% fast and efficient solution for the ell_1 minimization problem.
% Ref: Donoho and Tsaig (2006). Fast solution of ell_1 norm minimization
% problems when the solution may be sparse.
if nargin < 5, standardize = true; end
if nargin < 4, t = Inf; end
if nargin < 3, option = 'lar'; end
if strcmpi(option, 'lasso'), lasso = 1; else, lasso = 0; end
eps = 1e-10; % Effective zero
[n,p] = size(X);
if standardize,
X = normalize(X);
Y = Y-mean(Y);
end
m = min(p,n-1); % Maximal number of variables in the final active set
T = length(t);
beta = zeros(1,p);
mu = zeros(n,1); % Mean vector
gamma = []; % LARS step lengths
A = [];
Ac = 1:p;
nVars = 0;
signOK = 1;
i = 0;
mu_old = zeros(n,1);
t_prev = 0;
beta_t = zeros(T,p);
ii = 1;
tt = t;
% LARS loop
while nVars < m,
i = i+1;
c = X'*(Y-mu); % Current correlation
C = max(abs(c)); % Maximal current absolute correlation
if C < eps || isempty(t), break; end % Early stopping criteria
if 1 == i, addVar = find(C==abs(c)); end
if signOK,
A = [A,addVar]; % Add one variable to active set
nVars = nVars+1;
end
s_A = sign(c(A));
Ac = setdiff(1:p,A); % Inactive set
nZeros = length(Ac);
X_A = X(:,A);
G_A = X_A'*X_A; % Gram matrix
invG_A = inv(G_A);
L_A = 1/sqrt(s_A'*invG_A*s_A);
w_A = L_A*invG_A*s_A; % Coefficients of equiangular vector u_A
u_A = X_A*w_A; % Equiangular vector
a = X'*u_A; % Angles between x_j and u_A
beta_tmp = zeros(p,1);
gammaTest = zeros(nZeros,2);
if nVars == m,
gamma(i) = C/L_A; % Move to the least squares projection
else
for j = 1:nZeros,
jj = Ac(j);
gammaTest(j,:) = [(C-c(jj))/(L_A-a(jj)), (C+c(jj))/(L_A+a(jj))];
end
[gamma(i) min_i min_j] = minplus(gammaTest);
addVar = unique(Ac(min_i));
end
beta_tmp(A) = beta(i,A)' + gamma(i)*w_A; % Update coefficient estimates
% Check the sign feasibility of lasso
if lasso,
signOK = 1;
gammaTest = -beta(i,A)'./w_A;
[gamma2 min_i min_j] = minplus(gammaTest);
if gamma2 < gamma(i), % The case when sign consistency gets violated
gamma(i) = gamma2;
beta_tmp(A) = beta(i,A)' + gamma(i)*w_A; % Correct the coefficients
beta_tmp(A(unique(min_i))) = 0;
A(unique(min_i)) = []; % Delete the zero-crossing variable (keep the ordering)
nVars = nVars-1;
signOK = 0;
end
end
if Inf ~= t(1),
t_now = norm(beta_tmp(A),1);
if t_prev < t(1) && t_now >= t(1),
beta_t(ii,A) = beta(i,A) + L_A*(t(1)-t_prev)*w_A'; % Compute coefficient estimates corresponding to a specific t
t(1) = [];
ii = ii+1;
end
t_prev = t_now;
end
mu = mu_old + gamma(i)*u_A; % Update mean vector
mu_old = mu;
beta = [beta; beta_tmp'];
end
if 1 < ii,
noCons = (tt > norm(beta_tmp,1));
if 0 < sum(noCons),
beta_t(noCons,:) = repmat(beta_tmp',sum(noCons),1);
end
beta = beta_t;
end
% Normalize columns of X to have mean zero and length one.
function sX = normalize(X)
[n,p] = size(X);
sX = X-repmat(mean(X),n,1);
sX = sX*diag(1./sqrt(ones(1,n)*sX.^2));
% Find the minimum and its index over the (strictly) positive part of X
% matrix
function [m, I, J] = minplus(X)
% Remove complex elements and reset to Inf
[I,J] = find(0~=imag(X));
for i = 1:length(I),
X(I(i),J(i)) = Inf;
end
X(X<=0) = Inf;
m = min(min(X));
[I,J] = find(X==m);
You can find more information in the related paper:
Efron, Bradley; Hastie, Trevor; Johnstone, Iain; Tibshirani, Robert. Least angle regression. Ann. Statist. 32 (2004), no. 2, 407--499. doi:10.1214/009053604000000067.
http://projecteuclid.org/euclid.aos/1083178935.
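Regarding the "undefined X" error itself: the file only defines the function, so X (the predictor matrix) and Y (the response vector) are inputs you have to supply when calling it. A usage sketch with made-up synthetic data, purely for illustration:
% Illustrative call of lars above; the data here is synthetic and arbitrary
n = 50; p = 10;
X = randn(n, p);                            % predictor matrix (n observations, p variables)
Y = 3*X(:,2) - 2*X(:,5) + 0.1*randn(n, 1);  % sparse linear response plus noise
[beta, A] = lars(X, Y, 'lasso');            % beta: coefficient path, A: variable entry order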

Matlab error: Undefined function 'dmod' for input arguments of type 'double'. How come there is no dmod function as it should come with Matlab?

I have been working on an exam and need to do an exercise involving frequency shift keying. That is why I use the dmod function from Matlab (it comes with Matlab). But when I write in my console
yfsk=dmod([1 0], 3, 0.5, 100,'fsk', 2, 1);
it gives me this:
Undefined function 'dmod' for input arguments of type 'double'.
I have also tried doc dmod and it says 'page not found' in the Matlab help window.
Do you know whether this is because I didn't install all the Matlab packages, or whether this function is not supported in Matlab 2012a?
Thank you
This is gonna be helpful:
function [y, t] = dmod(x, Fc, Fd, Fs, method, M, opt2, opt3)
%DMOD
%
%WARNING: This is an obsolete function and may be removed in the future.
% Please use MODEM.PAMMOD, MODEM.QAMMOD, MODEM.GENQAMMOD, FSKMOD,
% MODEM.PSKMOD, or MODEM.MSKMOD instead.
% Y = DMOD(X, Fc, Fd, Fs, METHOD...) modulates the message signal X
% with carrier frequency Fc (Hz) and symbol frequency Fd (Hz). The
% sample frequency of Y is Fs (Hz), where Fs > Fc and where Fs/Fd is
% a positive integer. For information about METHOD and subsequent
% parameters, and about using a specific modulation technique,
% type one of these commands at the MATLAB prompt:
%
% FOR DETAILS, TYPE MODULATION TECHNIQUE
% dmod ask % M-ary amplitude shift keying modulation
% dmod psk % M-ary phase shift keying modulation
% dmod qask % M-ary quadrature amplitude shift keying
% % modulation
% dmod fsk % M-ary frequency shift keying modulation
% dmod msk % Minimum shift keying modulation
%
% For baseband simulation, use DMODCE. To plot signal constellations,
% use MODMAP.
%
% See also DDEMOD, DMODCE, DDEMODCE, MODMAP, AMOD, ADEMOD.
% Copyright 1996-2007 The MathWorks, Inc.
% $Revision: 1.1.6.5 $ $Date: 2007/06/08 15:53:47 $
warnobsolete(mfilename, 'Please use MODEM.PAMMOD, MODEM.QAMMOD, MODEM.GENQAMMOD, FSKMOD, MODEM.PSKMOD, or MODEM.MSKMOD instead.');
swqaskenco = warning('off', 'comm:obsolete:qaskenco');
swapkconst = warning('off', 'comm:obsolete:apkconst');
swmodmap = warning('off', 'comm:obsolete:modmap');
swamod = warning('off', 'comm:obsolete:amod');
opt_pos = 6; % position of 1st optional parameter
if nargout > 0
y = []; t = [];
end
if nargin < 1
feval('help','dmod')
return;
elseif isstr(x)
method = lower(deblank(x));
if length(method) < 3
error('Invalid method option for DMOD.')
end
if nargin == 1
% help lines for individual modulation method.
addition = 'See also DDEMOD, DMODCE, DDEMODCE, MODMAP, AMOD, ADEMOD.';
if method(1:3) == 'qas'
callhelp('dmod.hlp', method(1:4), addition);
else
callhelp('dmod.hlp', method(1:3), addition);
end
else
% plot constellation, make a shift.
opt_pos = opt_pos - 3;
M = Fc;
if nargin >= opt_pos
opt2 = Fd;
else
modmap(method, M);
return;
end
if nargin >= opt_pos+1
opt3 = Fs;
else
modmap(method, M, opt2);
return;
end
modmap(method, M, opt2, opt3); % plot constellation
end
return;
end
if (nargin < 4)
error('Usage: Y = DMOD(X, Fc, Fd, Fs, METHOD, OPT1, OPT2, OPT3) for passband modulation');
elseif nargin < opt_pos-1
method = 'samp';
else
method = lower(method);
end
len_x = length(x);
if length(Fs) > 1
ini_phase = Fs(2);
Fs = Fs(1);
else
ini_phase = 0; % default initial phase
end
if ~isfinite(Fs) | ~isreal(Fs) | Fs<=0
error('Fs must be a positive number.');
elseif length(Fd)~=1 | ~isfinite(Fd) | ~isreal(Fd) | Fd<=0
error('Fd must be a positive number.');
else
FsDFd = Fs/Fd; % oversampling rate
if ceil(FsDFd) ~= FsDFd
error('Fs/Fd must be a positive integer.');
end
end
if length(Fc) ~= 1 | ~isfinite(Fc) | ~isreal(Fc) | Fc <= 0
error('Fc must be a positive number. For baseband modulation, use DMODCE.');
elseif Fs/Fc < 2
warning('Fs/Fc must be much larger than 2 for accurate simulation.');
end
% determine M
if isempty(findstr(method, '/arb')) & isempty(findstr(method, '/cir'))
if nargin < opt_pos
M = max(max(x)) + 1;
M = 2^(ceil(log(M)/log(2)));
M = max(2, M);
elseif length(M) ~= 1 | ~isfinite(M) | ~isreal(M) | M <= 0 | ceil(M) ~= M
error('Alphabet size M must be a positive integer.');
end
end
if isempty(x)
y = [];
return;
end
[r, c] = size(x);
if r == 1
x = x(:);
len_x = c;
else
len_x = r;
end
% expand x from Fd to Fs.
if isempty(findstr(method, '/nomap'))
if ~isreal(x) | all(ceil(x)~=x)
error('Elements of input X must be integers in [0, M-1].');
end
yy = [];
for i = 1 : size(x, 2)
tmp = x(:, ones(1, FsDFd)*i)';
yy = [yy tmp(:)];
end
x = yy;
clear yy tmp;
end
if strncmpi(method, 'ask', 3)
if isempty(findstr(method, '/nomap'))
% --- Check that the data does not exceed the limits defined by M
if (min(min(x)) < 0) | (max(max(x)) > (M-1))
error('An element in input X is outside the permitted range.');
end
y = (x - (M - 1) / 2 ) * 2 / (M - 1);
else
y = x;
end
[y, t] = amod(y, Fc, [Fs, ini_phase], 'amdsb-sc');
elseif strncmpi(method, 'fsk', 3)
if nargin < opt_pos + 1
Tone = Fd;
else
Tone = opt2;
end
if (min(min(x)) < 0) | (max(max(x)) > (M-1))
error('An element in input X is outside the permitted range.');
end
[len_y, wid_y] = size(x);
t = (0:1/Fs:((len_y-1)/Fs))'; % column vector with all the time samples
t = t(:, ones(1, wid_y)); % replicate time vector for multi-channel operation
osc_freqs = pi*[-(M-1):2:(M-1)]*Tone;
osc_output = (0:1/Fs:((len_y-1)/Fs))'*osc_freqs;
mod_phase = zeros(size(x))+ini_phase;
for index = 1:M
mod_phase = mod_phase + (osc_output(:,index)*ones(1,wid_y)).*(x==index-1);
end
y = cos(2*pi*Fc*t+mod_phase);
elseif strncmpi(method, 'psk', 3)
% PSK is a special case of QASK.
[len_y, wid_y] = size(x);
t = (0:1/Fs:((len_y-1)/Fs))';
if findstr(method, '/nomap')
y = dmod(x, Fc, Fs, [Fs, ini_phase], 'qask/cir/nomap', M);
else
y = dmod(x, Fc, Fs, [Fs, ini_phase], 'qask/cir', M);
end
elseif strncmpi(method, 'msk', 3)
M = 2;
Tone = Fd/2;
if isempty(findstr(method, '/nomap'))
% Check that the data is binary
if (min(min(x)) < 0) | (max(max(x)) > (1))
error('An element in input X is outside the permitted range.');
end
x = (x-1/2) * Tone;
end
[len_y, wid_y] = size(x);
t = (0:1/Fs:((len_y-1)/Fs))'; % column vector with all the time samples
t = t(:, ones(1, wid_y)); % replicate time vector for multi-channel operation
x = 2 * pi * x / Fs; % scale the input frequency vector by the sampling frequency to find the incremental phase
x = [0; x(1:end-1)];
y = cos(2*pi*Fc*t+cumsum(x)+ini_phase);
elseif (strncmpi(method, 'qask', 4) | strncmpi(method, 'qam', 3) |...
strncmpi(method, 'qsk', 3) )
if findstr(method,'nomap')
[y, t] = amod(x, Fc, [Fs, ini_phase], 'qam');
else
if findstr(method, '/ar') % arbitrary constellation
if nargin < opt_pos + 1
error('Incorrect format for METHOD=''qask/arbitrary''.');
end
I = M;
Q = opt2;
M = length(I);
% leave to the end for processing
CMPLEX = I + j*Q;
elseif findstr(method, '/ci') % circular constellation
if nargin < opt_pos
error('Incorrect format for METHOD=''qask/circle''.');
end
NIC = M;
M = length(NIC);
if nargin < opt_pos+1
AIC = [1 : M];
else
AIC = opt2;
end
if nargin < opt_pos + 2
PIC = NIC * 0;
else
PIC = opt3;
end
CMPLEX = apkconst(NIC, AIC, PIC);
M = sum(NIC);
else % square constellation
[I, Q] = qaskenco(M);
CMPLEX = I + j * Q;
end
y = [];
x = x + 1;
% --- Check that the data does not exceed the limits defined by M
if (min(min(x)) < 1) | (max(max(x)) > M)
error('An element in input X is outside the permitted range.');
end
for i = 1 : size(x, 2)
tmp = CMPLEX(x(:, i));
y = [y tmp(:)];
end
ind_y = [1: size(y, 2)]';
ind_y = [ind_y, ind_y+size(y, 2)]';
ind_y = ind_y(:);
y = [real(y) imag(y)];
y = y(:, ind_y);
[y, t] = amod(y, Fc, [Fs, ini_phase], 'qam');
end
elseif strncmpi(method, 'samp', 4)
% This is for converting an input signal from sampling frequency Fd
% to sampling frequency Fs.
[len_y, wid_y] = size(x);
t = (0:1/Fs:((len_y-1)/Fs))';
y = x;
else % invalid method
error(sprintf(['You have used an invalid method.\n',...
'The method should be one of the following strings:\n',...
'\t''ask'' Amplitude shift keying modulation;\n',...
'\t''psk'' Phase shift keying modulation;\n',...
'\t''qask'' Quadrature amplitude shift-keying modulation, square constellation;\n',...
'\t''qask/cir'' Quadrature amplitude shift-keying modulation, circle constellation;\n',...
'\t''qask/arb'' Quadrature amplitude shift-keying modulation, user defined constellation;\n',...
'\t''fsk'' Frequency shift keying modulation;\n',...
'\t''msk'' Minimum shift keying modulation.']));
end
if r==1 & ~isempty(y)
y = y.';
end
warning(swqaskenco);
warning(swapkconst);
warning(swmodmap);
warning(swamod);
% [EOF]
I do believe that the dmod function (Communications Toolbox) has been removed from the latest MATLAB releases.
It looks like dmod is no more :-)
Here is the link that says it was removed:
http://www.mathworks.com/help/comm/release-notes.html?searchHighlight=dmod
It should be replaced with the comm.FSKModulator System object.
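For the specific call in the question, a hedged sketch of what an fskmod-based replacement might look like (the parameter mapping below is my own reading of the old dmod arguments, and note that fskmod returns a complex baseband signal rather than the real passband cosine dmod produced):
% Rough fskmod counterpart of dmod([1 0], 3, 0.5, 100, 'fsk', 2, 1); mapping is assumed
x     = [1 0];   % symbols in [0, M-1]
M     = 2;       % alphabet size (old 6th argument)
fsep  = 1;       % tone separation in Hz (old 7th argument)
Fd    = 0.5;     % symbol rate (old 3rd argument)
Fs    = 100;     % sampling frequency (old 4th argument)
nsamp = Fs/Fd;   % samples per symbol
y = fskmod(x, M, fsep, nsamp, Fs);   % complex baseband FSK (Communications Toolbox)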
I think you may be looking for the demodulation function (demod) from the signal processing toolbox.
http://www.mathworks.com.au/help/signal/ref/demod.html