I'm trying to simulate QAM with arbitrary symbol mapping.
The Matlab help page says I just need to use the format: y = qammod(x,M,symOrder).
M = 16; k = log2(M); n = 10000;
dataIn = randi([0 1],n,1);
dataInMatrix = reshape(dataIn,length(dataIn)/k,k);
dataSymbolsIn = bi2de(dataInMatrix);
vec = randperm(16)-1;
dataMod = qammod(dataSymbolsIn,M,0,vec);
Matlab returns the error 'Invalid symbol set ordering.'
I'm wondering if this is a problem with the Matlab version, since the help page is for 2016a but the school computer has 2015b.
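For reference, the call that the quoted help-page format describes would look like the sketch below; whether R2015b accepts it is exactly what is in question here:

M   = 16;
vec = randperm(M) - 1;                    % custom symbol order, one value per symbol
dataMod = qammod(dataSymbolsIn, M, vec);  % three-argument form from the R2016a help page
% Older releases appear to document only 'bin'/'gray' for the symbol-order argument,
% which would explain the 'Invalid symbol set ordering.' error when a vector is passed.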
I'm using the following code from the article "Computed tomography-based volumetric tool for standardized measurement of the maxillary sinus" to measure the volume of the maxillary sinus in DICOM images:
% Read images
clear all
close all
[filename, pathname] = uigetfile('*','Select CT exam (all slices)','MultiSelect','on');
num  = length(filename);
bbbb = 1;
step = input('Step of image reading: \')
for aaaa = 1:step:num
    xinfo = dicominfo([pathname,char(filename(aaaa))]);
    pxsp  = cat(2,xinfo.PixelSpacing);
    x = dicomread([pathname,char(filename(aaaa))]) + cat(2,xinfo.RescaleIntercept);
    k = x;
    k = im2bw(k,0.49);
    k = imfill(k,'holes');
    cc = bwconncomp(k);
    stats = regionprops(cc,'Area');
    A = [stats.Area];
    [~,biggest] = max(A);
    k(labelmatrix(cc)~=biggest) = 0;
    x(k~=1) = -2000;
    masccranio(:,:,bbbb) = k;
    cranio(:,:,bbbb) = x;
    cranio_full(:,:,bbbb) = x;
    bbbb = bbbb+1;
end
First of all, we have no idea what value to enter for the image-reading step requested at the beginning; please help with that if you can. Our second problem is that when we run the code we get the following error:
Error using ~=
Matrix dimensions must agree.
Error in Quant (line 32)
k(labelmatrix(cc)~=biggest) = 0;
I'm using Matlab 2019b, and as far as I know this code was written for 2013. Any help is appreciated.
I expect you want biggest to be the index of the largest region area in this line:
k(labelmatrix(cc)~=biggest) = 0;
In that case, your problem is likely that A is not a vector, and therefore biggest is not a scalar. That makes the operation fail, because the ~= operator only works on arrays of the same size or with a scalar.
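A quick way to confirm this (a diagnostic sketch reusing the variables from the question, not part of the original answer) is to inspect the offending slice:

cc    = bwconncomp(k);
stats = regionprops(cc,'Area');
A     = [stats.Area]         % expected: a 1-by-N row vector of region areas
[~,biggest] = max(A)         % expected: a scalar index
% If no region is found, A and biggest come back empty and the
% comparison k(labelmatrix(cc)~=biggest) on the next line fails.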
I want to study LDPC and I want to simulate a program. This program will apply LDPC encoding to a randomly generated binary array of 32400 bits, modulate it with 16-QAM, add noise for SNR = 20 dB, demodulate the 16-QAM signal, and finally decode it using LDPC. When I run the program and check the BER, I get an error rate of around 90%, which is definitely not correct. Can you help me?
clear all
clc
M = 16;
SNR = 20;
ldpcEncoder = comm.LDPCEncoder(dvbs2ldpc(1/2));
ldpcDecoder = comm.LDPCDecoder(dvbs2ldpc(1/2));
data = randi([0 1],32400,1);
newData = ldpcEncoder(data);
a = qammod(newData,M,'InputType','bit');
b = awgn(a,SNR,'measured');
c = qamdemod(b,M,'OutputType','bit');
result = ldpcDecoder(c);
error = biterr(data,result)/length(data)
The LDPC decoder object expects an input with "soft" bits (log-likelihood ratio), whereas you are feeding it with "hard", unipolar bits. So, replace the line
c = qamdemod(b,M,'OutputType','bit');
by
c = qamdemod(b,M,'OutputType','llr');
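For reference, a minimal corrected version of the whole simulation (same parameters as in the question, only the demodulator output type changed) might look like:

M = 16;
SNR = 20;                                   % in dB
ldpcEncoder = comm.LDPCEncoder(dvbs2ldpc(1/2));
ldpcDecoder = comm.LDPCDecoder(dvbs2ldpc(1/2));
data    = randi([0 1],32400,1);
newData = ldpcEncoder(data);
a = qammod(newData,M,'InputType','bit');
b = awgn(a,SNR,'measured');
c = qamdemod(b,M,'OutputType','llr');       % soft bits (LLRs) for the decoder
result = ldpcDecoder(c);
ber = biterr(data,result)/length(data)

If you want properly scaled LLRs, qamdemod also accepts a 'NoiseVariance' name-value argument; at 20 dB SNR the default scaling should already be enough for the decoder to converge.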
After getting a basic understanding of the GPML toolbox, I wrote my first code using these tools. I have a data matrix named data consisting of two columns of values, 1000 values in total. I want to use this matrix to estimate the GP value using the GPML toolbox. I have written my code as follows:
x = data(1:200,1); %training inputs
Y = data(1:201,2); %, training targets
Ys = data(201:400,2);
Xs = data(201:400,1); %possibly test cases
covfunc = {@covSE, 3};
ell = 1/4; sf = 1;
hyp.cov = log([ell; sf]);
likfunc = @likGauss;
sn = 0.1;
hyp.lik = log(sn);
[ymu ys2 fmu fs2] = gp(hyp, @infExact, [], covfunc, likfunc, X, Y, Xs, Ys);
plot(Xs, fmu);
But when I run this code I get:
Error using covMaha (line 58) Parameter mode is either 'eye', 'iso',
'ard', 'proj', 'fact', or 'vlen'
Please, if possible, help me figure out where I am making a mistake.
I know this is way late, but I just ran into this myself. The way to fix it is to change
covfunc = {@covSE, 3};
to something like
covfunc = {@covSE, 'iso'};
It doesn't have to be 'iso', it can be any of the options listed in the error message. Just make sure your hyperparameters are set correctly for the specific mode you choose. This is detailed more in the covMaha.m file in GPML.
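Putting it together, a minimal sketch of a corrected call (assuming data is the two-column matrix described in the question, with the x/X case mismatch also fixed) would be:

X  = data(1:200,1);    Y  = data(1:200,2);    % training inputs and targets
Xs = data(201:400,1);  Ys = data(201:400,2);  % test inputs and targets

covfunc = {@covSE, 'iso'};        % isotropic squared-exponential covariance
ell = 1/4; sf = 1;
hyp.cov = log([ell; sf]);         % exactly two hyperparameters, as 'iso' expects
likfunc = @likGauss;
hyp.lik = log(0.1);

[ymu, ys2, fmu, fs2] = gp(hyp, @infExact, [], covfunc, likfunc, X, Y, Xs, Ys);
plot(Xs, fmu, '.');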
I have a feature vector of 13 dimensions for m samples. I am trying to find k nearest neighbors of each sample.
I have selected the feature vector required.
[m,~] = size(featurevector);
X = [];
Y = [];
for i = 1:m-1
    if (featurevector(i,14) == 3)
        X = [X; featurevector(i,1:13)];
        Y = [Y; featurevector(i,1:13)];
    end
end
I have tried calculating the distances for each point
d = pdist2(X,Y,'euclidean');
It works fine till here. Now I wanted to find the k = 3 nearest neighbor indices of each and every sample, so I tried
[idx,dist] = knnsearch(X,Y,'k',3,'distance','euclidean');
but it shows an error
Error using pdist2
Too many input arguments.
Error in ExhaustiveSearcher/knnsearch (line 207)
[dist,idx] = pdist2(obj.X,Y, distMetric, arg{:}, 'smallest',numNN);
Error in knnsearch (line 144)
[idx, dist] = knnsearch(O,Y,'k',numNN, 'includeties',includeTies);
I have tried the example mentioned in the help; it works fine, but I am not able to do the same when the number of samples is 630.
Did my coding go wrong?
I am using Matlab 2015a.
Running the command
which -all pdist2
gives
c:\toolbox\classify\pdist2.m
C:\Program Files (x86)\MATLAB\MATLAB Production Server\R2015a\toolbox\stats\stats\pdist2.m   % Shadowed
any help appreciated !
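A hedged guess based on the which -all output above (not confirmed in the post): the copy of pdist2 in c:\toolbox\classify is shadowing the Statistics Toolbox version, and that older copy does not accept the extra arguments knnsearch passes internally, hence "Too many input arguments". Removing that folder from the path for the session and retrying would test this:

rmpath('c:\toolbox\classify');     % drop the shadowing copy for this session
which -all pdist2                  % should now list only the toolbox\stats\stats file
[idx,dist] = knnsearch(X,Y,'k',3,'distance','euclidean');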
I have trained a neural network in Matlab (Using the neural network toolbox). Now I would like to export the calculated weights and biases to another platform (PHP) in order to make calculations with them. Is there a way to create a function or equation to do this?
I found this related question: Equation that compute a Neural Network in Matlab.
Is there a way to do what I want and port the results of my NN (29 inputs, 10 hidden layers, 1 output) to PHP?
Yes, the net properties also referenced in the other question are simple matrices:
W1=net.IW{1,1};
W2=net.LW{2,1};
b1=net.b{1,1};
b2=net.b{2,1};
So you can write them to a file, say, as comma-separated values.
csvwrite('W1.csv',W1)
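The remaining matrices can be exported the same way (a small addition for completeness):

csvwrite('W2.csv',W2)
csvwrite('b1.csv',b1)
csvwrite('b2.csv',b2)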
Then, in PHP read this data and convert or use it as you like.
<?php
$W1 = [];
if (($handle = fopen("W1.csv", "r")) !== FALSE) {
    while (($row = fgetcsv($handle, 1000, ",")) !== FALSE) {
        $W1[] = array_map('floatval', $row);   // one row of the weight matrix
    }
    fclose($handle);
}
?>
Then, to process the weights, you can use the formula from the other question by replacing the tansig function, which is calculated according to:
a = 2/(1+exp(-2*n)) - 1
This is mathematically equivalent to tanh(n), which exists in PHP as well.
source: http://dali.feld.cvut.cz/ucebna/matlab/toolbox/nnet/tansig.html
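A quick numerical sanity check of that equivalence (a throwaway sketch, not part of the original answer):

n = linspace(-5,5,101);
max(abs((2./(1+exp(-2*n)) - 1) - tanh(n)))   % on the order of 1e-16, i.e. equal up to rounding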
Transferring all of these is pretty trivial. You will need to:
First, write the code for matrix multiplication, which is a pretty simple pair of nested for loops.
Second, observe that according to the Matlab documentation, tansig(n) = 2/(1+exp(-2*n))-1. I'm pretty sure PHP has exp (and if not, it has a pretty simple polynomial expansion you can write yourself).
Third, read, understand and apply Jasper van den Bosch's excellent answer to your question.
Hence the solution becomes (after correcting all the wrong parts):
Here I am giving a solution in Matlab, but if you have a tanh() function, you can easily convert it to any programming language. For PHP, the tanh() function exists. The code below just shows the fields of the network object and the operations you need.
Assume you have a trained ANN (network object) that you want to export, and that its name is trained_ann.
Here is the script for exporting and testing. The testing script compares the original network's result with the my_ann_evaluation() result.
% Export IT
exported_ann_structure = my_ann_exporter(trained_ann);
% Run and Compare
% Works only for single INPUT vector
% Please extend it to MATRIX version by yourself
input = [12 3 5 100];
res1 = trained_ann(input')';
res2 = my_ann_evaluation(exported_ann_structure, input')';
where you need the following two functions
First my_ann_exporter:
function [ my_ann_structure ] = my_ann_exporter(trained_netw)
% Just for extracting as Structure object
my_ann_structure.input_ymax = trained_netw.inputs{1}.processSettings{1}.ymax;
my_ann_structure.input_ymin = trained_netw.inputs{1}.processSettings{1}.ymin;
my_ann_structure.input_xmax = trained_netw.inputs{1}.processSettings{1}.xmax;
my_ann_structure.input_xmin = trained_netw.inputs{1}.processSettings{1}.xmin;
my_ann_structure.IW = trained_netw.IW{1};
my_ann_structure.b1 = trained_netw.b{1};
my_ann_structure.LW = trained_netw.LW{2};
my_ann_structure.b2 = trained_netw.b{2};
my_ann_structure.output_ymax = trained_netw.outputs{2}.processSettings{1}.ymax;
my_ann_structure.output_ymin = trained_netw.outputs{2}.processSettings{1}.ymin;
my_ann_structure.output_xmax = trained_netw.outputs{2}.processSettings{1}.xmax;
my_ann_structure.output_xmin = trained_netw.outputs{2}.processSettings{1}.xmin;
end
Second my_ann_evaluation:
function [ res ] = my_ann_evaluation(my_ann_structure, input)
% Works with only single INPUT vector
% Matrix version can be implemented
ymax = my_ann_structure.input_ymax;
ymin = my_ann_structure.input_ymin;
xmax = my_ann_structure.input_xmax;
xmin = my_ann_structure.input_xmin;
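% mapminmax pre-processing: scale each input from [xmin, xmax] to [ymin, ymax]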
input_preprocessed = (ymax-ymin) * (input-xmin) ./ (xmax-xmin) + ymin;
% Pass it through the ANN matrix multiplication
y1 = tanh(my_ann_structure.IW * input_preprocessed + my_ann_structure.b1);
y2 = my_ann_structure.LW * y1 + my_ann_structure.b2;
ymax = my_ann_structure.output_ymax;
ymin = my_ann_structure.output_ymin;
xmax = my_ann_structure.output_xmax;
xmin = my_ann_structure.output_xmin;
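% inverse mapminmax post-processing: map the network output back to [xmin, xmax]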
res = (y2-ymin) .* (xmax-xmin) /(ymax-ymin) + xmin;
end