I'm trying hard to understand some code for a deep-learning-based NOMA system implemented in MATLAB. I'm very new to MATLAB coding, but I need to understand this entire code for a school project, and I'm struggling.
I think that, for now, I don't need to know how the mathematical formulas work; the focus is on what the code is doing and its flow.
This is the part of the trainData.m file that I'm struggling with right now.
Why are the pilot symbols calculated and then replaced right after?
Why is subcarrier idx_sc (20) the one selected to be replaced? What is its significance? Is it the only subcarrier used for training the DL model, and if so, why only that one?
This portion of the code in the picture is labeled "generate training data for each class". From my understanding, it generates OFDM packets for each label, simulates the transmission and reception, and then extracts the features and labels for each of the 16 classes. Is that correct?
The code and all relevant function files can be found at the link below.
Please help me understand the code! Much thanks!
https://www.mathworks.com/matlabcentral/fileexchange/75478-deep-learning-for-signal-detection-in-noma-systems
To get you started: in line 91 the code initializes the entire variable to zeros. The subsequent lines (92-96) just replace pieces of that variable based on the indexing inside the "(…)".
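As a hypothetical illustration of that pattern (the variable name and sizes below are made up, not the actual ones from trainData.m):
pilot = zeros(1, 64);        % "line 91" style: the whole vector starts out as zeros
pilot(1:4:end) = 1 + 1j;     % "lines 92-96" style: only the indexed positions get overwritten
pilot(20) = -1 - 1j;         % a single entry, e.g. a subcarrier index like idx_sc = 20, can be replaced too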
I have been researching how to program image processing for counting objects, and I found the following homepage about using MATLAB for cell counting.
I am not familiar with MATLAB, but I found their ideas interesting. Reading the page, I see that they use a BlobAnalysis object to find the centroids of the segmented cells (you can see this in the image on the linked page).
Now, this seems very interesting, but I wonder what this operation is actually doing (please don't just give me the definition from the docs, "it finds the properties of the blobs": how? Is it segmenting them? Separating them?). As far as I can see, either you separate the cells first (that is the whole point of counting; I am doing this in another program using watershedding) and finding the centroids is just something added on and not so important, OR this blob analysis is somehow doing some interesting segmentation itself that I would like to know about.
Anyone familiar with this can give me some pointers or advice here?
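For reference, the usual vision.BlobAnalysis workflow looks roughly like this (a sketch only; it assumes the Computer Vision and Image Processing Toolboxes, and the thresholding step and variable names are my own illustration, not the linked page's exact code):
img = imread('coins.png');                    % any grayscale image with bright objects
bw  = imbinarize(img);                        % segmentation happens HERE, before the blob analysis
bw  = imfill(bw, 'holes');
blob = vision.BlobAnalysis('AreaOutputPort', true, ...
                           'CentroidOutputPort', true, ...
                           'BoundingBoxOutputPort', false);
[areas, centroids] = step(blob, bw);          % measures each connected component in the binary mask
numCells = size(centroids, 1)                 % one row per blob, i.e. a simple object count
In other words, the blob analysis object does not separate touching cells itself; it only measures the connected components of whatever binary mask you hand it, so cells that touch are counted as one blob unless you split them first (e.g. with watershedding).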
Sorry for the noob question; I have never worked with MATLAB or signal processing before.
Here is what I want to do: I have a byte array X of fixed length, and I want to encode it into a sound file. I also want the process to be reversible, meaning the sound can be converted back to X with no errors. I searched online and found the following code:
M = 16;                      % 16-QAM: each symbol carries log2(16) = 4 bits
x = randi([0 M-1], 5000, 1); % 5000 random symbols; randint from the original snippet is removed in newer MATLAB
y = qammod(x, M);            % map the symbols to complex constellation points (replaces modem.qammod)
My question is: is QAM the best way to do this, and how do I use it? A little code example would be really appreciated. Thank you!
Update #1: I tried to output y with sound(y), but MATLAB does not allow me to; it says it can only output real floating-point numbers. How can I solve this? Thank you!
If you need to transmit over the air, you have quite a lot of work in front of you, I think. The most difficult problem to solve in a telecommunications system is often synchronization, meaning that your receiver has to know where the QAM symbols are placed in time. This is not easy. If you choose to go ahead, I agree with mtrw that you should try dsp.stackexchange.com.
Try, for example, to imagine a simple modulation scheme where each bit is converted to a short piece of sine wave whose frequency depends on whether the bit is one or zero. How would you go about decoding this on the receiver end? You need to detect the onset of the first bit and keep a self-maintaining clock running on the receiver to find the bits even when they do not change, i.e. a PLL (Phase-Locked Loop). This could possibly be made easier by using Manchester coding, but you would still have to do quite a lot to get it running.
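To make the transmit side concrete, here is a rough MATLAB sketch of such a two-tone (FSK-style) modulator; the bit rate, tone frequencies, and sample rate are arbitrary values chosen just for illustration. Generating this is the easy part; recovering the bits reliably is where the synchronization work comes in.
Fs = 8000;  Rb = 100;             % sample rate [Hz] and bit rate [bits/s], arbitrary
f0 = 1000;  f1 = 2000;            % tone used for a 0 bit and for a 1 bit
bits = randi([0 1], 1, 50);       % some example bits to send
spb = Fs / Rb;                    % samples per bit
t = (0:spb-1) / Fs;               % time axis for a single bit
sig = [];
for b = bits
    f = f0 + b*(f1 - f0);         % pick the tone for this bit
    sig = [sig, sin(2*pi*f*t)];   % append one bit's worth of sine (fine for a short demo)
end
sound(sig, Fs);                   % note: sound() needs real samples, which this already is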
As you can see, there are no easy solutions once you leave the safe MATLAB harbor :-)
Best regards
I just read a fascinating paper: http://www.psy.cmu.edu/~ckemp/Papers/kempt08.pdf
In my opinion it takes the whole area of machine learning to a completely new level, because it flexibly discovers the structure of the data (rather than only trying to find the best fit to an already assumed structure).
The code is also available: http://www.psy.cmu.edu/~ckemp/code/formdiscovery.html
I tried to do a few experiments of my own, but unfortunately I don't have MATLAB. I tried it with Octave, but it only produced all kinds of error messages that I don't understand (I am no expert with these programs).
Could anybody perhaps take a quick look at whether these problems can be solved easily (or at all)? Perhaps the solution is an easy one (that is my hope, after all).
This would really be a big help! I am very much looking forward to trying a few data sets of my own.
I didn't look into the source code, but if you are going to convert it, these links might help:
Porting programs from Matlab to Octave
Differences between Octave and MATLAB
I was recently looking for an answer to the same question and found this old post. Just to add my two cents for anyone else looking...
There is an Octave library called the Missing Function Library, which "finds functions that are in Matlab but not in Octave". There are also a bunch of great packages (read: "toolboxes") for Octave on SourceForge. Hope this helps!
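As an aside, installing and loading one of those Octave Forge packages looks roughly like this (the statistics package is just an example; any Forge package name works the same way):
pkg install -forge statistics   % run inside Octave: downloads and builds the package (needs internet access)
pkg load statistics             % makes the package's functions available in the current session
pkg list                        % shows which packages are installed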
Although many of you will have a decent idea of what I'm aiming at just from reading the title, allow me a simple introduction all the same.
I have a Fortran program; it consists of a main program, some internal subroutines, 7 modules with their own procedures, and... that's it.
Without going into much detail, since I don't think it's necessary at this point: what would be the easiest way to use MATLAB's plotting features (mainly plot(x,y) with some customizations) as an interactive part of my program? For now I'm using some of my own custom plotting routines (based on HPGL and Calcomp routines), but as an exercise I'd like to see where this could go and how it would work (is what I'm suggesting even possible?). Also, how much effort would it take on my part?
I know this subject has been covered rather extensively in many tutorials on the net, but for some reason I have trouble finding really simple yet illustrative introductory ones. So if anyone can post an example or two, simple ones, I'd be really grateful. Or just take me by the hand and guide me through one working example.
Platform: IVF 11.something :) on Windows XP SP2, MATLAB 2008b
The easiest way would be to have your Fortran program write to file, and have your Matlab program read those files for the information you want to plot. I do most of my number-crunching on Linux, so I'm not entirely sure how Windows handles one process writing a file and another reading it at the same time.
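For the file-based route, the MATLAB side can be as small as this (assuming the Fortran program writes a plain two-column text file; the file name here is made up):
data = load('results.dat');        % whitespace-separated rows of "x  y" written by the Fortran program
plot(data(:,1), data(:,2), '-o');  % the plot(x,y) call you mentioned, with markers at the data points
xlabel('x'); ylabel('y'); grid on;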
That's a bit of a kludge though, so you might want to think about using Matlab to call the Fortran program (or parts of it) and get data directly for plotting. In this case you'll want to investigate Creating Fortran MEX Files in the Matlab documentation. This is relatively straightforward to do and would serve your needs if you were happy to use Matlab to drive the process and Fortran to act as a compute service. I'd look in the examples distributed with Matlab for simple Fortran MEX files.
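To get a feel for how small a Fortran MEX file can be, MATLAB ships a tiny Fortran example called timestwo.F; building and calling it from the MATLAB prompt looks roughly like this (the exact path varies by release, and a supported Fortran compiler has to be configured first via mex -setup):
copyfile(fullfile(matlabroot, 'extern', 'examples', 'refbook', 'timestwo.F'), pwd);
mex timestwo.F          % compiles the Fortran source into a MEX function
y = timestwo(3)         % calls the compiled Fortran routine from MATLAB; returns 6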
Finally, you could call MATLAB from your Fortran program; search the documentation for "Calling the MATLAB Engine". It's a little more difficult for me to see how this would fit your needs, and it's not something I'm terribly familiar with.
If you post again with more detail I may be able to provide more specific tips, but you should probably start rolling up your sleeves and diving into MEX files.
Continuing the discussion of DISLIN as a solution, with an answer that won't fit into a comment...
@M. S. B.: hello. I apologize for writing in your answer, but these comments are much too short, and answering an answer to a question with yet another answer is... anyway...
There is the Quick Plot feature of DISLIN: the routine QPLOT needs only three arguments to plot a curve: the X array, the Y array, and the number of points N. See Chapter 16 of the manual. Beyond that, only a few additional calls are needed to select the output device and label the axes. I haven't used it, so I don't know how good the auto-scaling is.
Yes, I know of Quickplot and its related routines, but it is too rigid for my needs (I cannot change anything), and yes, its auto-scaling is somewhat quirky. Also, the margins inside the graph are too big.
Or, if you want to use the power of GRAF to set up your graph box, there is the subroutine GAXPAR to automatically generate recommended values. Passing -2 as the first argument to LABDIG automatically determines the number of digits in the tick-mark labels.
Have you tried the routines?
Sorry, I cannot find the GAXPAR routine you're referring to in DISLIN's index. Are you sure it is called exactly that?
Reply by M. S. B.: Yes, I am sure about the spelling of GAXPAR. It is the last routine in Chapter 4 of the DISLIN 9.5 PDF manual. Perhaps it is a new routine? There is also another path to automatic scaling: SETSCL; see Chapter 6.
So far, what I've been doing (apart from some "duct tape" solutions) is:
use dislin
implicit none
real, dimension(5) :: &
    x = [.5, 2., 3., 4., 5.], &
    y = [10., 22., 34., 43., 15.]
real :: xa, xe, xor, xstp, &    ! axis limits and steps; their values are
        ya, ye, yor, ystp       ! recalculated by SETSCL below

call setpag('da4p')             ! DIN A4 page, portrait
call metafl('xwin')             ! send output to an on-screen window
call disini()                   ! initialise DISLIN
call winkey('return')           ! close the window with the Return key
call setscl(x, size(x), 'x')    ! auto-scale both axes from the data
call setscl(y, size(y), 'y')
call axslen(1680, 2376)         ! (8/10)*2100 and 2970, respectively
call setgrf('name', 'name', 'line', 'line')
call incmrk(1)                  ! draw a symbol at every data point as well as the line
call hsymbl(3)                  ! symbol size
call graf(xa, xe, xor, xstp, ya, ye, yor, ystp)   ! limits here are overridden by SETSCL
call curve(x, y, size(x))
call disfin()
end
which will put the extreme values right on the axis. Do you know perhaps how I could get one "major tick margin" on the outside, so as to leave some space between the curve and the axes (while still keeping SETSCL's effects)?
Even if you don't like the built-in auto-scaling, if you are already using DISLIN, rolling your own auto-scaling will be easier than calling Fortran from MATLAB. You can use the Fortran intrinsic functions minval and maxval to find the smallest and largest values in the data, then write a subroutine to round outwards to "nice" round values, and similarly a subroutine to decide on the tick-mark spacing.
This is actually not so easy to accomplish (and ideas proving me wrong will be gladly appreciated). Or should I say, it is easy if you know the rough range in which your values will lie. But if you don't, and you don't know whether your values will lie in the range of 13-34 or of 1330-3440, then ...
... if I'm on the wrong track completely here, please explain if you meant something different. My English is somewhat lacking, so I can only hope the above is understandable.
Inside a subroutine that determines round graph start/end values, you could scale the actual min/max values to always lie between 1 and 10, pick nice round values from a small table, then unscale back to the correct range.
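A rough sketch of that idea (written in MATLAB here only because this thread mixes both languages; the same few steps map directly onto Fortran using minval/maxval, log10, floor and ceiling):
data = [13 21 34 28];                  % works the same whether the values are 13-34 or 1330-3440
lo = min(data);   hi = max(data);
raw  = (hi - lo) / 5;                  % aim for roughly 5 tick intervals
mag  = 10^floor(log10(raw));           % power of ten that brings raw into [1, 10)
nice = [1 2 2.5 5 10];                 % table of acceptable step mantissas
step = mag * nice(find(nice >= raw/mag, 1));
axlo = step * floor(lo/step);          % round the limits outwards to multiples of the step
axhi = step * ceil(hi/step);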
Dump MATLAB: it's proprietary, expensive, bloated/slow, and its code is not easy to parallelize.
What you should do instead is use something along the lines of DISLIN, PLplot, GINO, gnuplotfortran, etc.