Related to SOM Toolbox - MATLAB

I need help with the SOM Toolbox. Here is the question:
"We have 100 input records (which belong to two classes). How do we know from the SOM Toolbox output whether a particular input, say number 35, is clustered as class 1 or class 2? Can we finalize that from the U-matrix, or is there some other file? We want to work out the accuracy from the SOM output."
Please help me if you know the answer; I am waiting for your suggestions.

You can specify the number of data samples to be trained on, or you can stop the training once your limit is reached. You can watch the changes on the mesh live as the data are being processed in the SOM Toolbox in MATLAB.
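To assign each record to a class and compute an accuracy, a common approach is to label the trained map from your known class labels and then read off the label of each record's best-matching unit. Here is a minimal sketch, assuming the standard SOM Toolbox functions som_data_struct, som_make, som_autolabel and som_bmus are on the path; D and trueLabels are placeholder names for your 100 x dim data matrix and your known class labels (as strings):
% D:          100 x dim matrix of input records
% trueLabels: 100 x 1 cell array of class labels as strings, e.g. {'1'; '2'; ...}
sData = som_data_struct(D, 'labels', trueLabels);   % attach the known labels to the data
sMap  = som_make(sData);                            % train the map with default settings
sMap  = som_autolabel(sMap, sData, 'vote');         % each map unit gets the majority class of its hits
bmus  = som_bmus(sMap, sData);                      % best-matching unit for every record
classOf35 = sMap.labels{bmus(35), 1}                % class assigned to input record number 35
predicted = sMap.labels(bmus, 1);                   % predicted class for every record
accuracy  = mean(strcmp(predicted, trueLabels))     % fraction of records whose BMU label matches the true class
The U-matrix on its own only shows cluster boundaries; a labelling step like the one above is what ties map units back to your two classes.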

Related

Understanding code for a deep learning NOMA system in MATLAB

I'm trying very hard to understand this code for a deep learning-based NOMA system implemented in MATLAB. I am really new to MATLAB coding, but I really need to understand this entire code as it will help with my school project, and I am struggling.
I think as of right now I do not need to know how the mathematical formulas work, but instead, the focus is on what the code is doing and its flow.
This is part of the code in the trainData.m file that I am struggling with right now
Why are the pilot symbols calculated and then replaced right after?
Why is the idx_sc (20) selected to be replaced? What is its significance? Is it the only subcarrier selected for the training of the DL model? Why only that?
This portion of the code in the picture is labeled "generate training data for each class". From my understanding, it is generating OFDM packets for each label, simulating the transmission and reception, and then getting the features and labels for each of the 16 classes. Is that correct?
The code and all relevant function files can be found in the link below.
Please help me understand the code! Many thanks!
https://www.mathworks.com/matlabcentral/fileexchange/75478-deep-learning-for-signal-detection-in-noma-systems
To get you started: in line 91 the code initializes the entire variable to 0. The subsequent lines (92-96) just replace pieces of that variable based on the indexing inside the "(...)".
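Purely as an illustration of that pattern (these are not the actual lines or variable names from trainData.m; the values are made up), MATLAB code of this shape first allocates everything as zeros and then overwrites only the indexed positions:
x = zeros(64, 1);          % like line 91: the entire variable starts out as zeros
x(1:10)  = 5;              % like lines 92-96: indexed assignment replaces only the selected entries
x(20)    = -1;             % a single position (here index 20) can be overwritten on its own
x(31:40) = 2 * x(1:10);    % every position not mentioned on the left stays at 0
In this pattern the zeros are just a pre-allocated container that the later indexed assignments fill in selectively.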

Circular System, how to get numbers back into stock 1

I am creating a combined system dynamics and agent-based model for my dissertation.
Numbers generated through the different flows must be added back to the start to continue through the process.
For example, numbers flow from a parameter to stock 1, which goes through a flow process at a specific rate to stock 2. From stock 2, there is another flow process based on a particular rate to stock 3. The numbers from stock 3 need to go back into stock 1 to repeat the process.
Methods I have tried have been adding flows, links, and changing the initial value of stock 1.
Any help or suggestions are greatly appreciated!
Updated:
Added screenshots.
I think it is because of the difference between the two flows, e.g. a net -9 resulting from the difference between flow and flow3, as shown in the screenshot.
Screenshots: a graph of Stock 1, and the model as a whole.
In system dynamics, if you want to have a circular system (a feedback loop), it needs to contain at least one stock inside the loop, which means there is at least one delay in the feedback loop.
Let me explain your model.
Stock 1 has an outflow of 20 (flow) and an inflow of 11 (flow3); this produces a net outflow of 9 per time unit.
This is what you see in the graph, and it doesn't even matter whether your system is circular or not: that stock will lose 9 per time unit forever.
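Purely as an illustration of that arithmetic (a rough MATLAB sketch rather than your modelling tool, with a made-up initial value), integrating a constant inflow of 11 against a constant outflow of 20 produces exactly that straight decline:
dt = 0.1;                        % integration time step
t  = 0:dt:10;
stock1 = zeros(size(t));
stock1(1) = 100;                 % hypothetical initial value of Stock 1
inflow  = 11;                    % flow3: Stock 3 -> Stock 1
outflow = 20;                    % flow:  Stock 1 -> Stock 2
for k = 1:numel(t)-1
    stock1(k+1) = stock1(k) + (inflow - outflow) * dt;   % net change of -9 per time unit
end
plot(t, stock1)                  % a straight line dropping by 9 per time unit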
Your wording is very strange when you say "numbers are generated and numbers are flowing". It's not numbers that flow through your system; you can't really say "I have 3 numbers per minute flowing into a pool of numbers".
In your model, the "numbers" are definitely going back to the initial stock, but system dynamics is like water flowing; it is not a discrete paradigm, so you will not see the same "numbers" going back, because system dynamics doesn't differentiate which individual "numbers" are flowing.
It is already awkward to have to use the word "numbers" just to stay consistent with your question.
So in order to have a better answer, you will need to specify:
what behavior you see here
what behavior you expect, and how it differs from what you see
what you would need to see in the system in order to say "yes, my system is working exactly as expected"
It would also help if you let us know what your system represents, and if you use names that represent what is flowing through the system (instead of "numbers", "stock" and "flow", because any explanation becomes confusing otherwise).

Neural network to figure out the meaning of user input

I'm trying to make a neural network figure out the meaning of input (keyboard keys in this case) according to the user.
I have multiple possible output "commands" that the NN can interpret the inputs to mean, and at each state certain outputs count as beneficial while others are a detriment.
When the NN starts up for the first time, no input should have any particular meaning to it, but as time goes on I want the NN to be able to figure out what the user most likely meant.
I've tried a multilayer perceptron NN that has as many input nodes as there are physical inputs, as many output nodes as there are commands, and a single hidden layer with a number of nodes equal to the sum of the other two layers; in this case that is a 5-15-10 network.
The NN assumes that the user will only make moves that are in the NN's best interest.
So far it seems the NN is just figuring out which command is most likely to result in a beneficial move, regardless of the input key, rather than figuring out which key should map to which move according to the user.
Because of this I'm wondering (most likely wrongly) whether I should build a separate NN for each input to try to figure out the intended output according to the user.
Is there a different type of NN I should look into that will work better, and is there a recommended configuration for this problem?
I'll be happy with some recommendations of reading material that would help in this particular problem.
I'm at best an amateur with NNs and would like to learn a lot more about the whole field, but I'm trying to focus my efforts on this problem for now.
As I understand it, you want the output to follow the behaviour of the player, since the number of inputs is larger than in the usual case. So, in my view, there should be some type of memory of the actions taken by the player in order to find the patterns. This can be done using Long Short-Term Memory (LSTM).
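For concreteness, here is a minimal sketch of that idea in MATLAB (Deep Learning Toolbox assumed; the 5 inputs and 10 commands come from your description, while everything else, including the hidden size and the dummy data, is made up): a small LSTM classifier that maps a short history of key presses to the command it most likely meant.
numKeys = 5; numCommands = 10; hiddenUnits = 32;               % hiddenUnits is an arbitrary choice
layers = [
    sequenceInputLayer(numKeys)                                % one key vector per time step
    lstmLayer(hiddenUnits, 'OutputMode', 'last')               % memory over the recent key history
    fullyConnectedLayer(numCommands)
    softmaxLayer
    classificationLayer];
options = trainingOptions('adam', 'MaxEpochs', 30, 'Verbose', false);
% dummy training data: 100 sequences of 8 key presses, with random command labels
XTrain = arrayfun(@(~) rand(numKeys, 8), (1:100)', 'UniformOutput', false);
YTrain = categorical(randi(numCommands, 100, 1));
net = trainNetwork(XTrain, YTrain, layers, options);
predictedCommand = classify(net, XTrain(1:5))                  % interpret five key-press sequences
In practice you would replace the dummy sequences with the user's recent key presses and the labels with whatever feedback tells you which command was actually meant.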

Weka classification; cross-validation across predefined topic

I am using Weka to classify a dataset. Each data point is in one of five topics that I am trying to generalize across.
I would like to make each topic a test set so that I can train on topics 1-4 and test on topic 5, then train on topics 1, 3, 4 and 5, and test on 2, and so on.
Is there a way to direct Weka to perform this automatically in one go with one dataset? That is, can I direct Weka to cross-validate by topic?
I apologize for redundancy if this question has already been asked. If it indeed has, any help in directing me towards the answer would be most appreciated.
Thanks!
There are a few ways that I can think of that may assist in getting the results that you desire:
As you have outlined in your question, you could generate 5 different training sets, each time with the remaining topic as the testing set. Each model would need to be trained individually if you were going to use the Weka interface (supply the training data, then build a classifier and supply a testing set, and repeat). This would likely be quickest if it's a one-off.
You may be able to use the FilteredClassifier with the filter of RemoveWithValues. This may be able to remove the training cases of a particular topic if the topic number is an available attribute (I am guessing that this data is not part of the model's data though, so attribute filtering may also be required if using this approach).
If you are willing to use Java to program a solution, you would be able to manipulate the data and build each of the five classifiers in one go. I am thinking that the algorithm for such a model would be as outlined below. If you plan to undertake this process a lot, it may be the better solution.
Algorithm:
for each topic t
    training_data = all cases not containing topic t
    testing_data  = all cases containing topic t
    build classifier using training_data
    evaluate classifier on testing_data
    save classifier and results
end for

Find best available data in a given data set according to input data using WEKA?

I tried to use a clear title. What I am trying to achieve: I have a list of data as below.
ID - ID of people, not important in the calculation, but needed in the output to identify the person
Education {1=Degree, 2=Master, 3=PhD}
CGPA - value from 2.00 until 4.00
Computer = {1=Yes, 0 = No} (Computer knowledge)
Oversea = {1 = Yes, 0 = No} (willing to travel oversea)
ID,Education,CGPA,Computer,Oversea
001,3,3.14,1,0
002,1,3.68,1,1
003,2,2.76,0,1
..........
.........
Say I have 1,000 rows with different values. My purpose is this: I want to supply one similar row of data and get the closest record out of the 1,000 rows. I am using WEKA.
I am trying to do something like finding the best resume for a particular job.
I have checked and worked through many examples to understand WEKA better, but I just can't get it done. I am new to WEKA. I tried classifiers and decision trees but couldn't make them work. I am able to get a prediction from given data, but I cannot filter the data list according to a given input.
Any help is much appreciated. Any link that directs me to an article about this, any idea, or even a single spark will be useful.
Sounds like you want to use a nearest neighbour classifier (IBk in Weka). If you're using the Weka GUI, you can only get the class, so you'll have to implement some code to retrieve the actual nearest neighbour.
Have a look at this question for a way of doing this.
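The answer above is Weka-specific, but if it helps to see the underlying idea, here is the same nearest-neighbour lookup sketched in MATLAB purely for illustration (knnsearch from the Statistics and Machine Learning Toolbox; the rows and the min-max scaling are made-up choices, and the ID column is left out of the distance calculation):
data = [3 3.14 1 0;     % Education, CGPA, Computer, Oversea for record 001
        1 3.68 1 1;     % record 002
        2 2.76 0 1];    % record 003 (in practice, all 1,000 rows)
query = [2 3.50 1 1];   % the "similar row" you want to match
% scale each column to [0, 1] so CGPA does not dominate the yes/no attributes
mins = min(data); ranges = max(data) - mins; ranges(ranges == 0) = 1;
scaledData  = (data  - mins) ./ ranges;
scaledQuery = (query - mins) ./ ranges;
idx = knnsearch(scaledData, scaledQuery);   % row index of the closest record
closestRecord = data(idx, :)                % attribute values of the best match
The IBk classifier in Weka does essentially this internally; retrieving the matched row itself (rather than just its class) is the extra step you would need to code.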