I am implementing a fuzzy-logic-based decision support system that uses nine variables. The variables are grouped in threes, each group of three producing an intermediate output, and the three intermediate outputs are then combined to produce the final output of the system.
I am using the Fuzzy Logic Toolbox in MATLAB. I have built each of the three intermediate outputs, but I can't figure out how to feed these outputs back in as inputs for the final output.
The system is shown in this picture:
[system picture]
Correct me if I'm wrong, but I think you would need separate FIS objects, where the inputs of the second-stage FIS are the outputs of the first-stage ones.
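Something like this minimal sketch (the file names are illustrative; I'm assuming each FIS was saved from the toolbox):
fis1 = readfis('group1.fis'); fis2 = readfis('group2.fis');
fis3 = readfis('group3.fis'); finalFis = readfis('final.fis');
x = rand(1, 9);                      % the nine input variables (dummy values)
y1 = evalfis(fis1, x(1:3));          % intermediate output of group 1
y2 = evalfis(fis2, x(4:6));          % intermediate output of group 2
y3 = evalfis(fis3, x(7:9));          % intermediate output of group 3
out = evalfis(finalFis, [y1 y2 y3]); % feed the three outputs in as inputs
(On releases before R2018b the argument order is evalfis(input, fis).)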
Sorry if I didn't help much :/
I'm running many linear regressions and probit models with a massive number of covariates. That means every time Stata finishes computing, it prints a huge list of coefficients, and each time I have to scroll back to the beginning of that list, where the main coefficients are printed.
I would like to know if there is a way to avoid that. I looked for an option to print only a certain number of lines. My second try was running the regression with the -quietly- option and then trying to print a given number of lines. But I'm not really familiar with Stata. I usually work in R, but I have to use Stata this time, which is why I'm struggling with this commercial software.
For linear regressions the -areg- command offers a partial solution to my issue, but it only allows me to "absorb" a single factor variable. I need to absorb more variables and also run probit models, hence -areg- doesn't work for me.
Does anyone have a trick to solve this, i.e., to print only a selection of covariates in Stata?
UPDATE:
A minimal example: I have the following linear regression with many places and time units as FEs.
regress depVar Var1 Var2-Var15 i.place i.time [pw = myweigth], cluster(ID)
I'm interested in seeing only the coefficients of Var*, but every time I run the regression I get thousands of coefficients for the FEs.
I posted the same question on reddit, and I got the following comments:
https://www.reddit.com/r/stata/comments/fwtds4/cutting_down_stata_results/
This is pretty much what I was looking for. Basically, it is solved via the estout package and its -eststo- and -esttab- commands: store the results with -eststo-, then drop the FE coefficients with wildcards so that the factor-variable names like 2.place match:
eststo myRegression: quietly ///
regress depVar Var1 Var2-Var15 i.place i.time [pw = myweigth], cluster(ID)
esttab myRegression, drop(*.place *.time)
Maybe someone can enrich this approach. Thanks!
I know MATLAB has the function trainAutoencoder(input, settings) to create and train an autoencoder. The resulting object supports the two methods encode and decode.
But this is only applicable to normal autoencoders. What if you want a denoising autoencoder? I searched and found some sample code where they used the network() function to convert the autoencoder to a standard network object and then called train(net, noisyInput, smoothOutput), as for a denoising autoencoder.
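As far as I can tell, the idea is something like this (hiddenSize and the noise model are placeholders of mine):
hiddenSize = 10;                             % placeholder size
autoenc = trainAutoencoder(X, hiddenSize);   % X: clean data, one sample per column
net = network(autoenc);                      % convert the Autoencoder to a network object
Xnoisy = X + 0.1*randn(size(X));             % synthetic Gaussian noise (my assumption)
net = train(net, Xnoisy, X);                 % denoising: noisy inputs, clean targets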
But there are multiple missing parts:
How do I use this new network object to "encode" new data points? It does not support encode().
How do I get the "latent" variables, i.e. the features, out of this network?
I would appreciate it if anyone could help me resolve this issue.
Thanks,
-Moein
At present (2019a), MATLAB does not permit users to add layers manually to an autoencoder. If you want to build your own, you will have to start from scratch using the layers provided by MATLAB.
In order to use trainNetwork(...) to train your model, you will have to find a way to put your data into an object called an imageDatastore. The difficulty with an autoencoder's data is that there are NO labels, which imageDatastore requires, so you will have to find a smart way around that; essentially you are dealing with a so-called OCC (one-class classification) problem.
https://www.mathworks.com/help/matlab/ref/matlab.io.datastore.imagedatastore.html
Use activations(...) to dump outputs from intermediate (hidden) layers
https://www.mathworks.com/help/deeplearning/ref/activations.html?searchHighlight=activations&s_tid=doc_srchtitle
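For example, a minimal sketch (here 'bottleneck' is an assumed name you would give your latent layer, and trainedNet is the network returned by trainNetwork):
feat = activations(trainedNet, XNew, 'bottleneck', 'OutputAs', 'rows'); % one latent feature vector per observation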
I went back and forth between MATLAB and Python (Keras) for deep learning for a couple of weeks; eventually I chose the latter, although I am a long-term and loyal user of MATLAB and a rookie in Python. My two cents: there are too many restrictions in the former regarding deep learning.
Good luck. :-)
If by 'simulation' you mean prediction/inference, simply use activations(...) to dump outputs from any intermediate (hidden) layer, as I mentioned earlier, so that you can check them.
Another way is to construct an identical network with only the encoding part, copy your trained parameters into it, and feed it your simulated signals.
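A rough sketch of that second route, assuming trainedNet came from trainNetwork and that layers 1..k (up to the bottleneck) form the encoder; k here is illustrative:
k = 4;                                       % index of the bottleneck layer
encLayers = [trainedNet.Layers(1:k); regressionLayer];
encNet = assembleNetwork(encLayers);         % learned weights are carried over
feat = predict(encNet, XSim);                % run your simulated signals through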
I'm trying to make a neural network figure out the meaning of inputs (keyboard keys, in this case) according to the user.
I have multiple possible output "commands" that the NN can interpret the inputs to mean, and at each state certain outputs count as beneficial while others are a detriment.
When the NN starts up for the first time, no input should have any particular meaning to it, but as time goes on I want the NN to be able to figure out what the user most likely meant.
I've tried a multilayer perceptron that has as many input nodes as there are physical inputs, as many output nodes as there are commands, and a single hidden layer with a number of nodes equal to the sum of the other two; in this case it is a 5-15-10 network.
The NN assumes that the user will only make moves that are in the NN's best interest.
So far it seems the NN is just figuring out which command will most likely result in a beneficial move, regardless of the input key, rather than figuring out which key should map to which move according to the user.
Because of this I'm wondering (most likely wrongly) whether I should build a separate NN for each input to try to figure out the intended output according to the user.
Is there a different type of NN I should look into that will work better, and is there a recommended configuration for this problem?
I'll be happy with some recommendations of reading material that would help in this particular problem.
I'm at best an amateur with NNs and would like to learn a lot more about the whole field, but I'm trying to focus my efforts on this problem for now.
As I understand it, you want the output to follow the player's behaviour, since there are more possible meanings than there are inputs. So in my view there should be some type of memory of the actions taken by the player, in order to find the patterns. This can be done using Long Short-Term Memory (LSTM).
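A minimal sketch of what that could look like in MATLAB (Deep Learning Toolbox assumed; the 5-key / 10-command sizes come from your post, everything else is illustrative):
layers = [ ...
    sequenceInputLayer(5)                 % one feature per physical key
    lstmLayer(20, 'OutputMode', 'last')   % memory over the key-press history
    fullyConnectedLayer(10)               % one unit per command
    softmaxLayer
    classificationLayer];
opts = trainingOptions('adam', 'MaxEpochs', 30);
% XTrain: cell array of 5-by-T key sequences, YTrain: categorical commands
net = trainNetwork(XTrain, YTrain, layers, opts);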
New to Matlab.
When I try to load my own data using the NN pattern recognition app window, I can load the source data, but not the target (it never appears in the drop-down list). Both source and target are in the same directory. The source is 5000 observations with 400 variables per observation, and the target can take on 10 different values (recognizing digits). Any ideas?
Before you do anything with your own data, you might want to try out the example data sets available in the toolbox. That should make many problems easier to find later on, because those sets definitely work, so you can see what's wrong with your own code.
Regarding your actual question: without more details, e.g. what your matrices contain and what their dimensions are, it's hard to help you. Some of the problems mentioned here might be similar to yours:
http://www.mathworks.com/matlabcentral/answers/17531-problem-with-targets-in-nprtool
From what I understand about nprtool, your targets have to consist of a matrix with exactly one 1 per observation (marking the correct class) in either the row or the column (depending on the orientation of the input matrix), so make sure that's the case.
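For instance, a one-hot target matrix can be built like this (a sketch assuming 'labels' is your 5000-by-1 vector of digits 0..9):
targets = full(ind2vec(double(labels(:))' + 1)); % 10-by-5000, one 1 per column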
I created a subsystem in Simulink with a mask underneath. There are all sorts of controls and calculations inside this subsystem. Now I have to duplicate this subsystem one hundred thousand times, because I need to connect one hundred thousand of these blocks in series.
What I have tried: I used the commands add_block and add_line, which I can simply type at the MATLAB command line so that the blocks and lines are added automatically.
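For reference, the looped version of that looks roughly like this (the library path, block name, and port numbers are illustrative):
N = 10;                                   % 100000 in the real model
new_system('chainModel'); open_system('chainModel');
for k = 1:N
    add_block('myLib/MyBlock', sprintf('chainModel/MyBlock%d', k), ...
        'Position', [100*k 100 100*k+60 160]);
    if k > 1   % wire output 1 of the previous copy to input 1 of this one
        add_line('chainModel', sprintf('MyBlock%d/1', k-1), sprintf('MyBlock%d/1', k));
    end
end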
What I wish to do now is to have 100 signals in a single subsystem, so that instead of one hundred thousand subsystems I will only need one thousand of them. I understand that this can be done by vectorization.
I have very limited knowledge of the vectorization feature in Matlab/Simulink. I would appreciate it if anyone could point me to a good reference on how to do this.
What I found here is something like this, which I could not link to my issue above: http://www.mathworks.co.uk/help/matlab/matlab_prog/vectorization.html
The other thing I found is that "most components are vectorized if they have a vectorized input signal or if one of their parameters is specified as a vector."
However, I could not find any further information or details. I would appreciate it if anyone could give an opinion on this. Thanks!
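For what it's worth, a tiny programmatic sketch of what that quote means in practice (block paths are from the standard Simulink library; model and block names are illustrative): a single Gain block scales a 100-wide vector signal, so one block replaces 100 scalar copies.
new_system('vecDemo');
add_block('simulink/Sources/Constant', 'vecDemo/In', 'Value', '1:100');  % 100-wide signal
add_block('simulink/Math Operations/Gain', 'vecDemo/G', 'Gain', '2');    % vectorized block
add_block('simulink/Sinks/Display', 'vecDemo/Out');
add_line('vecDemo', 'In/1', 'G/1');
add_line('vecDemo', 'G/1', 'Out/1');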