How to use the residual energy values inside the TCL script using NS2?

I am working on the k-means algorithm and I need to elect CHs (cluster heads) according to the residual energy values. Is there a way to extract the residual energy values while the TCL script is executing, and simultaneously use these values while running the TCL script, so that I can use the extracted values to elect the CHs?
I know that if I create an energy model, the residual energy will be printed in the trace file, but I am looking for a way to use these values before they are printed to the trace file.

Related

Input signal optimization

I have a system, described by a black-box, that takes as input a signal in time (let's say something similar to a sine wave) and returns a single scalar as output.
My goal is to find the optimum signal that maximizes/minimizes the output. As constraints, the time average of the signal must be kept constant and the minimum and maximum value of the signal must be within a specific range.
The parameters to be optimized are all the points describing my signal in time.
I want to use the scipy.optimize library to minimize the output of my black-box system; however, every time the parameters are modified by the optimizer, I need to smooth the input signal in order to avoid discontinuities.
My question is: is it possible to access the input variables after they are modified by the algorithm, smooth them, and substitute them at every iteration of the algorithm?

Neural Networks and correlation between input and output

I am trying to fit some input to predict an output in Matlab using fitnet neural networks, but as a preprocessing step prior to training I am concerned with finding which input candidate vector correlates most with the output.
In the figure below, the output in yellow has five input candidates from which I need to choose only one. What command should I use in Matlab, and how should I prepare the data (repeated around 1000 times) so I can get a clear correlation between each input candidate and the output?
To find out the correlation between a given feature and the target variable you can use R = corrcoef(A,B), but... do not do it!

This process makes no sense and will probably be harmful to the whole process. You are going to remove part of the information from your data so that only features which have an independent, linear relation to the target variable persist. Then, you will apply a highly non-linear model which exploits co-occurrences and feature correlations. These two steps are completely incompatible. The only valid relation is this: if your data is very simple and can pretty much be modeled with a linear model, then a neural net will work as well. But then there is no point in using a neural net in the first place; just apply linear regression. Consequently: do not perform feature selection unless you have to. Try to build a good model without doing it, and if you do have to remove some features (maybe obtaining them is an expensive process?), use post-hoc model analysis to remove features which are not used by the model. Do not split your problem into multiple independent processes if you do not have to (unless you can show that the decomposition does not harm the process; for feature selection + regressor this is not true, as you cannot construct a valid feature-selection supervision without a trained regressor).
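
If you nevertheless want to inspect the linear correlations named above (for example as part of the post-hoc analysis just mentioned), a minimal sketch follows; the matrix X (1000 samples of the five candidates) and the vector y (the output) are hypothetical stand-ins for your own data.

    X = randn(1000, 5);             % stand-in for the five candidate inputs
    y = randn(1000, 1);             % stand-in for the output samples
    r = zeros(1, size(X, 2));
    for i = 1:size(X, 2)
        R = corrcoef(X(:, i), y);   % 2x2 correlation matrix
        r(i) = R(1, 2);             % off-diagonal entry = Pearson correlation
    end
    disp(r)                         % one coefficient per candidate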

How to model "for loop" & "memory things" in Matlab/Simulink

For my undergraduate thesis I am creating a neural network to control an automated shifting algorithm for a vehicle.
I have created the NN from scratch as an .m script, which works correctly; I tested it by having it recognize some shapes.
Some brief background information:
The NN is built from neurons, which are mathematical blocks located in a layer. There are multiple layers, and the output of one layer is the input of the next layer. The actual output is subtracted from the known output, and the error is obtained in this manner. Using the back-propagation algorithm, which is a set of algebraic equations, the coefficients of the neurons are updated.
What I want to do is this:
In the code there are 6 input matrices (they don't have to be matrices, just anything) and corresponding outputs; let's call them the x(i) matrices and y(i) vectors. In a for loop I go through each matrix and vector to teach the network. Finally, using the last updated coefficients, the network gives responses to unknown inputs.
I couldn't find a way to simulate that for loop in Simulink so that it goes through each of the different input and output pairs. When the network is done with one pair, it should change the input, compare with the corresponding output, and then update the coefficient matrices.
I modelled the layers as given and fed them with just one input, but I need multiple inputs.
When it comes to the automatic transmission control problem, it should do all of this in real time: continuously read the output, update the coefficients, and give the decision.
Check out the "For Each Subsystem" block; it has existed since R2011b.
To create the input signals you can use the "Concatenate" block, which would have six inputs in your case and a three-dimensional output with x.dim = [1x20x6]; you could then iterate over the third dimension...
It is a very useful pattern for creating smaller models that run faster and for keeping your code DRY (don't repeat yourself).
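
As a rough plain-MATLAB analogue of that Simulink pattern, here is a minimal sketch; the [1x20x6] shape is taken from the answer above, and the variable names x and y are assumptions, not part of a tested model.

    % Stack the six training pairs along the third dimension and loop over
    % it, the way the For Each Subsystem iterates over the concatenated x.
    x = randn(1, 20, 6);            % six input signals, 20 samples each
    y = randn(1, 6);                % corresponding known outputs
    for i = 1:size(x, 3)
        xi = squeeze(x(1, :, i));   % one input signal at a time
        yi = y(i);                  % its known output
        % ... feed (xi, yi) to the network and update the coefficients here
    end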

Using System Identification Toolbox transfer function with Simulink

I believe I am doing something fundamentally wrong when trying to import and test a transfer function in Simulink which was created within the System Identification Toolbox (SIT).
To give a simple example of what I am doing:
I have an input which is an offset sinusoidal wave from 12 seconds to 25 seconds, with an amplitude of 1 and a frequency of 1.5 rad/s, which gives a measured output.
I have used SIT to create a simple 2-pole, 1-zero transfer function, which gives the following agreement:
I have then tried to import this transfer function into Simulink for investigation in the following configuration, which has a sinusoidal input of frequency 1.5 rad/s starting at t = 12 s. The LTI System block refers to the transfer function variable in the workspace:
When I run this simulation for 13 seconds, the input to the block is as expected, but the post-transfer-function signal shows little agreement with what would be expected and is an order of magnitude out.
[figures: the input signal before the transfer function block ("pre") and the output signal after it ("post")]
Could someone give any insight into where I am going wrong, and why the output from the transfer function in Simulink shows little resemblance to the model output displayed in the SIT? I have a basic grasp of control theory but I am struggling to make sense of this.
This could be due to different initial conditions being used in Simulink and the SI Toolbox: the latter should estimate initial conditions along with the model, while Simulink does nothing special with initial conditions unless you specify them yourself.
To me it seems that your original signals are in a periodic regime, since your output looks almost like a sine wave as well. In a periodic regime, initial conditions have little effect. You can verify this assumption by simulating your model for a longer amount of time: if, at the end, your signal reaches the same amplitude and phase lag as in your data, you will know that the initial conditions were wrong.
In any case, you can get the estimated initial state from the toolbox, I think using the InitialState property of the resulting object.
Another thing that might go wrong is the time discretization that you use in Simulink, in case you estimated a continuous-time model (one in the Laplace variable s, not in z or q).
Edit: in that case I would recommend you check what Simulink uses to discretize your CT model, by using c2d in MATLAB and a setup like the one below in Simulink. In MATLAB you can also "simulate" the response of a CT model using lsim, where you have to specify a discretization method.
This setup allows you to load in a CT model and a discretized variant (in this case a state-space representation). By comparing the signals, you can see whether the discretization method you use is the same one that Simulink uses (this depends on the integration method you set in the settings).
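
A minimal sketch of that check in plain MATLAB; the model sys and the signals (u, t) below are hypothetical stand-ins for your identified CT model and recorded data (if lsim does not accept the identified model directly, convert it first with tf(sys)).

    sys = tf([1 2], [1 2 10]);      % stand-in for the identified CT model
    t   = (0:0.01:13)';             % time vector
    u   = 1 + sin(1.5*t);           % offset sinusoid at 1.5 rad/s
    Ts   = t(2) - t(1);             % sample time of the data
    sysd = c2d(sys, Ts, 'zoh');     % discretize, here with zero-order hold
    yc   = lsim(sys,  u, t);        % response of the CT model
    yd   = lsim(sysd, u, t);        % response of the discretized model
    plot(t, yc, t, yd);             % compare these against the Simulink output
    legend('continuous', 'discretized');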

MATLAB: K-means clustering with varying centroids

I've created a code book based on the k-means clustering algorithm, but the algorithm didn't converge to an optimal code book; each time, the cluster centroids vary (because of the random selection of initial seeds). There is an option in Matlab to give an initial matrix to k-means, but how can we select the initial code book from a large data set? Is there any other way to get a unique code book using k-means?
It's somewhat standard to run k-means multiple times using different initial states (e.g., initial seeds) and choose the result with the lowest error as the best result.
It's also typical to seed k-means by randomly choosing k elements from your data set as the initial seeds.
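In MATLAB, both ideas are built into kmeans through the 'Replicates' option; a minimal sketch, where X and k are stand-ins for your data matrix and codebook size:

    X = randn(100, 4);                    % stand-in for your data set
    k = 8;                                % stand-in codebook size
    % Run k-means 20 times from different random seeds and keep the best
    % replicate (the one with the lowest total within-cluster distance).
    [idx, C, sumd] = kmeans(X, k, 'Replicates', 20);
    totalError = sum(sumd);               % kmeans returns the best of the 20 runs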
By default MATLAB's kmeans uses the k-means++ algorithm for initialization, which means it uses random numbers.
Hence sequential calls to kmeans will probably produce different results.
You have 3 options to make this deterministic (a sketch of options 1 and 3 follows the list):
1. Set MATLAB's random number generator to a certain state before calling kmeans.
2. Use the 'Streams' field of the 'Options' parameter (a statset structure) to set the random stream used inside kmeans.
3. Write your own version of k-means which uses a deterministic initialization.
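
A minimal sketch of options 1 and 3; X and k are stand-ins, and taking the first k rows of X as seeds is just one hypothetical deterministic choice:

    X = randn(100, 4);                      % stand-in for your data set
    k = 8;                                  % stand-in codebook size

    rng(42);                                % option 1: fix the global RNG state
    [idx1, C1] = kmeans(X, k);              % k-means++ init is now repeatable

    C0 = X(1:k, :);                         % option 3: deterministic initial codebook
    [idx2, C2] = kmeans(X, k, 'Start', C0); % no randomness in the initialization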