How to use a genetic algorithm in a wireless sensor network - networkx

I want to implement a genetic algorithm in a wireless sensor network to optimize the deployment of the sensors in a zone, so that the minimum number of sensors covers the maximum number of targets. Can anyone please help in any way? Thank you.

You would need to define a GA where your decision variables are the position of each sensor.
If you have n sensors, your chromosome would have n * dimensions elements; e.g., if you have to define an x and y position for each sensor, your chromosome would have n * 2 elements.
Given this vector (chromosome) defining the position of each sensor, you just have to define a function that computes the coverage from those positions. That function gives you the fitness of every chromosome (i.e., it evaluates the solution encoded by the chromosome).
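A minimal sketch of such a fitness function, assuming point targets and a fixed sensing radius (the target set, radius, and zone size below are placeholders, not from your problem):

```python
import random

# Hypothetical setup: 50 targets scattered in a 100x100 zone,
# each sensor covers a disc of radius RADIUS.
TARGETS = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]
N_SENSORS = 5
RADIUS = 20.0

def decode(chromosome):
    """Chromosome is a flat list [x1, y1, x2, y2, ...] of sensor positions."""
    return list(zip(chromosome[0::2], chromosome[1::2]))

def fitness(chromosome):
    """Coverage = fraction of targets within RADIUS of at least one sensor."""
    sensors = decode(chromosome)
    covered = 0
    for tx, ty in TARGETS:
        if any((tx - sx) ** 2 + (ty - sy) ** 2 <= RADIUS ** 2
               for sx, sy in sensors):
            covered += 1
    return covered / len(TARGETS)

# A random chromosome to evaluate.
random_chromosome = [random.uniform(0, 100) for _ in range(N_SENSORS * 2)]
print(fitness(random_chromosome))  # a value in [0, 1]
```

Any GA library (or a hand-rolled selection/crossover/mutation loop) can then maximize this fitness; to also penalize the *number* of sensors you could subtract a small cost per active sensor.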

Related

How do I model the RF propagation of a custom UWB transmission using MATLAB?

I've successfully plotted the signal-strength coverage map for a generic narrowband (read: single-frequency) horn antenna using MATLAB's built-in functions design(), txsite() and coverage().
MATLAB uses the Longley-Rice propagation model when terrain data is present, which I downloaded and introduced using addCustomTerrain().
However, I don't want my antenna to be narrowband, operating at a single frequency.
I want to model the coverage map I would get on location with a known ultra-wide band (UWB) transient pulsed signal. I have the time domain E-field of this waveform as well as the FFT and energy spectral density.
My plan was to loop over many tx antennas, each having an operating frequency equal to one of the ~1000 frequency bins in the UWB spectral content, and an output power equal to the scaled energy spectral density (ESD) multiplied by the frequency step size (df) and divided by the total time period of the measured pulsed signal (to get power): P = ESD * (df/T).
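In code, the per-bin power scaling I have in mind looks roughly like this (shown in NumPy rather than MATLAB; df, T and the ESD values are placeholder numbers, not my real data):

```python
import numpy as np

# Placeholder numbers: an ESD sampled over ~1000 frequency bins.
df = 10e6    # frequency step between bins, Hz (assumed)
T = 5e-9     # total time period of the measured pulse, s (assumed)
esd = np.abs(np.random.randn(1000)) * 1e-18  # energy spectral density, J/Hz

# P = ESD * df / T for each bin, one tx antenna per bin.
power_per_bin = esd * df / T  # W
print(power_per_bin.shape)
```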
However, when I ran this looped code, I got:
"Error using em.EmStructures/savesolution
The calculated result is invalid; possible cause is a coarse mesh. Please consider refining the mesh
manually."
I assume this means MATLAB can't model 1000 different antennas at the exact same location, but does anyone have an idea about this error?
Is what I'm trying to do possible in MATLAB?
Are there alternative methods?
Thank you for any help in advance!

How to create proper feedforward neural network with evolutionary algorithm

I've created a 2D game where you use a map editor to place cars, obstacles, and a destination point where the cars should go.
The idea is that these cars will be controlled by generations of feedforward neural networks. But I'm not sure how the information should be represented in the input layer, or exactly how the evolution should work, so I'll explain my idea. It would be great to get advice on how to improve it, especially if something won't work at all.
Input layer values:
Neurons in (0, 1) representing the distance to each obstacle
A neuron in (-1, 1) representing the speed of the car and its direction (-1 = max speed backwards, 0 = no speed, 0.5 = half of max speed forward)
Two neurons in (-1, 1) representing the cos and sin (or 2*asin/π and 2*(acos − π/2)/π) of the vector from the car to the destination, relative to some constant axis of the canvas (map)
Output layer values:
A neuron in (-1, 1) representing the acceleration of the car and its direction
A neuron in (-1, 1) representing which way and how fast the car will turn
Looking at these values, I'm thinking of using the tanh function everywhere. But is it a good idea to use a single negative/positive value to determine a direction (as input or output)? Or would it be better to use two neurons to tell the neural network where it should go (a single value obviously can't), etc.?
I imagine evolution itself to be mostly about swapping some weight and bias values between the best networks (depending on fitness) and adding small random numbers to some weights and biases as mutation (where the magnitude of the random number depends on fitness, in order to avoid destructive, huge changes in good networks).
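In code, the evolution scheme I have in mind looks roughly like this (weights flattened into plain lists; the function names and the base mutation size are hypothetical):

```python
import random

def crossover(parent_a, parent_b):
    """Uniform crossover: each weight is copied from a randomly chosen parent."""
    return [a if random.random() < 0.5 else b
            for a, b in zip(parent_a, parent_b)]

def mutate(weights, fitness, max_fitness, base_sigma=0.5):
    """Add small Gaussian noise to the weights.

    The noise magnitude shrinks as fitness approaches the best fitness,
    so good networks get only gentle, non-destructive changes."""
    sigma = base_sigma * (1.0 - fitness / max_fitness)
    return [w + random.gauss(0.0, sigma) for w in weights]
```

A generation would then select parents proportionally to fitness, produce children with crossover, and pass each child through mutate before the next round of evaluation.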

Modeling a relationship between sensor values and position (angle and distance) to a target

I want to derive a simple model that can predict the current position of an object with respect to a target.
To be more specific, I have a head with 4 identical light sensors placed 90 degrees apart. There is a light source (an LED) emitting visible light. Since each sensor has an angular sensitivity spectrum (maximum at 90 degrees, decreasing as the angle of incidence of the light increases), the value received at each sensor is determined by the angle and distance of the head with respect to the target.
I measured the values at the four sensors at various angles and distances.
Each sensor reads a maximum of about 9.5 when the incoming light is weak (either the sensor is far from the target or it faces away from the target), and the value decreases as the sensor gets closer to the target or faces it more directly.
My inputs and outputs look like:
[0.1234 0.0124 8.342 9.232] = [angle, distance]: an example with the head placed next to the light, facing toward it.
Four inputs from the sensors and two outputs for the angle and distance.
What strategy can I implement to derive an equation that predicts the angle and distance from the current incoming sensor values?
I was thinking of multivariate regression, but my output is not a single scalar (it is more of a vector), so I am not sure it will work.
Any help would be appreciated.
Thanks
Your idea about multivariate regression looks reasonable.
IMHO you need to train two models instead of one: the first will predict the angle, and the second will predict the distance.
Why would you want to combine these two models? That looks strange from the point of view of the optimization metric: when you build the angle model you minimize the error in radians, and when you build the distance model you minimize the error in meters. So which metric would you minimize in the single-model case?
I believe the following links will be useful for you:
https://www.mathworks.com/help/curvefit/surface-fitting.html
https://www.mathworks.com/help/matlab/math/example-curve-fitting-via-optimization.html
Note: in some cases the data normalization (for example via zscore) greatly increases the fitting performance.
P.S. Also try asking at https://stats.stackexchange.com/
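A minimal sketch of the two-model idea, using plain least squares in NumPy (the data here is synthetic with made-up coefficients, just to show the shapes; your real sensor measurements would replace X, angle and dist):

```python
import numpy as np

# Synthetic training data: 200 samples of four sensor readings in [0, 9.5].
rng = np.random.default_rng(0)
X = rng.uniform(0, 9.5, size=(200, 4))
angle = X @ np.array([0.3, -0.2, 0.1, 0.05])   # fake linear targets
dist = X @ np.array([-0.1, 0.4, 0.2, -0.3])    # for the demo only

# z-score normalization (as noted above, this often helps the fit).
mu, sd = X.mean(axis=0), X.std(axis=0)
A = np.column_stack([(X - mu) / sd, np.ones(len(X))])  # + intercept column

# Two independent least-squares models: one for angle, one for distance.
w_angle, *_ = np.linalg.lstsq(A, angle, rcond=None)
w_dist, *_ = np.linalg.lstsq(A, dist, rcond=None)

def predict(sensors):
    """Map four raw sensor readings to a predicted (angle, distance) pair."""
    a = np.append((np.asarray(sensors) - mu) / sd, 1.0)
    return a @ w_angle, a @ w_dist
```

For the real (nonlinear) sensor response you would swap the linear design matrix for polynomial terms or a surface fit, but the point stands: two separate models, each minimizing its own metric.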

correlated outputs in neural network

I have a data set (containing some inputs, one output, and GPS coordinates giving the location of each sample) and I need to predict the output with a neural network (NN). The problem is that the elements of the output vector are correlated according to where they were measured: close samples tend to be more similar than distant ones, so the distances between the samples are important. Is there any way to add this extra information to the NN structure?

Few questions about kohonen neural network

I have a big data set (time series, about 50 parameters/values). I want to use a Kohonen network to group similar data rows. I've read a bit about Kohonen neural networks, and I understand the idea, but:
I don't know how to implement a Kohonen network with so many dimensions. I found an example on CodeProject, but only with 2- or 3-dimensional input vectors. When I have 50 parameters, should I create 50 weights in my neurons?
I don't know how to update the weights of the winning neuron (how do I calculate the new weights?).
My English is not perfect and I don't understand everything I read about Kohonen networks, especially the descriptions of variables in the formulas; that's why I'm asking.
One should distinguish the dimensionality of the map, which is usually low (e.g., 2 in the common case of a rectangular grid), from the dimensionality of the reference vectors, which can be arbitrarily high without problems.
Look at http://www.psychology.mcmaster.ca/4i03/demos/competitive-demo.html for a nice example with 49-dimensional input vectors (7x7 pixel images). The Kohonen map in this case has the form of a one-dimensional ring of 8 units.
See also http://www.demogng.de for a Java simulator for various Kohonen-like networks, including ring-shaped ones like the one at McMaster's. The reference vectors there, however, are all 2-dimensional, but only for easier display; they could have arbitrarily high dimensions without any change in the algorithms.
Yes, you would need 50 weights per neuron. However, these types of networks are usually low-dimensional, as described in this self-organizing map article. I have never seen them use more than a few inputs.
You have to use an update formula. From the same article: Wv(s + 1) = Wv(s) + Θ(u, v, s) · α(s) · (D(t) − Wv(s))
Yes, you'll need 50 weights (one per input) for each neuron.
You basically do a linear interpolation between each neuron's weight vector and the input vector, using W(s + 1) = W(s) + Θ() · α(s) · (Input(t) − W(s)), with Θ being your neighbourhood function.
And you should update all your neurons, not only the winner.
Which function you use as the neighbourhood function depends on your actual problem.
A common property of such a function is that it has the value 1 when i = k and falls off with the Euclidean distance; additionally, it shrinks with time (in order to localize clusters).
Simple neighbourhood functions include linear interpolation (up to a "maximum distance") or a Gaussian function.
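The update above can be sketched concretely. A minimal NumPy version, assuming a one-dimensional ring of units (like the McMaster example) and a Gaussian neighbourhood; the function names and parameters are illustrative, not from any particular library:

```python
import numpy as np

def best_matching_unit(weights, x):
    """Index of the unit whose reference vector is closest to input x."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

def som_update(weights, x, bmu_index, sigma, alpha):
    """One Kohonen update step on a 1-D ring of units.

    weights: (n_units, n_dims) reference vectors (n_dims can be 50 or more),
    x: input vector, bmu_index: index of the winning unit,
    sigma: neighbourhood width, alpha: learning rate.
    All units move toward x, weighted by a Gaussian neighbourhood
    that is 1 at the winner and falls off with ring distance."""
    n = len(weights)
    idx = np.arange(n)
    # Distance along the ring (wrap-around), not in input space.
    ring_dist = np.minimum(np.abs(idx - bmu_index), n - np.abs(idx - bmu_index))
    theta = np.exp(-(ring_dist ** 2) / (2 * sigma ** 2))
    return weights + alpha * theta[:, None] * (x - weights)
```

In a full training loop you would call best_matching_unit and som_update for each sample while gradually shrinking sigma and alpha over time, which is what localizes the clusters.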