How will we detect an intruder in a WSN since there is no concept of sensing range in Castalia? How will the network sense an object that has entered the WSN field?
There is no such thing as a sensing range for sensing devices. There is only the sensing sensitivity of the device (the signal threshold that triggers the transducer).
The so-called "sensing range" used in many early WSN papers is a poor abstraction of reality (one which unfortunately proved to be long-lived). For this abstraction to make any sense, we must make arbitrary and limiting assumptions about the physical process that triggers the sensing devices. For example, we must assume that the signal sources of the physical process we are monitoring (i.e., the intruders in your scenario) all have the same power, and that the medium carrying the signal propagates it in a uniform manner. In terms of abstractions, it is much better to think of the probability of sensing by a given sensing device. This depends on the sensitivity of the device, and also on the physical process (how powerful the signal sources are and how they propagate in the medium).
The situation is directly analogous to the so-called "radio transmission range". Just as the disk model of radio transmission is a simplistic (and usually poor) communications model, the disk model of sensing is similarly simplistic. I suggest you steer clear of such poor abstractions.
I am not sure why you think you need the concept of sensing range to detect an intruder. Castalia models sensing devices and it models physical processes, so a sensing device can be triggered by something happening in the environment. A very simple intrusion model would be: if a sensing device senses a signal above a certain threshold, you can say that an intruder was detected. More complex models might require multiple nodes to detect a signal, but that is up to you and what your particular intrusion scenario looks like.
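To make that concrete, here is a rough sketch in plain Python (not Castalia code) of such a threshold model. The power-law attenuation and all the numbers are illustrative assumptions, not Castalia's actual physical process model:

```python
# A node "detects an intruder" when the received signal exceeds its
# sensitivity. Attenuation model and values are made up for illustration.
import math

SENSITIVITY = 0.2      # transducer trigger threshold (assumed units)
SOURCE_POWER = 10.0    # signal power of the intruder at the source
PATH_EXPONENT = 2.0    # how fast the signal decays with distance

def received_signal(distance):
    # Simple inverse power-law decay; real media are rarely this uniform,
    # which is exactly why a fixed "sensing range" is a poor abstraction.
    return SOURCE_POWER / (1.0 + distance ** PATH_EXPONENT)

def detects_intruder(node_xy, intruder_xy):
    d = math.dist(node_xy, intruder_xy)
    return received_signal(d) >= SENSITIVITY

print(detects_intruder((0.0, 0.0), (5.0, 3.0)))  # True/False per node
```

Varying SOURCE_POWER or PATH_EXPONENT per intruder or per region immediately gives you the "probability of sensing" picture rather than a fixed disk.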
Read sections 4.5 and 4.6 of the Castalia User's Manual to get a better understanding of Castalia's sensing abstractions. You can also look at the Bridge Test simulation scenario to get a taste of something similar to what you want. In that scenario, you have cars driving on a bridge, and all nodes sense the cars as they pass near them. There is no intrusion detection in that scenario, but you can see how the physical process can be set up to model something like intruders.
What is the added value of Simscape physical signals compared to normal Simulink signals? As far as I can see, from a functional perspective there is no difference between the two types of signals: I can add units to both types, they both have a direction of flow, and they both have similar function blocks like adding, subtracting, etc. Only, for physical signals, the available types of blocks are very limited. Why didn't the MATLAB guys just use normal Simulink lines instead of physical signals?
Physical signals, unlike Simulink signals, have units associated with them. This means that they follow a number of rules, for example to ensure that the right unit is used (e.g. you can't add kg and m/s). From the documentation:
Using the Physical Signal Ports

The following rules apply to Physical Signal ports:

- You can connect Physical Signal ports to other Physical Signal ports with regular connection lines, similar to Simulink signal connections. These connection lines carry physical signals between Simscape blocks.
- You can connect Physical Signal ports to Simulink ports through special converter blocks. Use the Simulink-PS Converter block to connect Simulink outports to Physical Signal inports. Use the PS-Simulink Converter block to connect Physical Signal outports to Simulink inports.
- Physical Signals can have units associated with them. Simscape block dialogs let you specify the units along with the parameter values, where appropriate. Use the converter blocks to associate units with an input signal and to specify the desired output signal units.
Any sensor block in Simscape (in whatever physical domain) will output a physical signal. You can then convert it into a normal Simulink signal to feed to your controller. Similarly, any source block in Simscape (in whatever physical domain) will take a physical signal as input.
I suggest you just read the Simscape product page.
In particular,
Simscape components represent physical elements, such as pumps, motors, and op-amps. Lines in your model that connect these components correspond to physical connections in the real system that transmit power.
Accompanying that description is an image showing how Simscape models can be far more intuitive to build than a model that uses standard signals. This means models are far more maintainable and clearer to, for example, engineers who may not have a comp-sci background.
Let's delve a little into what a "physical connection" actually is.
[Simscape] employs the Physical Network approach, which differs from the standard Simulink modeling approach and is particularly suited to simulating systems that consist of real physical components.
[ ... ]
Each system is represented as consisting of functional elements that interact with each other by exchanging energy through their ports.
You stated in your question that both methods have a flow direction. This is wrong!
Simscape blocks try to balance the energy between the inlet(s) and outlet(s). For instance, a fixed orifice in a fluid system may have high pressure on one side; Simscape will solve the pressure balance at each iteration. Without Simscape, you would need some custom Simulink subsystem to achieve this.
What is the added value of Simscape physical signals compared to normal Simulink signals?
What is it that you think Simscape physical signals provide? Is it one number? How do you solve a mass-spring-damper system with just position? It's position AND speed AND acceleration.
I can add units to both types
No you can't. You put whatever you want in Simulink. You don't get to choose anything about what's in the physical signal in Simscape. You can specify units in the blocks that the signals connect, but you don't get to pick what the pipe itself is carrying.
they both have a direction of flow
No they don't. Your head and your torso are connected. There's no directionality to this. They're just connected. The physical signal is likewise just showing that things are physically connected. Again, the mass-spring-damper system: if the damper points to the mass, and the spring points to the mass, then is there any possibility that the damper could affect the spring? Yes, of course. The damper affects the spring because the damper affects the mass and the mass affects the spring.
The spring affects the mass, and the mass affects the spring. The signal is bidirectional. You're confusing signal directionality with kinematic chains.
they both have similar function blocks like adding, subtracting
If you're on a train that's going 30 mph, and you're walking forward at 3 mph, how fast are you going relative to the world frame? What if you're walking backward? There is a physical meaning in adding and subtracting physical signals.
[For] physical signals the available types of [function blocks are] very limited
What is it that you're thinking they're missing? Can you also provide a description of what the physical meaning of that function block would be?
Why didn't the MATLAB guys just use normal Simulink lines instead of physical signals?
Because they're not the same. The biggest point is probably that Simscape is signal + derivative + second derivative, but again they're just conceptually different. Simulink is an easy way to write code - do this step, move along the arrow, do the next step, etc. Simscape is a pictorial representation of a physical system. The physical signal lines just show that things are connected. The system gets solved simultaneously.
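To illustrate the point, here is a minimal sketch in plain Python/SciPy (not Simscape, and with made-up parameters) of a mass-spring-damper. The solver has to advance position, velocity, and acceleration together, and every element affects every other through the shared state:

```python
# Why the system cannot be described by position alone: position,
# velocity, and acceleration are coupled and solved simultaneously.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 1.0, 0.5, 4.0   # mass, damping, stiffness (made-up values)

def dynamics(t, state):
    x, v = state                      # position and velocity
    a = (-c * v - k * x) / m          # spring AND damper act on the mass
    return [v, a]                     # both derivatives advanced together

sol = solve_ivp(dynamics, (0.0, 10.0), [1.0, 0.0], max_step=0.01)
print(sol.y[0][-1])  # position after 10 s, starting from x=1, v=0
```

Note there is no "direction" here: the spring and damper terms both read and influence the same shared state at every step.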
I don't think it's mainly about the enforcement of physical signal units, nice though this is.
I think it's about the solver - and before it gets to the solver, about the choice of states and equation causality - rearranging the equations ready to be solved.
Simulink doesn't have any truck with this; it just gets straight on with integrating signals as a succession of samples. I know it gets complicated with variable-step solvers, but they are only doing extra-fancy numerical analysis with the sampled data. Integration and the here-and-now is what it's all about!
Simscape just starts with a bucket of variables and a bucket of equations that variously depend on said variables. A 'bipartite graph', I believe they call it.
Just as we have to navigate a route through simultaneous equations by hand, picking off the simple ones and substituting (or using the matrix equivalents of this), Simscape has to do likewise in software. It therefore wants to keep alive augmented information on signals: which equations they appear in, whether it knows or can easily obtain their derivatives, what those derivatives are, and so on. Physical signals behave, for us users, just like Simulink signals, but I reckon they are there to provide the valuable service of keeping this augmented information alive and linked between blocks, so that one massive matrix equation can be formed for the whole system, rather than separate ones that get sampled between blocks as in Simulink.
This rearrangement of equations, ready for the more conventional solver to get stuck in, is a black art indeed! We learn very little about how Simscape does it from the MathWorks docs, but you can install OpenModelica for free and see how that does it.
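As a toy illustration of that "bucket of variables and bucket of equations" idea (SymPy here, which is not what Simscape uses internally; the component values are made up): the modeller hands over simultaneous component equations, and the tool decides how to rearrange and solve them:

```python
# A two-resistor voltage divider written as acausal component equations.
import sympy as sp

v_in, v_mid, i = sp.symbols('v_in v_mid i')
R1, R2 = 100.0, 200.0  # made-up component values

equations = [
    sp.Eq(v_in - v_mid, i * R1),  # Ohm's law across R1
    sp.Eq(v_mid, i * R2),         # Ohm's law across R2
    sp.Eq(v_in, 9.0),             # source constraint
]

solution = sp.solve(equations, [v_in, v_mid, i], dict=True)[0]
print(solution)  # the solver, not the modeller, picks the causality
```

None of the equations says which variable is "input" and which is "output"; the solver works that out, which is the essence of the rearrangement described above.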
I'd like to use an optical flow system to get velocities from the surrounding environment. I've read papers about how optical flow works, but they don't cover details about optical sensors.
My question is: How do I determine how much computational power is required to perform optical flow analysis?
I'd like to use a low-power system (like microcontrollers), but I don't know what kind of camera I could use with such a system. I mean, could it be color or does it need to be B/W? Rolling shutter or global shutter? Which frame rate or number of pixels?
I'd like to specify the system myself but, without knowing how those camera attributes impact the processing load, I'm not sure where to start.
As Chuck already said in the comments, you first need to start with something. Optical-flow calculation really depends on what you are using it for and what you are trying to achieve. For real-time applications you might want to consider using faster processors (though this is always true).
Continuing on to my answer:
Optical-flow calculation performance depends on a few main things:
The optical-flow method you choose (dense or sparse); you can read more about it here and here. You should take into account not only that sparse is faster than dense, but also that sparse might be less accurate in some cases. Again, this depends on what you're trying to achieve.
In addition, you will see that there are different optical-flow algorithms. Some might be faster than others. There are many algorithms such as Lucas-Kanade, Horn-Schunck, TVL1, Farneback, etc.
Most optical-flow methods from libraries such as OpenCV give you the ability to change some parameters in order to play with the trade-off between accuracy and performance. See this, and also check OpenCV methods such as this and this, for example - note the different arguments.
The resolution of your image. A smaller image usually means a faster calculation.
A few things you might also want to consider:
If you are using a processor that has multiple cores, make sure that you are using all the cores in the optical-flow calculation. Some libraries may already do this for you, but in some cases you will need to do it yourself. Take a look at my question and answer in this post; it might give you some ideas and help you get started with such a case.
If you want more accurate optical-flow results you must use a global-shutter camera. Rolling-shutter cameras, such as most webcams, will add an extra error you don't want.
You don't need a color image; if you have a grayscale camera, even better. If not, you will need to convert the image to grayscale (not B/W) for faster performance as well.
Some libraries, such as OpenCV, have an option (in some cases) to run these algorithms on a GPU. If using a GPU is an option, you might want to consider this as well.
From my own experience, the main thing that gave me a boost in performance was changing my resolution from 640x480 to 320x240 and even 160x120. In my case it didn't really hurt the accuracy.
I used an Odroid U3 mini-PC with the OpenCV PyrLK algorithm and input frames of 320x240 resolution. After applying what's described here (splitting the image into 4 parts for parallel calculation), it worked pretty well (in real time).
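For reference, here is a minimal sketch of that kind of setup (OpenCV's pyramidal Lucas-Kanade in Python; the camera index and all parameter values are assumptions, not my exact Odroid code), showing the grayscale conversion and downscaling discussed above:

```python
import cv2

cap = cv2.VideoCapture(0)  # camera index 0 is an assumption

def grab_small_gray():
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("camera read failed")
    frame = cv2.resize(frame, (320, 240))           # smaller = faster
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # grayscale, not B/W

prev = grab_small_gray()
# Sparse: track a handful of strong corners rather than every pixel.
pts = cv2.goodFeaturesToTrack(prev, maxCorners=100,
                              qualityLevel=0.3, minDistance=7)

while True:
    nxt = grab_small_gray()
    new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev, nxt, pts, None)
    good_new = new_pts[status == 1]
    flow = good_new - pts[status == 1]   # per-feature motion vectors
    print(flow.mean(axis=0))             # crude global image velocity
    prev = nxt
    pts = good_new.reshape(-1, 1, 2)
    if len(pts) < 10:                    # real code would re-detect features
        pts = cv2.goodFeaturesToTrack(prev, maxCorners=100,
                                      qualityLevel=0.3, minDistance=7)
```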
The answer given by Sarid has some strong points, and many of them are shared by researchers around the world. My opinions are shared by those who have actually worked with these topics in a real-world setting... and by real world, I mean implementing optical flow in drones, on mobile phones, and in IP cameras that are not sitting in a protected office, where other systems (such as humans) need to interact and be co-dependent.
First of all, depending on your problem, you may want to invest time in looking for ready-made solutions. Optical flow sensors are readily available, cheap and robust (but usually not strong in accuracy). These are the kind of sensors you find in optical mice. They are low power, and easily interfaced with micro-controllers. Some have staggering sample rates of thousands of fps. They commonly have low spatial resolution however, and (to emphasize) high robustness but low accuracy.
If instead you are looking for the kind of optical flow that can be used for shape-from-motion, pedestrian detection, or video encoding, for example, then you are probably better off looking for something more advanced, and that's where Sarid's answer becomes relevant.
Since your question has been migrated from Robotics Stack Exchange, I am going to assume you are interested in applications close to machine control and human-machine interaction. In that case, the most important aspects are the ones usually most ignored by people working in the field of optical flow estimation, namely:
Latency. If you have a human interfacing at the front end, then the common term is "glass-to-glass latency". This is completely different from the fps of your system, which is connected to throughput. If you find that you are in a discussion with someone and they do not understand the difference between latency and fps, then they are not the expert you are interested in. For example, almost all researchers in computer vision who do GPU implementations of optical flow add massive latency by allowing for frame delays and inefficient memory handling (inefficient from the perspective of latency, but efficient in terms of throughput and hardware utilization). Consider the problem of controlling a drone, say making it self-stabilizing: it is better to receive a bad optical flow estimation 10 ms earlier than a good one with 10 ms of extra delay, especially if the optical system does not give you any upper bound on the delay at any given time.
Algorithm stability. This is completely different from accuracy. Accuracy is what 99% of all research in optical flow has been obsessing about for the last 30 years. Stability is not at all something evaluated in the Middlebury benchmark, for example. Stability deals with how small changes in your data guarantee small changes in the estimated optical flow. While some good work has been done in the community (on robust statistics, most interestingly), in the end the final evaluation of any algorithm disregards stability. Consider the optical mouse as a good example. The first generations of optical mice had higher accuracy (the average error from the true motion was smaller) but lower stability (especially when you ran the mice over "bad" textures, or with rotational motions). Later generations of optical mice have worse accuracy but focus on stability, as that is the most important thing. You don't experience the mouse cursor jumping around as much as you did in the early days of these devices... but if you move the mouse on your mat left and right repeatedly, you will see the cursor slowly drifting (i.e., low accuracy).
Heat. Any device that estimates high-accuracy optical flow will require lots of computation. When it comes to computations per watt, GPUs are not that good. In drones you may be able to get away with this, because it is a setting where you have active cooling as a by-product of the propulsion system. In the real world, you most often cannot assume active cooling or an unlimited power supply.
To conclude, it's a fascinating area, and I hope you have a great experience coding solutions.
Is there a machine learning concept (an algorithm or multi-classifier system) that can detect the variance of network attacks (or try to)?
One of the biggest problems for signature based intrusion detection systems is the inability to detect new or variant attacks.
Reading up, anomaly detection still seems to be a statistics-based endeavour: it refers to detecting patterns in a given data set, which isn't the same as detecting variation in packet payloads. An anomaly-based NIDS monitors network traffic and compares it against an established baseline of a normal traffic profile. The baseline characterizes what is "normal" for the network - such as the normal bandwidth usage, the common protocols used, correct combinations of port numbers and devices, etc.
Say someone uses Virus A to propagate through a network, and someone writes a rule to stop Virus A, but then another person writes a "variation" of Virus A called Virus B purely for the purpose of evading that initial rule, while still using most if not all of the same tactics/code. Is there not a way to detect this variance?
If there is, what's the umbrella term it would come under? I've been under the impression that anomaly detection was it.
Could machine learning be used for pattern recognition (rather than pattern matching) at the packet payload level?
I think your intuition to look at machine learning techniques is correct, or will turn out to be correct ("One of the biggest problems for signature based intrusion detection systems is the inability to detect new or variant attacks"). The superior performance of ML techniques is in general due to the ability of these algorithms to generalize (a multiplicity of soft constraints rather than a few hard constraints) and to adapt (updates based on new training instances to frustrate simple countermeasures) - two attributes that I would imagine are crucial for identifying network attacks.
The theoretical promise aside, there are practical difficulties with applying ML techniques to problems like the one recited in the OP. By far the most significant is the difficulty of gathering data to train the classifier. In particular, reliably labeling data points as "intrusion" is probably not easy; likewise, my guess is that these instances are sparsely distributed in the raw data.
I suppose it's this limitation that has led to the increased interest (as evidenced at least by the published literature) in applying unsupervised ML techniques to problems like network intrusion detection.
Unsupervised techniques differ from supervised techniques in that the data is fed to the algorithms without a response variable (i.e., without the class labels). In these cases you are relying on the algorithm to discern structure in the data - i.e., some inherent ordering of the data into reasonably stable groups or clusters (possibly what you, the OP, had in mind by "variance"). So with an unsupervised technique, there is no need to explicitly show the algorithm instances of each class, nor is it necessary to establish baseline measurements, etc.
The most frequently used unsupervised ML technique applied to problems of this type is probably the Kohonen Map (also sometimes called a self-organizing map, or SOM).
I use Kohonen Maps frequently, but so far not for this purpose. There are, however, numerous published reports of their successful application in your domain of interest, e.g.,
Dynamic Intrusion Detection Using Self-Organizing Maps
Multiple Self-Organizing Maps for Intrusion Detection
I know MATLAB has at least one available implementation of Kohonen Map--the SOM Toolbox. The homepage for this Toolbox also contains a brief introduction to Kohonen Maps.
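If you prefer Python over MATLAB, here is a from-scratch sketch of the idea (plain NumPy, not the SOM Toolbox; the grid size, training schedule, and the stand-in "traffic" data are all illustrative assumptions): train a small Kohonen map on features of normal traffic, then score new samples by their distance to the best-matching unit:

```python
import numpy as np

rng = np.random.default_rng(0)
grid_w, grid_h, n_features = 8, 8, 4
weights = rng.random((grid_w, grid_h, n_features))

def best_matching_unit(sample):
    # Node whose weight vector is closest to the sample.
    d = np.linalg.norm(weights - sample, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

def train(data, epochs=10, lr=0.5, radius=3.0):
    for epoch in range(epochs):
        cur_lr = lr * (1.0 - epoch / epochs)             # decaying rate
        cur_r = max(radius * (1.0 - epoch / epochs), 0.5)
        for sample in data:
            bx, by = best_matching_unit(sample)
            for x in range(grid_w):
                for y in range(grid_h):
                    dist2 = (x - bx) ** 2 + (y - by) ** 2
                    h = np.exp(-dist2 / (2.0 * cur_r ** 2))  # neighborhood
                    weights[x, y] += cur_lr * h * (sample - weights[x, y])

# Stand-in for feature vectors extracted from "normal" traffic.
normal_traffic = rng.normal(0.5, 0.1, size=(300, n_features))
train(normal_traffic)

def anomaly_score(sample):
    # Distance to the best-matching unit: large = unlike anything seen.
    bx, by = best_matching_unit(sample)
    return np.linalg.norm(sample - weights[bx, by])

print(anomaly_score(np.full(n_features, 0.5)))        # near the baseline
print(anomaly_score(np.array([5.0, 0.0, 9.0, 2.0])))  # far off: suspicious
```

No class labels are needed: the map learns the structure of normal traffic, and anything that lands far from every learned cluster gets a high score.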
Seeing that, as far as we know, one half of your brain is logical and the other half is emotional, and that the wants of the emotional side are fed to the logical side in order to fulfill those wants: has there been any research done on connecting two separate neural networks to one another (one trained to be emotional, and one trained to be logical) to see if it would result in an almost free-will sort of "brain"?
I don't really know anything about neural networks except that they were modeled after the biological synapses in the human brain, which is why I ask.
I'm not even sure if this would be possible, considering that even a trained neural network sometimes doesn't act logically (i.e., doesn't do what you thought you trained it to do).
First, most modern neural networks aren't really modeled after biological synapses. They use an artificial neuron (which allowed backpropagation to work) rather than a perceptron, which is a much more accurate representation of a biological neuron.
When you feed the output of one network into the input of another network, you've really just created one larger network, not two separate networks. It just happens that in this case portions of the networks would be trained independently.
That said, all neural networks have to be trained, which means you need sample input and sample output. You are looking to create a decision engine of sorts, I suppose. So you would need to create a dataset where it makes sense that there might be both an emotional and a rational response, such as purchasing an item. You'd have to train the 'rational' network to accept as a set of inputs the output of an 'emotional' network. This means you are really just training the rational decision engine to be responsive based on the level of 'distress' caused by the emotional network.
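To illustrate the composition point above: feeding one network's output into another is just function composition, i.e., one larger network whose sections happen to be trained independently. A tiny NumPy sketch (the random weights stand in for trained ones, and the layer sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(w, b, x):
    return np.tanh(w @ x + b)   # one dense layer with tanh activation

# "Emotional" network: 4 stimulus inputs -> 3 outputs.
w_e, b_e = rng.normal(size=(3, 4)), rng.normal(size=3)
# "Rational" network: takes the emotional output as its input, 3 -> 2.
w_r, b_r = rng.normal(size=(2, 3)), rng.normal(size=2)

def emotional(x):
    return layer(w_e, b_e, x)

def rational(x):
    return layer(w_r, b_r, x)

stimulus = rng.normal(size=4)
decision = rational(emotional(stimulus))  # composition = one deeper network
print(decision)
```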
Just my two cents.
I have also heard of one hemisphere being called "divergent" and one "convergent". This may not make any more sense than emotional vs. logical, but it does hint at how you might model it more easily. I don't know how the brain achieves some of the impressive computational feats it does, but I wouldn't be very surprised if it all revolved around balance; but maybe that is just one of the biases you have when you are a brain with two hemispheres (or any even number) :D
A balance between convergence and divergence is the crux of the creativity inherent in evolution. Replicating this with neural nets sounds promising to me. Suppose you make one learning system that generalizes and keeps representations of only the typical groups of patterns it is shown. Then you take another and make it generate all the in-betweens and mutants of the patterns it is shown. Then you feed them to each other in a circle, and poof, you have made something really interesting!
It's even more complex than that, unbelievably. The left hemisphere works on a set of logical rules, it uses these to predict its environment and categorize input. It also infers rules and stores them for future use. The right hemisphere is based, as you said, on emotion, but also on memory of single, unique or emotionally relevant occurrences. A software implementation should also be able to retrieve and store these two data types and exchange "opinions" about them.
While the left hemisphere of the brain may be more involved in making emotional decisions, emotion itself is unlikely to occur exclusively in one side of the brain, and the interplay between emotions and rational thought within the brain is likely to be substantially more complex than two completely separate circuits. For instance, a study on rhesus macaques found that dopamine and other hormones associated with emotional responses essentially implement temporal-difference learning within the brain (I'm still looking for a link to it). This suggests that separating emotional and rational thought into two separate neural networks probably wouldn't be practical, even if we had the resources to build neural networks on the scale of brain hemispheres (which we don't, or at least not within most research budgets).
This idea is supported by Sloman and Croucher's suggestion that emotion will likely be an unavoidable emergent property of a sufficiently advanced intelligent system. Such systems (discussed in detail in the paper) will be much more complex than straight-up neural nets. More importantly, though, the emotions won't be something that you can localize to one part of the system.
I am writing a report, and I would like to know which, in your opinion, of the open-source physical simulation methods (like molecular dynamics, Brownian dynamics, etc.) that have not yet been ported would be worth porting to GPU or other special hardware that could potentially speed up the calculation.
Links to the projects would be really appreciated.
Thanks in advance
Any physical simulation technique, be it finite difference, finite element, or boundary element, could benefit from a port to GPU. Same for Monte Carlo simulations of financial models. Anything that could use that smoking floating-point processing power, really.
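As a sketch of why Monte Carlo simulations map so well to GPUs (plain NumPy here; the market parameters are made up): every simulated path is independent, so the whole computation is embarrassingly parallel:

```python
# Monte Carlo pricing of a European call: each path needs only its own
# random draw, with no cross-path dependency, hence GPU-friendly.
import numpy as np

rng = np.random.default_rng(42)
s0, strike, rate, vol, T = 100.0, 105.0, 0.05, 0.2, 1.0
n_paths = 1_000_000

z = rng.standard_normal(n_paths)
s_T = s0 * np.exp((rate - 0.5 * vol**2) * T + vol * np.sqrt(T) * z)
payoff = np.maximum(s_T - strike, 0.0)      # call payoff per path
price = np.exp(-rate * T) * payoff.mean()   # discounted expectation
print(price)
```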
I am currently working on a quantum chemistry application on GPU. As far as I am aware, quantum chemistry is one of the most demanding areas in terms of total CPU time. There have been a number of papers regarding GPUs and quantum chemistry; you can research those.
As far as methods go, all of them are open source. Are you asking about a particular program? Then you can look at PyQuante or MPQC. For molecular dynamics, look at HOOMD. You can also Google QCD on GPU.