I'm a beginner with Contiki and Cooja. I want to simulate two networks. Each network has some sensor nodes and a sink node. The sensors' firmware is the same; the only difference is that, for example, the sensors of one network sense temperature and those of the other sense acceleration. The networks communicate with each other through their sink nodes. I want to use the examples in Contiki OS. It is not important which protocol is used; my objective is only simulation, and I want to understand the example code. I have looked at udp-sink.c and udp-sender.c in the examples/ipv6/rpl-collect path for my work. I want to use a simple example. What would you suggest for the nodes' source code, and what references are there for understanding Contiki code?
I am learning how to build a thermo-fluid model with Modelica. I notice that, in order to reduce nonlinearity, it is recommended to use two different kinds of models: flow models and volume models. Here is the explanation I found in a commercial library, but I am not sure how to arrange the equations in the components to realize this idea.
So I am looking for papers and examples about how to apply this idea in Modelica code.
The flow port which indicates that a flow model is directly coupled to it on the inside of the component, i.e. pressure from this port is further processed to compute e.g. mass flow rate. It is recommended to connect this port with a corresponding VolumePort in order to obtain an alternating flow model-volume system model structure. If two models are connected with their ports being both of this class, non-linear equation systems for the algebraic pressure in the ports may be created and require an initial guess value.
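If I understand the text correctly, the intended arrangement is roughly the following (this is my own reading written as simplified equations, not something taken from the library):

    Flow component (between ports A and B):   \dot{m} = f(p_A - p_B),  e.g.  \dot{m} = C \sqrt{p_A - p_B}
    Volume component (owns a pressure state):  V \frac{d\rho}{dt} = \sum_k \dot{m}_k,   with  p = p(\rho, h)

So every port pressure is either a differential state of a volume component or a boundary value seen by a flow component, and when flow and volume components alternate there is no purely algebraic system over the port pressures that the solver would have to iterate on.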
I am trying to train a robot to perform specific actions, such as grasping or pointing, using an RNN.
The robot is composed of one arm and a head containing a camera. The workspace is a small table on which the arm and the objects are located.
The input of the recurrent neural network will be the image frame from the camera at every time step, and the output will be the target motor angles of the robot arm for the next frame.
When the current image frame is fed to the network, the network outputs the motor values of the arm for the next frame. When the arm reaches the next position, the image frame from that position goes into the network again, and it again yields the next motor output.
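To make the loop concrete, here is a minimal sketch of what I mean (the camera and arm functions are just stand-ins for my setup, and the untrained recurrent weights are random):

    import numpy as np

    # Toy stand-ins for my real setup: a random "camera" and a printing "arm".
    def get_camera_frame():
        return np.random.rand(64 * 64)              # flattened grayscale frame (placeholder)

    def send_motor_command(angles):
        print("target joint angles:", np.round(angles, 3))

    # A minimal recurrent step: h_t = tanh(W_x x + W_h h_{t-1}), angles = W_o h_t.
    rng = np.random.default_rng(0)
    W_x = rng.normal(0, 0.01, (128, 64 * 64))       # input -> hidden
    W_h = rng.normal(0, 0.01, (128, 128))           # hidden -> hidden (the recurrent part)
    W_o = rng.normal(0, 0.01, (6, 128))             # hidden -> 6 joint angles

    h = np.zeros(128)
    for t in range(10):                             # closed loop over time steps
        x = get_camera_frame()                      # image at time t
        h = np.tanh(W_x @ x + W_h @ h)              # update the recurrent state
        angles = W_o @ h                            # predicted motor angles for frame t+1
        send_motor_command(angles)                  # arm moves; the next frame comes from the new pose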
However, when making the data for training, I have to make all the (image, motor angle) pairs for all the positions in the workspace. Even though the network can do some generalization by itself, the data needed is still too much, and collecting it takes a lot of time since there are too many trajectories.
To generalize my problem: the time needed to gather training data for the network is too long. Is there any way or method to train a network with a small dataset? Or to build a huge dataset with relatively little human intervention?
Your question is very broad and definitely encompasses more than one field of study. This question cannot be fully answered on this platform; however, I suggest you check out this compilation of Machine Learning Resources on GitHub, specifically the Data Analysis section.
A more specific resource related to your question is DeepNeuralClassifier.
I searched for more papers and found some that are related to the subject. The main points of my question were to
find a way to train the network efficiently with a small dataset
find a way to build a huge dataset with little human effort
There were several papers, and two of them helped me a lot. Here are the links.
Explanation-Based Neural Network Learning for Robot Control
Supersizing Self-supervision: Learning to Grasp from 50K Tries and 700 Robot Hours
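The second paper's idea, as I understand it, is to let the robot label its own data: it attempts a grasp, records whether the grasp succeeded, and stores the (image, action, outcome) triple, so a large dataset is built with almost no human effort. A rough sketch of that collection loop (every function here is a placeholder for my own setup):

    import numpy as np

    rng = np.random.default_rng(0)

    def get_camera_frame():
        return rng.random((64, 64))                  # placeholder image of the table

    def try_random_grasp():
        action = rng.uniform(-1, 1, size=4)          # e.g. (x, y, angle, gripper) in the workspace
        success = rng.random() < 0.2                 # placeholder: the real robot reports contact/force
        return action, success

    dataset = []
    for trial in range(1000):                        # the paper scales this to ~50K tries
        image = get_camera_frame()
        action, success = try_random_grasp()
        dataset.append((image, action, success))     # the label is produced by the robot itself

    print(sum(s for _, _, s in dataset), "successful grasps collected")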
I was recently introduced to the amazing world of neural networks. I've noticed their flexibility and capability. However, I'm not gonna lie, my knowledge of their technicalities is sparse. The network of interest is the multilayer perceptron. It consists of some input nodes, some hidden nodes and some output nodes. However, I would like to know: do all input nodes need to be connected to all hidden nodes, and all hidden nodes to all output nodes? Or is there some determining factor that decides which input nodes should be connected to which hidden nodes, which are in turn connected to which output nodes?
Your help is much appreciated :3
do all input nodes need to be connected to all hidden nodes and all hidden nodes need to be connected to all output nodes?
Since a Multi-Layer Perceptron (MLP) is a fully connected network, each node in one layer connects with its own weight W_{i,j} to every node in the following layer.
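As a quick illustration, here is a fully connected forward pass in NumPy (layer sizes chosen arbitrarily): every entry of each weight matrix links one node of a layer to one node of the next, so every node is connected to every node of the following layer.

    import numpy as np

    rng = np.random.default_rng(0)

    # 4 input nodes -> 5 hidden nodes -> 3 output nodes, all fully connected
    W1 = rng.normal(size=(5, 4))       # W1[i, j] links input node j to hidden node i
    b1 = np.zeros(5)
    W2 = rng.normal(size=(3, 5))       # W2[i, j] links hidden node j to output node i
    b2 = np.zeros(3)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = rng.random(4)                  # one input sample
    hidden = sigmoid(W1 @ x + b1)      # every input contributes to every hidden node
    output = sigmoid(W2 @ hidden + b2) # every hidden node contributes to every output node
    print(output)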
Or is there some determining factor to decide which input nodes should be connected to which hidden nodes which are in turn connected to which output nodes?
You can apply pruning methods to remove some connections and observe whether it improves the accuracy and performance of the neural network. Generally, this is done after you have trained your neural network model and can see its performance. See these links, and the short sketch after them:
A new pruning algorithm for neural network
An iterative pruning algorithm for feedforward neural networks
It could also be done by exhaustive search, in other words brute force (removing and reconnecting nodes between layers).
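A much cruder approach than the algorithms in those papers, but easy to try, is magnitude pruning: after training, zero out the weights with the smallest absolute value and check whether accuracy holds up. A sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(5, 4))                      # a trained weight matrix (random here)

    def prune_smallest(weights, fraction=0.5):
        # Zero out the given fraction of weights with the smallest magnitude.
        threshold = np.quantile(np.abs(weights), fraction)
        mask = np.abs(weights) >= threshold
        return weights * mask, mask                  # keep the mask to hold pruned weights at zero

    W_pruned, mask = prune_smallest(W, fraction=0.5)
    print("kept", int(mask.sum()), "of", mask.size, "connections")
    # After pruning, re-evaluate (and usually fine-tune) the network and compare accuracy.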
I am working on an independent project. I am studying chemistry in school, along with computer science, and would like to know if it is possible to model certain wave-function phenomena (Schrödinger's equation, Hamiltonians, eigenvalues) using artificial neural networks.
My main questions are:
Would I be able to program and compute this on my laptop? My laptop is an Asus Q200e.
If it is not possible on the laptop, would I be able to use my desktop, which has an i5 processor and a fast GPU?
Your questions
Yes, you may use your Asus Q200e to compute your neural network.
Using a more powerful computer always helps. If you are willing to go the extra mile and perform the calculations on your GPU, the process will be even faster.
Applying neural networks to quantum mechanics
There is actually some literature on how to proceed with creating such neural networks. See this link to get a few pointers:
Artificial neural network methods in quantum mechanics
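To give a sense of scale: a toy version of this task, fitting the known ground-state wavefunction of the 1D harmonic oscillator with a tiny network, trains in seconds on a laptop CPU. A NumPy sketch (just an illustration, not one of the methods from the paper above):

    import numpy as np

    rng = np.random.default_rng(0)

    # Target: ground-state wavefunction of the 1D harmonic oscillator (hbar = m = omega = 1)
    x = np.linspace(-4, 4, 200).reshape(-1, 1)
    psi = np.pi ** -0.25 * np.exp(-x ** 2 / 2)

    # Tiny MLP: 1 -> 32 -> 1 with tanh hidden units, trained by plain gradient descent on the MSE
    W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
    W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
    lr = 0.05

    for step in range(5000):
        h = np.tanh(x @ W1 + b1)               # hidden activations, shape (200, 32)
        pred = h @ W2 + b2                     # predicted psi(x), shape (200, 1)
        err = pred - psi

        # Backpropagate the mean-squared error
        g_pred = 2 * err / len(x)
        gW2 = h.T @ g_pred;  gb2 = g_pred.sum(0)
        g_h = (g_pred @ W2.T) * (1 - h ** 2)   # tanh derivative
        gW1 = x.T @ g_h;     gb1 = g_h.sum(0)

        W1 -= lr * gW1;  b1 -= lr * gb1
        W2 -= lr * gW2;  b2 -= lr * gb2

    print("final MSE:", float(np.mean(err ** 2)))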
I've been reading about feed-forward artificial neural networks (ANNs), and normally they need training to modify their weights in order to achieve the desired output. Once tuned, they will also always produce the same output when receiving the same input (biological networks don't necessarily).
Then I started reading about evolving neural networks. However, the evolution usually involves recombining two parents' genomes into a new genome; there is no "learning", just recombining and verifying through a fitness test.
I was thinking: the human brain manages its own connections. It creates connections, strengthens some, and weakens others.
Is there a neural network topology that allows for this? One where the network, after a bad reaction, adjusts its weights accordingly and possibly creates random new connections (I'm not sure how the brain creates new connections, but even if it doesn't work that way, a random mutation chance of creating a new connection could stand in for it), while a good reaction would strengthen those connections.
I believe this type of topology is known as a Turing Type B Neural Network, but I haven't seen any coded examples or papers on it.
This paper, An Adaptive Spiking Neural Network with Hebbian Learning, specifically addresses the creation of new neurons and synapses. From the introduction:
Traditional rate-based neural networks and the newer spiking neural networks have been shown to be very effective for some tasks, but they have problems with long term learning and "catastrophic forgetting." Once a network is trained to perform some task, it is difficult to adapt it to new applications. To do this properly, one can mimic processes that occur in the human brain: neurogenesis and synaptogenesis, or the birth and death of both neurons and synapses. To be effective, however, this must be accomplished while maintaining the current memories.
If you do some searching on Google with the keywords 'neurogenesis artificial neural networks', or similar, you will find more articles. There is also this similar question at cogsci.stackexchange.com.
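As a toy illustration of the idea in the question (strengthen connections after a good reaction, weaken them after a bad one, and occasionally grow a brand new random connection), here is a small sketch using a reward-modulated Hebbian update; it is not the spiking model from the paper, just the bare mechanism:

    import numpy as np

    rng = np.random.default_rng(0)

    n_in, n_out = 6, 3
    W = np.zeros((n_out, n_in))                        # connection weights
    mask = np.zeros_like(W, dtype=bool)                # which connections currently exist
    eta, grow_prob = 0.1, 0.2

    for episode in range(200):
        x = rng.random(n_in)                           # some input pattern
        y = np.tanh(W @ x)                             # the network's reaction
        reward = rng.choice([1.0, -1.0])               # +1 good reaction, -1 bad (placeholder signal)

        # Reward-modulated Hebbian update: co-active pairs are strengthened on reward
        # and weakened on punishment; only existing connections are changed.
        W += eta * reward * np.outer(y, x) * mask

        # Occasionally grow a new random connection after a bad reaction (crude synaptogenesis).
        if reward < 0 and rng.random() < grow_prob:
            i, j = rng.integers(n_out), rng.integers(n_in)
            if not mask[i, j]:
                mask[i, j] = True
                W[i, j] = rng.normal(0, 0.1)

    print("connections grown:", int(mask.sum()))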
NEAT networks, as well as cascading architectures, add their own connections/neurons to solve problems, building structures that create specific responses to stimuli.
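For a flavour of how such networks grow structure, here is a minimal sketch of NEAT's two structural mutations on a connection-list genome (add a connection between two nodes, and split an existing connection with a new node); this is a heavy simplification of the real algorithm, which also tracks innovation numbers and speciation:

    import random

    random.seed(0)

    # Genome: list of connections [src, dst, weight, enabled]; nodes 0-2 are inputs, 3 is the output.
    nodes = [0, 1, 2, 3]
    genome = [[0, 3, 0.5, True], [1, 3, -0.3, True]]

    def add_connection(genome, nodes):
        # Connect two previously unconnected nodes with a random weight.
        src, dst = random.sample(nodes, 2)
        if any(c[0] == src and c[1] == dst for c in genome):
            return
        genome.append([src, dst, random.uniform(-1, 1), True])

    def add_node(genome, nodes):
        # Split an existing connection: disable it and route it through a new node.
        conn = random.choice(genome)
        conn[3] = False                                # disable the old connection
        new = max(nodes) + 1
        nodes.append(new)
        genome.append([conn[0], new, 1.0, True])       # into the new node
        genome.append([new, conn[1], conn[2], True])   # out of it, keeping the old weight

    add_connection(genome, nodes)
    add_node(genome, nodes)
    print(genome)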