Simulink simulating audio extremely slow - matlab

I've created the following model:
The simulation works as expected, but it is extremely slow. When listening to the output, I just hear a small burst of noise every once in a while, and the T indicator barely increases by 0.005 per second. I understand the software has to run audio samples through the algorithm constantly, but the simulation being this slow makes me concerned about using it in practice, since it will eventually have to process a microphone signal on a Raspberry Pi in real time.
Am I using the wrong blocks, is my signal set up poorly, and what can I do to increase the performance of the system?
EDIT - Info from my profiler:

Related

How do I determine processor speed required for optical flow?

I'd like to use an optical flow system to get velocities from the surrounding environment. I've read papers about how optical flow works, but they don't cover the details of the optical sensors.
My question is: How do I determine how much computational power is required to perform optical flow analysis?
I'd like to use a low-power system (like microcontrollers), but I don't know what kind of camera I could use with such a system. I mean, could it be color or does it need to be B/W? Rolling shutter or global shutter? Which frame rate or number of pixels?
I'd like to specify the system myself but, without knowing how those camera attributes impact the processing load, I'm not sure where to start.
As Chuck already said in the comments, you first need to start with something. Optical-flow calculation really depends on what you are using it for and what you are trying to achieve. For real-time applications you might want to consider using faster processors (though that is always true).
On to my answer.
Optical-flow calculation performance depends on a few main things:
The optical-flow method you choose (dense or sparse); you can read more about it here and here. Keep in mind not only that sparse is faster than dense, but also that sparse might be less accurate in some cases. Again, this depends on what you're trying to achieve.
In addition, you will see that there are different optical-flow algorithms. Some might be faster than others. There are many algorithms such as Lucas-Kanade, Horn-Schunck, TVL1, Farneback, etc.
Most optical-flow methods from libraries such as OpenCV give you the ability to change some parameters in order to play with the trade-off between accuracy and performance. See this, and also check OpenCV methods such as this and this for example; note the different arguments (there is a short sketch after this list).
The resolution of your image. A smaller image usually means a faster calculation.
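To make the dense-vs-sparse and parameter points concrete, here is a minimal sketch using OpenCV's Python bindings; the file names, feature counts and window sizes are placeholder values, not recommendations:

```python
import cv2

# A minimal sketch (not the asker's setup): one sparse and one dense OpenCV call
# on downscaled grayscale frames. File names and parameter values are placeholders.
def prep(img, size=(320, 240)):
    return cv2.cvtColor(cv2.resize(img, size), cv2.COLOR_BGR2GRAY)

prev_g = prep(cv2.imread("frame0.png"))
curr_g = prep(cv2.imread("frame1.png"))

# Sparse (pyramidal Lucas-Kanade): track a limited set of corner features.
pts = cv2.goodFeaturesToTrack(prev_g, maxCorners=200, qualityLevel=0.01, minDistance=7)
next_pts, status, err = cv2.calcOpticalFlowPyrLK(
    prev_g, curr_g, pts, None,
    winSize=(15, 15), maxLevel=2)  # smaller window / fewer pyramid levels: faster, less accurate

# Dense (Farneback): a flow vector for every pixel, noticeably more expensive.
flow = cv2.calcOpticalFlowFarneback(
    prev_g, curr_g, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
```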
A few things you might also want to consider:
If you are using a processor that has multiple cores, make sure that you are using all the cores in the optical-flow calculation. Some libraries may already do this for you, but in some cases you will need to do it yourself. Take a look at my question and answer in this post; it might give you some ideas and help you get started with such a case.
If you want more accurate optical-flow results, you must use a global shutter camera. Rolling shutter cameras, such as most webcams, will introduce extra error you don't want.
You don't need a color image; if you have a grayscale camera, that is even better. If not, you will need to convert the image to grayscale (not B/W), which also helps performance.
Some libraries such as OpenCV have an option (in some cases) to run these algorithms on a GPU. If a GPU is available, you might want to consider this as well.
From my own experience, the main thing that gave me a boost in performance was changing my resolution from 640x480 to 320x240 and even 160x120. In my case it didn't really hurt the accuracy.
I used an Odroid U3 mini-PC with OpenCV's PyrLK algorithm and input frames of 320x240 resolution. After applying what's described here (splitting the image into 4 for parallel calculation), it worked pretty well (real-time).
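Here is a rough sketch of that quadrant-splitting idea; the file names and feature parameters are placeholders, and it relies on the fact that OpenCV releases the GIL inside its native calls, so a simple thread pool can use several cores:

```python
import cv2
from concurrent.futures import ThreadPoolExecutor

def quadrants(img):
    # Split a grayscale frame into its four quadrants.
    h, w = img.shape[:2]
    return [img[:h//2, :w//2], img[:h//2, w//2:],
            img[h//2:, :w//2], img[h//2:, w//2:]]

def track(prev_q, curr_q):
    # Sparse PyrLK flow on one quadrant (placeholder feature parameters).
    pts = cv2.goodFeaturesToTrack(prev_q, maxCorners=50, qualityLevel=0.01, minDistance=5)
    if pts is None:
        return None
    return cv2.calcOpticalFlowPyrLK(prev_q, curr_q, pts, None)

prev_g = cv2.cvtColor(cv2.imread("frame0.png"), cv2.COLOR_BGR2GRAY)
curr_g = cv2.cvtColor(cv2.imread("frame1.png"), cv2.COLOR_BGR2GRAY)

# One worker per quadrant; OpenCV's native calls run in parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(track, quadrants(prev_g), quadrants(curr_g)))
```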
The answer given by Sarid has some strong points, and many of them are shared by researchers around the world. My opinions are shared by anyone who has actually worked with these topics in a real-world setting... by real world, I mean implementing optical flow on drones, mobile phones and IP cameras that are not sitting in a protected office, and where other systems (such as humans) need to interact and be co-dependent.
First of all, depending on your problem, you may want to invest time in looking for ready-made solutions. Optical flow sensors are readily available, cheap and robust (but usually not strong in accuracy). These are the kind of sensors you find in optical mice. They are low power, and easily interfaced with micro-controllers. Some have staggering sample rates of thousands of fps. They commonly have low spatial resolution however, and (to emphasize) high robustness but low accuracy.
If instead you are looking for the kind of optical flow that can be used for shape from motion, pedestrian detection and video encoding, for example, then you are probably better off looking for something more advanced, and that's where Sarid's answer becomes relevant.
Since your question has been migrated from Robotics Stack Exchange, I am going to assume you are interested in applications close to machine control and human-machine interaction. In that case, the most important aspects are the ones usually most ignored by people working in the field of optical flow estimation, namely:
Latency. If you have a human interfacing at the front-end... then the common term is "glass-to-glass latency". This is completely different from the fps of your system, which relates to throughput. If you find yourself in a discussion with someone who does not understand the difference between latency and fps, they are not the expert you are looking for. For example, almost all researchers in computer vision who do GPU implementations of optical flow add massive latency by allowing frame delays and inefficient memory handling (inefficient from the perspective of latency, but efficient in terms of throughput and hardware utilization). Consider the problem of controlling a drone, say making it self-stabilizing: it is better to receive a bad optical flow estimate 10 ms earlier than a good one with 10 ms of extra delay... especially if the optical system does not give you any upper bound on the delay at any given time. (A toy calculation after these points illustrates the difference between latency and fps.)
Algorithm stability. This is completely different from accuracy. Accuracy is what 99% of all research in optical flow has been obsessing about for the last 30 years. Stability is not at all something evaluated in the Middlebury benchmark, for example. Stability deals with how small changes in your data guarantee small changes in the estimated optical flow. While some good work has been done in the community (most interestingly on robust statistics), in the end the final evaluation of any algorithm disregards stability. Consider the optical mouse as a good example. The first generations of optical mice had higher accuracy (the average error from the true motion was smaller), but they had lower stability (especially when you ran the mice over "bad textures" or with rotational motions). Later generations of optical mice have worse accuracy but focus on stability, as that is the most important thing. You don't experience the mouse cursor jumping around as much as you did in the early days of the devices... but if you move the mouse on your mat, left and right repeatedly, you will see the cursor slowly drifting (i.e. low accuracy).
Heat. Any device that estimates high-accuracy optical flow will require lots of computation. When it comes to computations per watt, GPUs are not that good. In drones you may be able to get away with this, because it is a setting where you have active cooling as a by-product of the propulsion system. In the real world, you most often cannot assume active cooling or an unlimited power supply.
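To make the latency/throughput distinction concrete, here is a toy calculation; the stage time and pipeline depth are made-up numbers, purely illustrative:

```python
# Toy numbers: a 3-stage pipeline (capture -> transfer -> compute),
# where each stage takes 10 ms per frame.
stage_time_ms = 10
pipeline_depth = 3

throughput_fps = 1000 / stage_time_ms               # a new result every 10 ms -> 100 fps
glass_to_glass_ms = pipeline_depth * stage_time_ms  # but each result is already 30 ms old

print(f"throughput: {throughput_fps:.0f} fps, latency: {glass_to_glass_ms} ms")
```

Adding more pipeline stages (for example, buffering frames before the GPU) keeps the fps figure the same while silently increasing the glass-to-glass latency.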
To conclude, it's a fascinating area, and I hope you have a great experience coding solutions.

Problems in reinforcement learning: bug, parameters tuning, and training period

I am currently training a reinforcement learning agent using a simple neural network with 100 hidden units to solve the 2048 game. I am using the DQN reinforcement learning algorithm (i.e. Q-learning with replay memory), but with a 2-layer neural network instead of a deep neural network.
However, I left it training on my laptop overnight (~7 hours, ~1000 games played, > 100000 steps) and the score does not seem to increase. I suspect there might be three possible sources of error: a bug, badly tuned parameters, or maybe I just haven't waited long enough.
Is there any method to figure out what is wrong with the code?
And what are the best practices for improving the training results?
I'll talk about all three of your hypotheses.
If you are using a standard DL framework like Caffe or TensorFlow, the chance of it being a bug is small.
Try decreasing the learning rate. Maybe you set it too high for the network to converge.
A training run of 100,000 steps is not that long. For a simple Pong game, you need to train around 500,000 steps to get good results, so you can try training for longer.
Also, 2048 is a fairly complicated game, so maybe your network is not deep enough to learn how to play it. Two layers is not much for such a complicated game. Try increasing the number of hidden layers. Perhaps you can use the network provided here.
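As a hedged sketch only (the state encoding, layer sizes, and learning rate below are assumptions, not your actual setup), a deeper Q-network for 2048 in Keras could look something like this:

```python
import tensorflow as tf

# Assumed encoding: the 4x4 board flattened to 16 values (e.g. log2 of each tile),
# with 4 outputs, one Q-value per move (up/down/left/right).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(4),
])
# A lower learning rate, per the advice above, if the current setting fails to converge.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="mse")
```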

QLearning usage on a repetitive simulation

I am using the Q-learning algorithm on a simulation. This simulation has a limited number of iterations (600 to 700). The learning process is activated over several runs of this simulation (100 runs).
I am new to reinforcement learning, and I have a question about how to handle exploration/exploitation in this kind of simulation (I am using ε-greedy exploration).
I am using a decreasing exploration rate, and I am wondering whether I should decrease it across all simulation runs combined, or decrease it within each simulation run (initializing epsilon to 0.9 at the start of each run and then decreasing it).
Thank you
You won't need such a high initial epsilon. It might be better to initialize the Q-values very high, so that unexplored actions are always picked over actions that have been explored at least once.
Considering your state space, it doesn't matter much whether you decrease epsilon across all runs or within each individual run, but per-run sounds like the better option.
How fast you decrease it will also depend on the dynamics of the environment and how fast the agent learns. I'm trying to make my alpha and epsilon correlate with the error, but that's tricky to do.
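Here is a small sketch of both suggestions (optimistic Q-value initialization plus an epsilon that is reset and decayed within each run); the state/action counts, initial values, and decay rate are made up for illustration:

```python
import numpy as np

n_states, n_actions = 100, 4
Q = np.full((n_states, n_actions), 10.0)   # optimistic init: untried actions look attractive

n_runs, steps_per_run = 100, 700
for run in range(n_runs):
    epsilon = 0.9                          # reset at the start of each simulation run
    state = 0                              # placeholder initial state
    for step in range(steps_per_run):
        if np.random.rand() < epsilon:
            action = np.random.randint(n_actions)   # explore
        else:
            action = int(np.argmax(Q[state]))       # exploit
        # ... apply the action in the simulation, observe reward and next state,
        # then do the usual Q-learning update of Q[state, action] here ...
        epsilon = max(0.05, epsilon * 0.995)        # decay within the run
```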

quadrotor accelerometer unstable

I am currently working on my quadrotor project. I am using an ADXL335 accelerometer and an L3G4200D gyroscope interfaced with an ATmega128. When I check readings from the accelerometer without running the motors, the values are accurate and stable. But when I start the motors, the values start to fluctuate, and the more I increase the speed, the more they fluctuate. I tried a Kalman filter; the results are similar, just less fluctuation, but still not enough for stable flight. My gyroscope readings also drift too much. Is this supposed to happen, or am I doing something wrong?
It is quite difficult to help you as "fluctuations" can be caused by several things.
I just checked the datasheet of the ADXL335 accelerometer. Have you added the bandwidth-limiting capacitors Cx, Cy, and Cz on the outputs? If not, they might help you reduce the fluctuations.
Another thing that might cause your fluctuations is interference from the motor cables coupling into the signal cables.
If you haven't used screened/shielded cables for the accelerometer, change them, and make sure you try to reduce whatever interference you find.
You might find some hints on good EMC design here.
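On the software side (not a substitute for the hardware fixes above), a first-order low-pass filter is the digital analogue of those bandwidth-limiting capacitors. This is only a sketch, and the smoothing factor is a value you would have to tune for your sample rate:

```python
def low_pass(samples, alpha=0.1):
    # Exponential moving average: smaller alpha -> stronger smoothing, more lag.
    filtered = []
    y = samples[0]
    for x in samples:
        y = alpha * x + (1 - alpha) * y
        filtered.append(y)
    return filtered

# Example: smooth a noisy accelerometer axis read at a fixed rate (made-up values).
smoothed_x = low_pass([512, 530, 498, 610, 505, 515], alpha=0.2)
```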
From your description, I would assume the motors are causing the interference. The way I see it, it could be caused in one of two ways:
The PCB was custom designed, housing both power electronics and the sensitive measurement units, and not enough care was taken to isolate the sensitive parts from the interference generated on the board.
The magnetic field from the motors is causing the fluctuations. This could be because the IMU is too close to the motors, improperly isolated, or improperly positioned. Try to avoid installing the IMU on the same plane as the motors. We currently have our IMU placed about 20 cm above the center of gravity of our drone. I cannot confirm that it caused the accelerometer to fluctuate, but it had an enormous influence on the compass when it was placed only 10 cm above the center of gravity.

can simulation stop time in simulink be compared with real time?

What time units does the simulation stop time use? Is it seconds or milliseconds? And is there any method to measure this time? Sometimes I feel that one unit of this time is not of constant length.
It's seconds. But Simulink does not run in real time, so one second of simulation time can take a lot less than a second of real time (if your model runs very fast) or a lot more (if your model runs very slowly).
If your model runs "too fast", you can use utilities such as Simulink Block for Real Time Execution, Simulink® Real Time Execution, Real-Time Blockset 7.1 for Simulink, Real-Time Pacer for Simulink or RTsync Blockset (there are plenty to choose from) to slow it down to real-time.