Hardware co-simulation using Digilent Atlys FPGA is slow - MATLAB

I'm using Digilent's Atlys FPGA board for image processing, but I'm facing one problem: when I do software co-simulation using the Black Box, I get the output very quickly, within about a minute, but when I generate the hardware co-simulation model and use it for hardware co-simulation, the output takes a very long time, 20 to 30 minutes. Why is this, and how can I overcome this long run time?

Related

Sampling rate is very low with ADS1256 and Raspberry Pi

I am trying to get data samples from a sensor using an ADS1256 library with a Raspberry Pi High-Precision AD/DA Expansion Board on my Raspberry Pi 2B.
Now, as mentioned in their code and the datasheet, it can take around 30,000 samples per second, but when I ran it, it was taking around 15 samples per second. After some modifications to the code, it is taking around 470 samples per second.
I need at least 1000-1500 samples per second.
Here again is the link to the ADS1256 code.
I tried to use this at a higher rate of speed. If you are waiting for the DRDY pin to go low for on the order of milliseconds, it isn't going to work. I had no luck modifying the software. I tried to use this http://abyz.me.uk/lg/lgpio.html#lguSleep but I never could get the interrupt to activate on the change of DRDY. It seems that the person who wrote the sample program for the ADS1256 could not either. I looked at the sample program for the MCP3202 http://abyz.me.uk/lg/lgpio.html#lguSleep and he does similar things: he sleeps for 0.2 s between samples. That won't work for his sample rate. One problem is that the Raspberry Pi has no real-time clock. I tried some Unix time routines and got back 0 as a result.
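
To make the timing constraint concrete: at 1000-1500 samples per second there is less than a millisecond between conversions, so any sleep in the millisecond range dominates the loop. Below is a minimal sketch (not tested) of a tight DRDY poll using the lg C library linked above; the function names are taken from its documentation, read_conversion() is a hypothetical placeholder for the 24-bit SPI read, and the DRDY GPIO number is only an assumption that must match your wiring.

/* Sketch only (untested): busy-poll DRDY instead of sleeping for
 * milliseconds. Function names are from the lg library documentation;
 * read_conversion() is a placeholder for the ADS1256 SPI read.           */
#include <lgpio.h>

#define DRDY_GPIO 17                     /* assumption - check your wiring */

static int read_conversion(void)
{
    /* Placeholder: clock the 24-bit result out over SPI here (RDATA).
     * Returning 0 keeps this sketch self-contained.                       */
    return 0;
}

int read_samples(int count)
{
    int chip = lgGpiochipOpen(0);        /* /dev/gpiochip0                 */
    if (chip < 0) return chip;

    lgGpioClaimInput(chip, 0, DRDY_GPIO);

    for (int n = 0; n < count; n++)
    {
        /* DRDY goes low when a conversion is ready; at 1000+ samples/s
         * the whole period is under 1 ms, so spin (or sleep only a few
         * microseconds with lguSleep) rather than sleeping milliseconds.  */
        while (lgGpioRead(chip, DRDY_GPIO) != 0)
            ;

        int sample = read_conversion();  /* store or process the sample    */
        (void)sample;
    }

    lgGpiochipClose(chip);
    return 0;
}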

LabVIEW FPGA Simulation Timing

This is a very basic question. I can't simulate a PWM file, in system time, from its FPGA VI file.
Details
For an NI cRIO-9067 + LabVIEW 2016 + Windows 8 system, under FPGA Interface Mode, I have the Test VI No.1.vi NI LabVIEW file and the corresponding FPGA Desktop Execution Node block file Test VI No.1 DEN.vi, as suggested in the Getting Started information [1] [2].
In both files, the Low Pulse and High Pulse Numeric Controls are filled with the 1000 value. The Loop Timer block is set as "mSec" Counter Unit and "32 Bit" Size of Internal Counter.
The compiled FPGA version of the first file executes a square wave that changes every 1 second, as expected, after 7 minutes of local compilation.
Under Simulation (Simulated I/O) as the Execution Mode, and to reproduce approximately, by trial and error, the square-wave timing of 1 second, I need to put the value 1750 in the Clock Ticks field (referenced to the FPGA 40 MHz Onboard Clock) shown in the block options.
I don't understand this block, and why I should not put any close divisor of 40,000,000 in the Clock Ticks field, or simply the value 1. Basically, I don't understand how to "time" these FPGA simulations.
The Desktop Execution Node is designed for time-based simulation, so you are definitely on the right track.
What you are setting at the top is the number of cycles that are executed each time you call the node. In your case you have 1750 ticks, so around 43.75 µs of simulated time per iteration.
To simulate in real time, you need to make sure that you execute the same amount of simulated time as the simulation loop takes to run. In your case you have no timing in your simulation loop, so the reason 1750 works for you is probably that that is about how long that loop takes to execute.
If you put a loop timer of 1 ms in and set the clock ticks to 40,000 (1 ms of simulated time), then I think you will find that it also works.
In some cases it may be beneficial to execute faster than real time, so you would just have to account for that in your maths. For example, if you set the clock ticks to 40 (1 µs of simulated time), then you can count the number of iterations and multiply by 1 µs to get the actual clock time that has been simulated.
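
To put numbers on that, here is a small standalone check of the tick arithmetic (plain C, nothing LabVIEW-specific; the 40 MHz figure is the onboard clock mentioned in the question, and the iteration count is just an illustrative number):

/* Sketch: converting the Desktop Execution Node "Clock Ticks" setting
 * into simulated time, assuming the 40 MHz onboard clock.                */
#include <stdio.h>

int main(void)
{
    const double clock_hz = 40e6;           /* FPGA onboard clock          */

    /* 1750 ticks, as in the question: 43.75 us of simulated time per call */
    printf("1750 ticks  -> %.2f us per call\n", 1750 / clock_hz * 1e6);

    /* 40,000 ticks: 1 ms of simulated time, matching a 1 ms loop timer    */
    printf("40000 ticks -> %.2f ms per call\n", 40000 / clock_hz * 1e3);

    /* Faster than real time: 40 ticks is 1 us per call, so after N calls
     * the elapsed simulated time is N * 1 us.                             */
    long n_calls = 500000;
    printf("40 ticks, %ld calls -> %.3f s simulated\n",
           n_calls, n_calls * (40 / clock_hz));
    return 0;
}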

Dymola/Modelica real-time simulation advances too fast

I want to simulate a model in Dymola in real time for HiL use. In the results I see that the simulation is advancing about 5% too fast.
Integration terminated successfully at T = 691200
CPU-time for integration : 6.57e+005 seconds
CPU-time for one GRID interval: 951 milli-seconds
I already tried to increase the grid interval to reduce the relative error, but the simulation is still advancing too fast. I have only read about approaches that reduce model complexity to allow simulation within the defined time steps.
Note that the simulation does keep up with real time and is even faster. How can I, in this case, match simulated time and real time?
Edit 1:
I used the Lsodar solver with the "Synchronize with realtime" option checked in the Realtime tab. I have the real-time simulation licence option. I use Dymola 2013 on Windows 7. Here is the result for a step size of 15 s:
Integration terminated successfully at T = 691200
CPU-time for integration : 6.6e+005 seconds
CPU-time for one GRID interval : 1.43e+004 milli-seconds
The deviation is still roughly 4.5%.
I did not, however, use inline integration.
Do I need hard real time or inline integration to improve these results? It should be possible to get a deviation lower than 4.5% using soft real time, shouldn't it?
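
(As a quick check, the quoted figures follow from the two integrator logs above if the deviation is taken as (simulated time - CPU time) / simulated time; that interpretation is my assumption.)

/* Sketch: deviation between simulated time and wall-clock (CPU) time,
 * using the figures from the two Dymola logs above.                      */
#include <stdio.h>

int main(void)
{
    const double t_sim   = 691200.0;   /* simulated end time, s           */
    const double t_cpu_1 = 6.57e5;     /* first run CPU time, s           */
    const double t_cpu_2 = 6.6e5;      /* Edit 1 run CPU time, s          */

    printf("first run : %.1f%% fast\n", 100.0 * (t_sim - t_cpu_1) / t_sim);  /* ~4.9% */
    printf("Edit 1 run: %.1f%% fast\n", 100.0 * (t_sim - t_cpu_2) / t_sim);  /* ~4.5% */
    return 0;
}
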
Edit 2:
I took the Python27 block from the Berkeley Buildings library to read the system time and compare it with the simulation's advance. The result shows that 36 hours after the simulation starts, the simulation slows down slightly (compared to real time). About 72 hours after the start of the simulation it starts getting about 10% faster than real time. In addition, the jitter in the result increases after those 72 hours.
Any explanations?
Next steps will be:
- changing to a fixed-step solver (this might well be a big part of the solution)
- changing from a DDE server to an OPC server, which at the moment does not seem to be possible in Dymola 2013, however.
Edit 3:
Nope... using a fixed-step solver does not seem to solve the problem. In the first 48 hours of simulation time the deviation seems to be equal to the deviation using a solver with variable step size. In this example I used the Rkfix3 solver with an integrator step of 0.1.
Does nobody know how to get rid of those huge deviations?
If I recall correctly, Dymola has a special compilation option for real-time performance. However, I think it is a licensed option (not sure).
I suspect that Dymola is picking up the wrong clock speed.
You could use the "Slowdown factor" that is in the Simulation Setup, on the Realtime tab just below "Synchronize with realtime". Set this to 1/0.95 (about 1.053) to compensate for the roughly 5% that the simulation is running fast.
There is a parameter in Dymola that you can use to set the CPU speed, but I could not find it just now; I will have another look for it later.
I solved the problem by switching to an embedded OPC server. The error between real time and simulation time in this case is shown below.
Compiling Dymola problems with an embedded OPC server requires administrator rights (which I did not have before), and the active folder of Dymola must not be write-protected.

Raspberry Pi CPU temperature spikes

I've got my Raspberry Pi with the latest firmware update, and I'm doing SoC temperature reads every 5 minutes (300 seconds). I came across several temperature spikes (from 50 °C up to 70 °C, and sometimes down to 30 °C). These temperature spikes happen 2-3, sometimes 4, times every day.
I've read there are some readout glitches in the temperature sensor, but could this be a hardware malfunction? Maybe a software (Linux) update will solve the problem?
I'm currently using it as a small home server, but if the temperature spikes persist I'll probably have to get a passive cooler.
I'm also sure that during those upward spikes the CPU/I/O/GPU workload stays the same.
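
For context, a minimal sketch of that kind of periodic read (not the asker's original script, which isn't shown), assuming the standard Linux thermal sysfs node that Raspbian exposes for the SoC sensor; vcgencmd measure_temp reads the same sensor:

/* Sketch: read the SoC temperature every 300 s from the Linux thermal
 * sysfs node (the value is in millidegrees Celsius on Raspbian).         */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    for (;;)
    {
        FILE *fp = fopen("/sys/class/thermal/thermal_zone0/temp", "r");
        if (fp != NULL)
        {
            int millideg = 0;
            if (fscanf(fp, "%d", &millideg) == 1)
                printf("SoC temperature: %.1f C\n", millideg / 1000.0);
            fclose(fp);
        }
        sleep(300);                    /* 5 minute interval, as above      */
    }
}
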
It's not exactly a solution, but it solved my problem. I changed the SD card (a generic 8 GB Class 4) to a micro-SD adapter + SanDisk 16 GB Class 10, installed the same Raspbian (in fact the same image file!), also did the system updates, and after 24 hours of running I haven't noticed any spikes.
Maybe an SD card write problem?
The SD card wasn't full, by the way.

Running a filter at a high speed

I'm writing signal processing software in CVI.
I've got a signal transmitted to the computer via USB at a very high rate (~50K).
I want to filter it in real time (RT).
In order to do that, I created a filter in Simulink and turned it into C code, which I run in CVI using:
FuncName_initialize()   /* one-time initialization of the generated filter */
FuncName.in             /* write the new input sample                      */
FuncName_step()         /* run one filter step                             */
FuncName.Out            /* read the filtered output sample                 */
The thing is that after a while (about 5-7 minutes) the filter works wrongly, meaning it shows inaccurate results and artifacts. I believe this is a result of running it too fast (because I used it before at lower rates and it was fine).
Any suggestions on what might be the problem? How can I implement an RT filter in CVI directly (meaning one that takes one point at the input and produces one point at the output while maintaining some window)?
I know that the data is transmitted just fine at this rate, since recording the signal works OK and showing the raw data on screen works OK as well.
Thank you
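
For what it's worth, here is a minimal sketch of the "one point in, one point out" structure asked about above: a single second-order IIR (biquad) section in plain C with persistent state, which avoids buffering a window at all. The coefficient names are placeholders and would come from the Simulink/MATLAB filter design; a higher-order filter is just several such sections in series.

/* Sketch: streaming IIR filter, one sample in / one sample out.
 * Single biquad section in direct form II transposed; the coefficients
 * b0..b2, a1, a2 are placeholders to be taken from the filter design.
 * The state (z1, z2) persists between calls, so the function can be
 * called once per incoming sample at the incoming data rate.              */
typedef struct {
    double b0, b1, b2;    /* numerator coefficients                        */
    double a1, a2;        /* denominator coefficients (a0 normalized to 1) */
    double z1, z2;        /* filter state                                  */
} Biquad;

double biquad_step(Biquad *f, double x)
{
    double y = f->b0 * x + f->z1;
    f->z1 = f->b1 * x - f->a1 * y + f->z2;
    f->z2 = f->b2 * x - f->a2 * y;
    return y;
}

/* Usage, once per sample as it arrives from the USB stream:
 *     Biquad f = { b0, b1, b2, a1, a2, 0.0, 0.0 };
 *     double out = biquad_step(&f, in);
 */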