I am using a Raspberry Pi Zero W to record videos using motion detection. I have read of others doing this easily at frame rates of 5 per second and above; I am only trying to get 2 frames per second.
Everything works fine, except that the recorded videos do not keep up with real time. I still get 2 frames per second, but many of the frames are duplicates.
It seems that the Pi Zero W cannot keep up, but CPU usage never gets much higher than 50%. I am using a Class 10 SD card, so I do not think it is a disk I/O problem.
I originally did this on a Pi 2B with no problems at 720p. I tried reducing the resolution to 640x480 and got the same result.
Any suggestions?
Setup:
Raspberry Pi Zero W
Raspbian 9.4 stretch
Apache/2.4.25 (Raspbian)
PHP 7.0.27-0+deb9u1
Samba Version 4.5.12-Debian
Motion 4.0 (installed via apt-get)
motion.conf
https://pastebin.com/Mr438ned
Example video: watch the timestamp; it should be updating every second.
I have four DS18B20 temperature sensors connected to my Raspberry Pi, using 1-Wire and a pull-up resistor.
I read the values directly via cat from the 1-Wire devices and put the values, without any calculation, into a gnuplot data file.
The setup has been running fine for weeks now, measuring a cool box at different places in a range between about 0°C and 30°C. I got quite accurate readings and plots.
But suddenly the values of all sensors started to "flutter"; they became unstable. They also dropped, all four of them, by about a quarter of a °C. The flutter is about 0.1°C to 0.2°C. Two of the sensors are actually immersed in fluid (0.5l and 25l), so it is virtually impossible that they suddenly drop or flutter.
The time of the event coincides with me checking the cooler box. I might have shifted or touched some of the sensor parts. But can that explain the temperature shift? What happened? How can I fix it?
It seems that a cause of the problem could be the resolution being lowered. This is a (volatile) setting stored in the sensor itself. It can be set to 9, 10, 11 or 12 bits. The higher the resolution, the more precise the measurements, but at the cost of a longer measurement time.
According to the DS18B20 datasheet, the resolution defaults to 12 bits after power-up. In addition, the drivers handling 1-Wire communication with the sensor often also set the highest possible resolution by default during startup. This could explain why rebooting fixed the problem in the OP's case, but it does not explain why the change in resolution happened in the first place. That likely depends on the specific setup and will have to be resolved on a case-by-case basis.
Furthermore, to confirm that the measurements are indeed being done at a lower resolution, one can look at the raw numeric samples and check the minimum step by which the measurements change: at 12-bit resolution the minimum delta is 0.0625 degrees, while at 9-bit resolution readings only change in steps of 0.5 degrees and nothing in between (see the sketch below).
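To illustrate, here is a minimal sketch of that check in C, assuming the standard kernel w1-therm interface; the sensor ID in the path is a placeholder for whatever shows up under /sys/bus/w1/devices/ on your Pi:

    /* Read a DS18B20 through the kernel 1-Wire interface and print the value.
     * Watch the step size between consecutive readings: multiples of 0.0625 C
     * mean 12-bit resolution, steps of 0.5 C mean the sensor fell back to 9 bits. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* hypothetical sensor ID -- substitute your own */
        const char *path = "/sys/bus/w1/devices/28-000000000000/w1_slave";
        char line[256];
        long milli = 0;

        FILE *f = fopen(path, "r");
        if (!f) { perror("fopen"); return 1; }

        /* the second line of w1_slave ends in "t=<millidegrees C>" */
        while (fgets(line, sizeof line, f)) {
            char *t = strstr(line, "t=");
            if (t) milli = atol(t + 2);
        }
        fclose(f);

        printf("%.4f C\n", milli / 1000.0);
        return 0;
    }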
I am trying to get data samples from a sensor using an ADS1256 library with a Raspberry Pi High-Precision AD/DA Expansion Board on my Raspberry Pi 2B.
As mentioned in their code and datasheet, it can take around 30,000 samples per second, but when I ran it, it was only taking around 15 samples per second. After some modifications to the code, it now takes around 470 samples per second.
I need at least 1000-1500 samples per second.
Here again is the link to the ADS1256 code.
I tried to use this at a higher rate of speed. If you wait for the DRDY pin to go low by sleeping on the order of milliseconds, it isn't going to work. I had no luck modifying the software. I tried to use this http://abyz.me.uk/lg/lgpio.html#lguSleep but I never could get the interrupt to activate on the change of DRDY. It seems that the person who wrote the sample program for the ADS1256 could not either. I looked at the sample program for the MCP3202 (http://abyz.me.uk/lg/lgpio.html#lguSleep) and he does similar things: he sleeps for 0.2 s between samples, which won't work at his sample rate. One problem is that the Raspberry Pi has no real-time clock; I tried some Unix time routines and got back 0 as a result.
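For what it's worth, here is a rough sketch of the edge-driven approach I was trying to get working, written against the pigpio C library instead of lg (the DRDY pin number is an assumption, and the SPI read is left out):

    /* React to the ADS1256 DRDY line going low via an edge callback instead of
     * sleeping between polls.  Build with: gcc drdy.c -lpigpio -lrt -pthread */
    #include <stdio.h>
    #include <stdint.h>
    #include <pigpio.h>

    #define DRDY_PIN 17               /* hypothetical BCM pin wired to DRDY */

    static volatile int samples = 0;

    /* called by pigpio on every level change of DRDY */
    static void drdy_cb(int gpio, int level, uint32_t tick)
    {
        if (level == 0) {             /* falling edge: a conversion is ready */
            /* read the sample over SPI here */
            samples++;
        }
    }

    int main(void)
    {
        if (gpioInitialise() < 0) return 1;

        gpioSetMode(DRDY_PIN, PI_INPUT);
        gpioSetAlertFunc(DRDY_PIN, drdy_cb);   /* edge-driven, no sleeps */

        time_sleep(1.0);                       /* collect for one second */
        printf("samples in 1 s: %d\n", samples);

        gpioTerminate();
        return 0;
    }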
I have been following a tutorial that enables you to play around with the TXPOWER parameter of your wifi card / wifi adapter:
http://null-byte.wonderhowto.com/how-to/set-your-wi-fi-cards-tx-power-higher-than-30-dbm-0149606/
You can easily boost your Wi-Fi range by increasing the TXPOWER.
Now, most people want to improve the Wi-Fi signal strength of their home router, right? In my case, however, I would like my home router (which runs on a Raspberry Pi) to have a relatively small Wi-Fi signal radius (say, a radius of 2 meters), so that you actually need to physically look for the Pi home router when trying to connect to it.
I have learned that this tutorial does not change the Wi-Fi link quality and/or the Wi-Fi signal level, and thus does not influence the Wi-Fi radius of my Pi home router.
link quality & signal level
Do you have any ideas or thoughts about how to decrease the link quality and/or the Wi-Fi signal level (e.g. Link Quality = 12/70 and Signal level = -10 dBm)? Is this even possible?
I am using a TP-Link TL-WN722N IEEE 802.11n USB Wi-Fi adapter (Wireless Lite N, 150 Mbps, external, high gain, with one detachable antenna).
First, I recommend reviewing this section from your link:
QUICK DECIBEL UNDERSTANDING:
Every 10 decibels is a 10X increase in power, starting from 0 dBm equal to 1 mW... 10 dBm equals 10 mW, 20 dBm equals 100 mW, 30 dBm equals 1000 mW, and so on. Every 3 decibels approximately doubles the prior power, so if 30 dBm is 1000 mW and we add 3 dBm, we double the power: 33 dBm is about equal to 2000 mW.
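If it helps to see the arithmetic behind that quote, the conversion is just mW = 10^(dBm/10); a quick check:

    /* dBm to milliwatts: 0 dBm = 1 mW, and every +10 dB is a factor of 10 */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const int levels[] = { 0, 10, 20, 30, 33 };
        for (int i = 0; i < 5; i++)
            printf("%2d dBm = %7.1f mW\n", levels[i], pow(10.0, levels[i] / 10.0));
        return 0;
    }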
It appears to me that you are able to modify the transmit power of your adapter as the tutorial states. Are you saying this is not working? If you set your transmit power to something extremely low (-30 dBm, for example), you would effectively be turning off the transmitter. Keep increasing that value until you get your desired coverage radius.
If the transmit power parameter is not functioning as per the tutorial, there are other means to achieve reduced coverage. The model you specified has a detachable antenna... so detach it. This will definitely reduce your coverage. However, if it reduces coverage too much, you can simply add an inline attenuator. Fortunately, your antenna uses an SMA connector, which is very common. You can find many SMA attenuators on eBay with different attenuation values. Experiment with different values until you get the desired coverage.
And if that doesn't work, just wrap a bunch of aluminum foil around the thing lol.
I've got my Raspberry Pi on the latest firmware update and I'm reading the SoC temperature every 5 minutes (300 seconds). I have come across several temperature spikes (from 50 °C to 70 °C, and sometimes down to 30 °C). These temperature spikes happen 2-3, sometimes 4, times every day.
I've read there are some readout glitches in the temperature sensor, but could this be a hardware malfunction? Maybe a software (Linux) update will solve the problem?
I'm currently using it as a small home server, but if the temperature spikes persist I'll probably have to get a passive cooler.
I'm also sure that during those upward spikes the CPU/I/O/GPU workload stays the same.
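For reference, the reading itself is nothing fancy; a minimal sketch of the kind of read I take every 300 seconds, assuming the standard sysfs thermal zone (this is not my exact script):

    /* Read the SoC temperature from the kernel thermal zone (millidegrees C). */
    #include <stdio.h>

    int main(void)
    {
        long milli;
        FILE *f = fopen("/sys/class/thermal/thermal_zone0/temp", "r");
        if (!f) { perror("fopen"); return 1; }

        if (fscanf(f, "%ld", &milli) == 1)
            printf("%.1f C\n", milli / 1000.0);
        fclose(f);
        return 0;
    }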
It's not exactly a solution, but it solved my problem. I changed the SD card (an 8 GB generic Class 4) to a micro-SD adapter plus a SanDisk 16 GB Class 10, installed the same Raspbian (in fact the same image file!), also did the system updates, and after 24 hours of running I haven't noticed any spikes.
Maybe an SD card write problem?
The SD card wasn't full, by the way.
I'm running a test to measure the basic latency of my iPhone app, and the result was disappointing: 50 ms for a play-through test app. The app just picks up mic input and plays it out using the same render callback, with no other audio units or processing involved. The result therefore seemed too bad for such a basic scenario, and I need some pointers to see whether the result makes sense or whether I had design flaws in my test.
The basic idea of the test was to have three roles:
1. My finger snap as the reference sound source.
2. A simple iOS play-thru app (using the built-in mic) as the first listener to #1.
3. A Mac (with a USB mic and Audacity) as the second listener to #1 and the only listener to the iOS output (through a speaker connected via the iOS headphone jack).
Then, with Audacity in recording mode, the Mac would pick up both the sound from my fingers and its "clone" from the iOS speaker in close range. Finally I simply visually observe the waveform in Audacity's recorded track and measure the time interval between the peaks of the two recorded snaps.
This was by no means a super accurate measurement, but at least the innate latency of the Mac recording pipeline should have been cancelled out this way, so the error should mainly come from the peak-distance measurement, which I assume is much smaller than the audio pipeline latency and can be ignored.
I was expecting 20 ms or lower latency, but the result was clearly 50-60 ms.
My ASBD uses kAudioFormatLinearPCM as the format ID with kAudioFormatFlagsCanonical as the format flags.
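For reference, the stream format is set up roughly like this (a sketch; the exact numbers in my app may differ slightly):

    /* Mono 16-bit canonical linear PCM at 44.1 kHz. */
    #include <AudioToolbox/AudioToolbox.h>

    static AudioStreamBasicDescription makeASBD(void)
    {
        AudioStreamBasicDescription asbd = {0};
        asbd.mSampleRate       = 44100.0;
        asbd.mFormatID         = kAudioFormatLinearPCM;
        asbd.mFormatFlags      = kAudioFormatFlagsCanonical; /* signed int, packed, native endian */
        asbd.mChannelsPerFrame = 1;
        asbd.mBitsPerChannel   = 16;
        asbd.mBytesPerFrame    = 2;   /* 1 channel x 16 bits */
        asbd.mBytesPerPacket   = 2;
        asbd.mFramesPerPacket  = 1;
        return asbd;
    }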
50 ms is about 4 ms more than the duration of 2 audio buffers (one output, one input) of size 1024 at a sample rate of 44.1 kHz.
17 ms is around 5 ms more than the duration of 2 buffers of length 256.
So it looks like the iOS audio latency is around 5 ms plus the duration of the two buffers (the audio output buffer duration plus the time it takes to fill the input buffer)... on your particular iOS device.
A few iOS devices may support even shorter audio buffer sizes of 128 samples.
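If you want to check those figures against your own buffer sizes, the arithmetic is simply two buffer durations plus a few milliseconds of fixed overhead; a quick sketch (the ~4-5 ms overhead figure is just read off the numbers above):

    /* Round-trip estimate: one input buffer + one output buffer + fixed overhead. */
    #include <stdio.h>

    int main(void)
    {
        const double sample_rate = 44100.0;
        const double overhead_ms = 4.5;            /* rough figure from above */
        const int    frames[]    = { 1024, 256, 128 };

        for (int i = 0; i < 3; i++) {
            double buffer_ms = frames[i] / sample_rate * 1000.0;
            printf("%4d frames: 2 x %.1f ms + ~%.1f ms = ~%.1f ms\n",
                   frames[i], buffer_ms, overhead_ms, 2.0 * buffer_ms + overhead_ms);
        }
        return 0;
    }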
You can use Core Audio and set up the audio session to have very low latency.
You can request a smaller buffer by setting the preferred I/O buffer duration with AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,...
Using smaller buffers causes the audio callback to happen more often while grabbing smaller chunks of audio. Keep in mind that this is merely a suggestion to the audio system; iOS will pick a suitable callback interval based on your sample rate and integer powers of 2.
Once you set the buffer duration, you can query the actual buffer duration the system will use with AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration,...
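Put together, the two calls look roughly like this (a sketch using the legacy C AudioSession API mentioned above, with error handling omitted):

    /* Request a smaller I/O buffer and read back what the system actually granted. */
    #include <AudioToolbox/AudioToolbox.h>
    #include <stdio.h>

    void configureBufferDuration(void)
    {
        AudioSessionInitialize(NULL, NULL, NULL, NULL);

        /* ask for ~256 frames at 44.1 kHz (about 5.8 ms) */
        Float32 preferred = 256.0f / 44100.0f;
        AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                                sizeof(preferred), &preferred);

        AudioSessionSetActive(true);

        /* iOS rounds to a duration it supports; query the real value */
        Float32 actual = 0.0f;
        UInt32  size   = sizeof(actual);
        AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration,
                                &size, &actual);
        printf("granted buffer duration: %.3f ms\n", actual * 1000.0f);
    }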
I'll summarize Paul R's comments as the answer, which has solved my problem:
50 ms corresponds to a total buffer size of around 2048 at a 44.1 kHz sample rate, which doesn't seem unreasonable given that you have both a record and a playback path.
I don't know that the buffer size is 2048, and there may be more than one buffer in your record-playback loopback test, but it seems that the effective total buffer size in your test is probably on the order of 2048, which doesn't seem unreasonable. Of course, if you're only interested in record latency, as the title of your question suggests, then you'll need to find a way to tease that out separately from playback latency.