Double filtering the signal

My extracellularly recorded data from the hippocampus originally had a 40 kHz sampling rate. However, the data I have been given has already been band-pass filtered to 300-9000 Hz.
Now, if I filter this data again to 300-9000 Hz, does that change anything?
What if I filter it to 300-7500 Hz instead? Would that be the same as if I had filtered the original data (at the 40 kHz sampling rate) to a frequency range of 300-7500 Hz?

1 - Ideally, nothing changes. Your data has already been band-pass filtered, so if you apply the same filter again the passband is essentially unaffected (the roll-off outside the band just becomes steeper). Do take into account that digital filters introduce delays, which can affect your data not in amplitude but in phase/time.
2 - No, it's not the same. Every filter has a frequency response, and if you change the cut-off frequencies you also change that response, so your data won't be identical. It may not differ much, but doing this is, in my opinion, a mistake.
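To illustrate both points, here is a rough sketch in Python/SciPy. The actual filter applied to your data is unknown, so the 4th-order Butterworth and the noise stand-in below are assumptions, not your pipeline:

import numpy as np
from scipy import signal

fs = 40000.0                                  # original sampling rate (40 kHz)
x = np.random.randn(int(fs))                  # stand-in for one second of raw data

# 300-9000 Hz band-pass, standing in for the filter already applied to the data
b1, a1 = signal.butter(4, [300, 9000], btype='bandpass', fs=fs)
once = signal.filtfilt(b1, a1, x)             # zero-phase filtering avoids group delay
twice = signal.filtfilt(b1, a1, once)         # same band again: the passband barely changes,
                                              # but the roll-off outside the band gets steeper

# Narrower 300-7500 Hz band applied to the already-filtered data
b2, a2 = signal.butter(4, [300, 7500], btype='bandpass', fs=fs)
narrow_refilter = signal.filtfilt(b2, a2, once)
narrow_direct = signal.filtfilt(b2, a2, x)    # filtering the raw data directly instead
# narrow_refilter and narrow_direct are close but not identical: the combined
# response of two cascaded filters is the product of their individual responses.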
For your own benefit, do some further reading on digital filters. If anything isn't clear, write a comment below.

Related

How to persist previous data point when time range doesn't include a data point

TL;DR:
Can I get Grafana to show me the previous data point when the currently selected time period does not contain a data point? I have an example which sounds ridiculous, but at least it's simple to understand: I send data every 1 minute, and I wish to zoom into the last 30 seconds and still see data. You may ask "why not just zoom out to 2 minutes?", but the reason is that other data on the same graph updates more often, and I wish to compare against that data. There are also some lengthier reasons below.
If not, how can I achieve what I want to achieve, see below?
Context
For a few years, I have been monitoring the water level in three of our basement sumps (which have pumps installed) by sending this data from Node-RED to InfluxDB, then visualising the sump levels in Grafana. I have set up three waterproof ultrasonic distance sensors, each pointed down a pipe that is inserted vertically into each sump. The water fills the pipe and the distance sensor, connected to an Arduino, sends me the reading. The Arduino also has other sensors connected (temp / humidity) and deals with distance calibrations to calculate the percent full of each sump. All this data is sent to Node-RED. In total, I am sending 4 values per sump: distance measurement in mm, percent full, temp, humidity. So that's 12 fields. Data is sent every 2 seconds, because I wished to have a reasonably high resolution to see nice curves in graphs.
Also I decided to store all this data so that I could later troubleshoot issues (we have had sewage floods resulting in water not being able to be pumped away, etc...) and design some warning systems for these issues based on data.
Storing 12 values for every 2 seconds, over the course of a number of years, takes up a lot of space (8GB).
Nature of the data
Storing this resolution of data has also helped me be able to describe the nature of the data. I will do so here.
(1) Non-meaningful NOISE - the percent-full reading goes up and down by 1 or 2 percent every couple of seconds.
(2) Meaningful DRIFT - I don't mean sensor drift; I am referring to the actual water level changing slowly over time, e.g. over 1 day or 1 week. Perhaps condensation on the walls drips down into the sump, or water evaporates from the sump, and the value can waver by a few percent over the course of a day. Each sump has slightly different characteristics.
(3) Meaningful MONITORING DATA - during wet weather, depending on the amount of rainfall, the sumps fill up over the course of, say, 30 minutes to 3 hours. Then the pumps run and the water level drops again, wavers a bit, then the sumps continue to fill up. Once the rain has stopped, you can see a lovely curve as the water fills in progressively more slowly.
Solution to downsample
I know Influx has its own downsampling possibilities, however because of the nature of the data (which can hardly vary for 2 months but when it does, I really need to capture it in detail), I don't think lowering the sample rate is a great idea.
I have some understanding of digital filters (e.g. low-pass, etc.) but have never programmed one myself. So I have written a basic filter in JavaScript (a Node-RED function) to filter the data in real time as follows: only send a reading when it has changed from the previously sent one by x amount (and update the stored previous value when that occurs).
This has already vastly reduced the amount of data being stored, and I can vary x to filter out the noise shown in my first graph above, at the expense of resolution when the pumps run. Even if I set x to 2, it still vastly reduces the data over long periods of dry weather.
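For reference, the deadband logic amounts to something like the following, sketched in Python for readability rather than the actual Node-RED JavaScript function; threshold plays the role of x:

last_sent = None
threshold = 2.0   # "x": minimum change (in percent) before a reading is forwarded

def delta_filter(value):
    """Forward a reading only when it differs from the last forwarded one by >= threshold."""
    global last_sent
    if last_sent is None or abs(value - last_sent) >= threshold:
        last_sent = value
        return value   # send to InfluxDB
    return None        # drop the sample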
So - onto my problem! Now data is not being logged to InfluxDB unless there is some meaningful change. Which means that when I zoom in to e.g. 15 minute timeframe of data, there is nothing to see.
Grafana does have the option of "fill (previous)" but this draws a line between points on the existing graph, rather than showing the previous data as if it hasn't changed since that point. Now my grafana dashboard looks a bit sad :(
One proposed solution is, in addition to sending the "delta" data, to also send "summary" data - that is, send a full suite of values every 1 minute regardless of whether anything changed. But then we get the noise back again, plus pointless storage.
Any other ideas?

Different length of sound files with different sampling frequencies

I'm currently struggling to understand what is happening. I created a sound using the audiowrite function in MATLAB (the sound is created from two different sounds, but I don't think that matters), first with a sampling frequency of 44100 Hz, and then again with a sampling frequency of 48000 Hz. Now I'm observing that the sound produced at 44100 Hz is approx. 30 s longer than the other one (48000 Hz sampling). It looks like phase shifting of some sort, but I'm not sure. Any help/explanation is appreciated. I also made an amplitude/time plot for a better understanding:
(I set the x axis to 350 s to see where the signal ends.)
EDIT: here is the code for how I create the sound file:
[y1,F1] = audioread(cave_file);   % cave and forest files are mp3s loaded earlier; both have a sampling frequency of 48000 Hz
[y2,F2] = audioread(forest_file);
samp_freq = 44100;
%samp_freq = 48000;
a = max(size(y1), size(y2));                       % length of the longer of the two signals
z = [[y1; zeros(abs([a(1),0] - size(y1)))], ...    % zero-pad both signals to the same length
     [y2; zeros(abs([a(1),0] - size(y2)))]];       % and place them in separate channels
audiowrite('test_sound.wav', z, samp_freq);        % samples are written as-is; audiowrite does not resample
What is the storage format? More specifically, is the information about the sampling rate and the number of channels stored in the file's metadata, which is then used during playback?
If so, then there are 3 possibilities for this behavior:
1) The sample-rate metadata of the 44.1 kHz file is incorrect, while the audio itself was sampled at the correct rate, i.e. 44.1 kHz. Because the 44.1 kHz file plays longer than the 48 kHz one (which I'm assuming sounds correct and plays for the correct duration), it can be concluded that the sample rate stored in the 44.1 kHz file's metadata is lower than 44.1 kHz.
Could you please check the metadata, or attach the files here so that I can take a look?
2) The sampling didn't happen at the correct rate, while the metadata records 44.1 kHz as the sampling rate.
3) The number of channels is stored incorrectly.
In case the files are raw PCM, it's probably that the correct sampling rate and/or number of channels is not being selected when playing the 44.1 kHz file.
Hope this helps
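If you want to rule possibility 1) in or out quickly, the header of the written file can be inspected with Python's standard wave module; a small sketch (using the test_sound.wav name from the question):

import wave

with wave.open('test_sound.wav', 'rb') as w:
    rate = w.getframerate()                 # sample rate stored in the header
    frames = w.getnframes()
    channels = w.getnchannels()
    print(rate, channels, frames / rate)    # playback duration implied by the header

# Samples captured at 48 kHz but written with a 44.1 kHz header play back
# 48000 / 44100 ~ 1.088 times slower, i.e. roughly 30 s longer on a ~5.5-minute clip.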

change Frequency of coming data

I have a gyro sensor connected to my Python program (over a UDP socket) that sends data to the Python console in real time, at a rate of 200 Hz.
I want to change the rate at which this data arrives at my console, but I could not find a good way to do it.
I was thinking about doing it with filters, like a mean filter. Any ideas?
If you want regular updates, use a windowing mechanism. Take the last n values and store their average. Then advance by two samples and take the last n values again. This example would yield values at a rate of 200 Hz / 2 = 100 Hz.
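A minimal sketch of that windowing idea in Python (the window length n and the step of 2 are just example values):

from collections import deque

def downsample(samples, n=4, step=2):
    """Average a sliding window of n samples, emitting one value every `step`
    incoming samples: a 200 Hz stream with step=2 gives 100 Hz output."""
    window = deque(maxlen=n)
    for i, value in enumerate(samples, start=1):
        window.append(value)
        if i % step == 0 and len(window) == n:
            yield sum(window) / n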
If you only want to see events when changes have occurred, store the last value, compare the current value with the stored one, and emit an event (updating the stored value) when it has changed. As you're dealing with sensors (and thus a little fuzziness), you probably want to implement a hysteresis.
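And a rough sketch of the change-detection idea with a hysteresis band (the 0.5 threshold is an arbitrary example value):

last_emitted = None

def on_sample(value, hysteresis=0.5):
    """Emit an event only when the value moves outside a band around the last emitted value."""
    global last_emitted
    if last_emitted is None or abs(value - last_emitted) > hysteresis:
        last_emitted = value
        print("changed:", value)   # emit the event however you like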
You can even raise the frequency by creating extra values in between the received ones through interpolation. For a steady output rate, though, you would have to take care with your timing.
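Interpolation to a higher rate could look like this, using linear interpolation from NumPy; the 400 Hz target rate is just an example:

import numpy as np

def upsample(values, in_rate=200, out_rate=400):
    """Linearly interpolate a block of samples from in_rate to out_rate."""
    t_in = np.arange(len(values)) / in_rate
    t_out = np.arange(int(len(values) * out_rate / in_rate)) / out_rate
    return np.interp(t_out, t_in, values)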

How to find out carrier signal strength programmatically

Here is the code that I have used to find out carrier signal strength:
int getSignalStrength()
{
    // Load the private CoreTelephony framework and look up the symbol at runtime
    void *libHandle = dlopen("/System/Library/Frameworks/CoreTelephony.framework/CoreTelephony", RTLD_LAZY);
    int (*CTGetSignalStrength)();
    CTGetSignalStrength = dlsym(libHandle, "CTGetSignalStrength");
    if (CTGetSignalStrength == NULL) NSLog(@"Could not find CTGetSignalStrength");
    int result = CTGetSignalStrength();
    dlclose(libHandle);
    return result;
}
It is giving me values between 60 and 100, but when I test the signal strength on the device by dialling *3001#12345#* it shows -65. Below I have attached the screenshot. Is the value coming from getSignalStrength() accurate? And why does it always return positive values?
getSignalStrength() is showing the negated dB value, e.g. -(-60) == 60. The *3001#12345#* field-test screen displays it more conventionally as a value below zero. Yes, these kinds of variations are typical; even under strictly controlled conditions you can see +/- 20 dB. One way to deal with it is to take a series of measurements over time, say every second, and keep a running list of the last 10 or so. Then report the mean, the median, or some other statistic. In my experience this cuts the variation down a lot. You can also compute the standard deviation to provide a measure of reliability.
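A sketch of that running-window idea, written in Python for brevity rather than Objective-C; the window length of 10 follows the suggestion above:

from collections import deque
from statistics import median, stdev

window = deque(maxlen=10)              # the last 10 raw readings

def report(new_reading):
    """Add the latest reading and return a smoothed value plus a reliability estimate."""
    window.append(new_reading)
    spread = stdev(window) if len(window) > 1 else 0.0
    return median(window), spread      # statistics.mean(window) would work too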
My observation of CTGetSignalStrength() in CoreTelephony is that it returns the RSSI (Received Signal Strength Indication) of the carrier's signal. In my experiments it returns values in the range 0-100, which I believe is the "percent signal strength" of the carrier.
Also, iOS does not appear to map signal strength to bars in a linear manner.
I also noticed that, even when there are 5 bars in the status bar, the RSSI value may not be 100.
getSignalStrength() provides the result on a negated scale. The fluctuation can be smoothed with a simple low-pass (averaging) filter.
You should gather the last 10 observations. When adding a new observation, take the average of the last 10, multiply it by 0.1, multiply the current reading by 0.9, and add the two. If there are more than 10 observations in total, remove the oldest one before the next calculation.
This will make your result more reliable, and you can also handle sudden changes in signal strength effectively.
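In code, the weighting described above amounts to roughly this (Python for illustration):

history = []   # most recent readings, newest last

def smoothed(current):
    """Combine the average of the last 10 readings (weight 0.1) with the current one (weight 0.9)."""
    result = current if not history else 0.1 * (sum(history) / len(history)) + 0.9 * current
    history.append(current)
    if len(history) > 10:
        history.pop(0)   # keep only the 10 most recent observations
    return result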
I would recommend reading up on decibel measure. The following link should help.
How to Read Signal Strength
As with many signals, getSignalStrength() should be used in conjunction with a smoothing (low-pass) filter.
Also make sure to average the measurements over some time (10-15 successive readings).
And it's normal that it gives a negative result; dB values for telephony signal strength are like that ;-)

Sine LUT in VHDL won't simulate below 800 Hz

I made a sine LUT for VHDL, using 256 elements.
I'm using MIDI input, so the values range from 8.17 Hz (note #0) to 12543.85 Hz (note #127).
I have another LUT that calculates how many values must be sent to my 48 kHz codec in order to play the sound (the 8.17 Hz note needs 48000/8.17 ≈ 5870 values).
I have another LUT that contains an index factor, 100*256/num_Values (scaled by 100 to keep some precision), which is used to look values up from the sine table (e.g. 100*256/5870 = 4 with integer rounding).
I send this index factor to another VHDL file, which calculates which value should be sent back (e.g. index = index_factor*step_counter).
When I get this index, I divide it by 100 and call sineLUT[index] to get the value that I need to generate a sine wave at the desired frequency.
The problem is that only the highest 51 notes seem to work for me, and I do not know why. Below that frequency (< 650 Hz or so) it seems to get stuck on a constant note, which just decreases in volume every time I lower the note.
If you need parts of my code, let me know.
Just guessing, I suspect your step_counter isn't going through enough cycles, so your index (into the sine lut) doesn't go through a full 360 degrees for the lower frequencies.
For anything more helpful, you'll probably have to post code.
As an aside, why aren't you using something more like a conventional DDS? Analog Devices has a nice write-up on the basics: DDS Tutorial
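For a sense of what that buys you, here is a behavioural sketch of the usual DDS phase accumulator, written in Python rather than VHDL, with arbitrary accumulator and LUT widths:

import math

FS = 48000                      # codec sample rate
LUT_BITS = 8                    # 256-entry sine LUT
PHASE_BITS = 24                 # accumulator width; more bits = finer frequency steps
SINE_LUT = [round(127 * math.sin(2 * math.pi * i / (1 << LUT_BITS)))
            for i in range(1 << LUT_BITS)]

def dds(freq_hz, n_samples):
    """Add a fixed tuning word to the phase accumulator every sample and use
    its top LUT_BITS bits to index the sine table."""
    acc = 0
    tuning_word = round(freq_hz * (1 << PHASE_BITS) / FS)
    for _ in range(n_samples):
        acc = (acc + tuning_word) & ((1 << PHASE_BITS) - 1)
        yield SINE_LUT[acc >> (PHASE_BITS - LUT_BITS)]

With a 24-bit accumulator, the tuning word for 8.17 Hz is still about 2856 counts per sample, so low notes never collapse to a constant LUT index the way a coarsely rounded index factor can.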