FPGA, reloadable, programmable FIR or IIR filter - filtering

I need help with an FIR filter. The standard filter in Vivado (FIR Compiler) with fixed coefficients works, but I need to be able to change the coefficients. FIR Compiler allows it, but I don't know how to program it (I tried to write something, but I don't know if it's right). I also tried the FDATool FIR filter with a processor interface. The filter itself also works, but I don't know how to send the coefficients to it or how to time the transfer (I already have the code that generates the coefficients). I need help with this quickly, so if anyone knows how to set it up I'll be very happy. I've spent a lot of time searching and testing, but I haven't found a demonstration of how to set up the FIR filter (in C). I'm getting pretty desperate and don't know what else to do. Is anyone familiar with this topic who could help me? I'd be very grateful.
How can I do it?
Thomas.

Related

Accelerometer Reading

I am using an STM accelerometer with an STM controller.
When I get readings from the accelerometer, the values look random; it even shows non-zero values when the device is held in a steady position.
I'm stuck here and need some ideas and suggestions on how to solve this issue. If someone has documentation or sample code for it, please let me know.
There are several potential issues. The accelerometer may need to be calibrated; often they will not read exactly zero at rest straight from the factory (I have used other accelerometers, but not the particular one you are using, so I can't say whether it is pre-calibrated).
When you say "false values", depending on the magnitude this could also be noise. Accelerometers are prone to being noisy, so ideally you would want to low-pass filter the data you collect. The cutoff frequency you select depends on your particular application and your sampling rate.
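To illustrate the low-pass filtering idea, here is a minimal sketch of a single-pole low-pass (exponential moving average) applied to incoming samples. It is a generic example, not code for your specific part: the 5 Hz cutoff, 100 Hz sample rate, and the hard-coded "raw" values are placeholders for your own driver and data.

```cpp
// A minimal sketch of a single-pole low-pass (exponential moving average) for
// smoothing noisy accelerometer readings. Cutoff, sample rate, and the fake
// samples below are placeholders for your own driver and data.
#include <cstdio>

struct LowPass1 {
    float alpha;          // smoothing factor in (0, 1]; smaller = heavier smoothing
    float state = 0.0f;   // previous filtered value
    bool  primed = false;

    float update(float x) {
        if (!primed) { state = x; primed = true; }  // seed with the first sample
        state += alpha * (x - state);
        return state;
    }
};

// Derive alpha from a desired cutoff frequency and sample rate (RC approximation).
float alphaFromCutoff(float cutoffHz, float sampleHz) {
    const float rc = 1.0f / (6.2831853f * cutoffHz);
    const float dt = 1.0f / sampleHz;
    return dt / (rc + dt);
}

int main() {
    LowPass1 lp{ alphaFromCutoff(5.0f /*Hz cutoff*/, 100.0f /*Hz sample rate*/) };
    // Fake samples standing in for raw accelerometer reads on one axis.
    const float raw[] = { 0.02f, -0.05f, 0.01f, 0.98f, 1.02f, 0.99f };
    for (float x : raw)
        std::printf("raw=%+.2f  filtered=%+.2f\n", x, lp.update(x));
}
```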

Achieve filter response similar to Matlab filtfilt in realtime?

I have been facing a problem for the last few months, and a solution to it still seems like a dream to me! :D
What I have done so far:
Initially I high-pass filter a signal to remove its DC offset using MATLAB's filtfilt function. The results were as intended.
But when I implement the same high-pass filter in real time (a second-order Butterworth designed with MATLAB's fdatool), the filter response changes dramatically.
It seems the filtfilt implementation works so well because it is zero-phase.
It's also important to note that phase plays an important role in my filtering process.
Can anyone help me design a real-time filter that matches MATLAB's filtfilt performance?
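For context: filtfilt gets its zero-phase behavior by filtering the whole recording forward and then backward, which is only possible offline; a causal real-time filter will always add some phase lag. As a starting point (not your fdatool design, just a generic example), here is a minimal sketch of a causal second-order high-pass biquad with Butterworth Q, using the widely cited RBJ "Audio EQ Cookbook" coefficient formulas; the 0.5 Hz cutoff and 250 Hz sample rate are placeholders.

```cpp
// Minimal sketch: causal second-order high-pass biquad (Butterworth-style,
// Q = 1/sqrt(2)), coefficients per the common RBJ "Audio EQ Cookbook" formulas.
// Unlike filtfilt, this runs forward only, so it introduces phase lag.
#include <cmath>
#include <cstdio>

struct BiquadHP {
    double b0, b1, b2, a1, a2;              // normalized coefficients (a0 = 1)
    double x1 = 0, x2 = 0, y1 = 0, y2 = 0;  // Direct Form I delay elements

    BiquadHP(double cutoffHz, double sampleHz, double q = 0.70710678) {
        const double w0 = 2.0 * 3.14159265358979323846 * cutoffHz / sampleHz;
        const double c  = std::cos(w0);
        const double alpha = std::sin(w0) / (2.0 * q);
        const double a0 = 1.0 + alpha;
        b0 = (1.0 + c) / 2.0 / a0;
        b1 = -(1.0 + c) / a0;
        b2 = (1.0 + c) / 2.0 / a0;
        a1 = -2.0 * c / a0;
        a2 = (1.0 - alpha) / a0;
    }

    double process(double x) {
        const double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x; y2 = y1; y1 = y;
        return y;
    }
};

int main() {
    BiquadHP hp(0.5 /*Hz cutoff*/, 250.0 /*Hz sample rate*/);
    // Feed samples one at a time as they arrive; DC (the constant 1.0) decays away.
    for (int n = 0; n < 10; ++n)
        std::printf("y[%d] = %+.6f\n", n, hp.process(1.0));
}
```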

Frequency detection on iPhone

One part of an app I'm currently working on will work as a tuner. I want to be able to use the iPhone to display the peak frequency of a signal given by the user. I have used SCListener, which worked very well on the iPhone simulator, but when I tried it on a real device it didn't.
Forum posts suggest that I use Apple's FFT in the Accelerate framework to do this, but it seems overly complicated. I would really appreciate it if anyone who has programmed a tuner or something similar could point me in a good direction!
Thanks!
There is a related post on dsp.stackexchange. It suggests that autocorrelation will work better than an FFT at finding the fundamental if the fundamental is lower in amplitude than the harmonics (although that is usually not the case). Autocorrelation is also slightly less tricky than an FFT, and the Accelerate framework can help you there as well.
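For illustration, here is a minimal sketch of the autocorrelation idea: search for the lag at which the (naive, time-domain) autocorrelation peaks and report sampleRate/lag as the pitch. The pitch range, buffer size, and test tone are placeholders; a real iOS app would vectorize this with Accelerate rather than use the O(N²) loops below.

```cpp
// Minimal sketch: naive autocorrelation pitch estimate. Find the lag (within a
// plausible pitch range) where the autocorrelation peaks; pitch = sampleRate / lag.
#include <cmath>
#include <cstdio>
#include <vector>

double estimatePitchHz(const std::vector<float>& x, double sampleRate,
                       double minHz = 60.0, double maxHz = 1200.0) {
    const int minLag = int(sampleRate / maxHz);
    const int maxLag = int(sampleRate / minHz);
    int bestLag = 0;
    double bestR = 0.0;
    for (int lag = minLag; lag <= maxLag && lag < (int)x.size(); ++lag) {
        double r = 0.0;
        for (size_t n = 0; n + lag < x.size(); ++n)
            r += double(x[n]) * double(x[n + lag]);
        if (r > bestR) { bestR = r; bestLag = lag; }
    }
    return bestLag ? sampleRate / bestLag : 0.0;
}

int main() {
    // Synthesize a 220 Hz tone with a strong 2nd harmonic to exercise the idea.
    const double fs = 44100.0;
    std::vector<float> buf(4096);
    for (size_t n = 0; n < buf.size(); ++n)
        buf[n] = float(std::sin(2 * 3.14159265 * 220.0 * n / fs)
                     + 0.8 * std::sin(2 * 3.14159265 * 440.0 * n / fs));
    std::printf("estimated pitch: %.1f Hz\n", estimatePitchHz(buf, fs));
}
```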
I don't know of any out-of-the-box solutions that will do all the work for you. The vDSP Programming Guide has specific worked examples for real FFTs that you might want to look into; it takes some getting used to, but it's really worth it. An FFT seems like the most logical first step in peak-frequency extraction, I'm afraid. Most sources also suggest that applying a window function to the time-domain signal before running the FFT is critical (otherwise you will get high-frequency artifacts because of discontinuities at the extremities).
Also you might want to check out this related SO post.
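To make the windowing point concrete, here is a small sketch of the "window, transform, pick the peak bin" flow. It uses a naive DFT purely for brevity; in the real app you would substitute the Accelerate/vDSP FFT. The 8 kHz sample rate, 1024-sample buffer, and 440 Hz test tone are placeholders.

```cpp
// Minimal sketch of "window, transform, pick the peak bin". A naive DFT is used
// here only for brevity; replace it with a real FFT (e.g. vDSP) in practice.
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

int main() {
    const double pi = 3.14159265358979323846;
    const double fs = 8000.0;           // placeholder sample rate
    const size_t N  = 1024;

    // Test signal: a 440 Hz tone standing in for microphone samples.
    std::vector<double> x(N);
    for (size_t n = 0; n < N; ++n)
        x[n] = std::sin(2 * pi * 440.0 * n / fs);

    // Apply a Hann window to reduce spectral leakage from the buffer edges.
    for (size_t n = 0; n < N; ++n)
        x[n] *= 0.5 * (1.0 - std::cos(2 * pi * n / (N - 1)));

    // Naive DFT magnitude over the positive-frequency bins; track the peak.
    size_t peakBin = 0;
    double peakMag = 0.0;
    for (size_t k = 1; k < N / 2; ++k) {
        std::complex<double> acc(0.0, 0.0);
        for (size_t n = 0; n < N; ++n)
            acc += x[n] * std::polar(1.0, -2 * pi * double(k) * double(n) / N);
        if (std::abs(acc) > peakMag) { peakMag = std::abs(acc); peakBin = k; }
    }
    std::printf("peak near %.1f Hz (bin %zu)\n", peakBin * fs / N, peakBin);
}
```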
Peak frequency is often different from the pitch frequency that one would want a (music) tuner to estimate. Look up pitch estimation.
From previous experience doing this:
FFT isn't always as accurate as you might think, and it is computationally expensive
Autocorrelation gives pretty good results
If you have a strong fundamental, zero-crossing detection can be very accurate and very computationally efficient: just count the number of times the signal crosses zero over a period of time, then f = (number of zero crossings) / (2 × time period in seconds). A quick sketch follows below.
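A quick sketch of that zero-crossing estimate (the sample rate, buffer size, and 330 Hz test tone are placeholders):

```cpp
// Minimal sketch of the zero-crossing estimate described above:
// f ≈ (number of zero crossings) / (2 × time span in seconds).
#include <cmath>
#include <cstdio>
#include <vector>

double zeroCrossingHz(const std::vector<float>& x, double sampleRate) {
    int crossings = 0;
    for (size_t n = 1; n < x.size(); ++n)
        if ((x[n - 1] < 0.0f) != (x[n] < 0.0f))   // sign change = zero crossing
            ++crossings;
    const double seconds = x.size() / sampleRate;
    return crossings / (2.0 * seconds);
}

int main() {
    // 330 Hz test tone standing in for microphone input.
    const double fs = 44100.0;
    std::vector<float> buf(8192);
    for (size_t n = 0; n < buf.size(); ++n)
        buf[n] = float(std::sin(2 * 3.14159265 * 330.0 * n / fs));
    std::printf("estimated frequency: %.1f Hz\n", zeroCrossingHz(buf, fs));
}
```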
Hope that helps.
Thanks for all the answers! I had missed a part in my code needed to make SCListener work on the device as well, but I am now trying to switch to Apple's own AVAudioRecorder since it is supposed to be a lot faster. The problem was that the cocos2d framework blocked sound recording until you called a method that allows it. It works like a charm now! :)
Thanks again!

Applying a Band-pass filter on the iOS

I am developing an application where I need to analyze the incoming frequency using the built-in microphone on the iPhone/iPad. I know that I need to use an FFT, and I have found a framework that can help me with that. My only concern is whether there is code or a framework that includes band-pass filtering. Suggestions are welcome.
EDIT
Pardon my ignorance. I previously posted that I wanted to use just a band-pass equation, and then found out that a band-pass is a combination of low-pass and high-pass filters. I still welcome suggestions.
You can always do this yourself using a biquad filter.
Here's a great document explaining how they work and what coefficients you need to plug in to create a bandpass filter: http://musicweb.ucsd.edu/~tre/biquad.pdf
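If it helps, here is a minimal sketch of such a band-pass biquad using the widely cited RBJ "Audio EQ Cookbook" constant-peak-gain coefficients, which is one common way to get band-pass coefficients; the 1 kHz center frequency, Q of 2, and 44.1 kHz sample rate are placeholders for your own values.

```cpp
// Minimal sketch of a band-pass biquad (RBJ "Audio EQ Cookbook",
// constant 0 dB peak-gain form). Center frequency, Q, and sample rate
// below are placeholders.
#include <cmath>
#include <cstdio>

struct BiquadBP {
    double b0, b1, b2, a1, a2;              // normalized coefficients (a0 = 1)
    double x1 = 0, x2 = 0, y1 = 0, y2 = 0;  // Direct Form I state

    BiquadBP(double centerHz, double q, double sampleHz) {
        const double w0 = 2.0 * 3.14159265358979323846 * centerHz / sampleHz;
        const double alpha = std::sin(w0) / (2.0 * q);
        const double a0 = 1.0 + alpha;
        b0 =  alpha / a0;
        b1 =  0.0;
        b2 = -alpha / a0;
        a1 = -2.0 * std::cos(w0) / a0;
        a2 = (1.0 - alpha) / a0;
    }

    float process(float x) {
        const double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x; y2 = y1; y1 = y;
        return float(y);
    }
};

int main() {
    BiquadBP bp(1000.0 /*Hz center*/, 2.0 /*Q*/, 44100.0 /*Hz sample rate*/);
    // Feed microphone samples one at a time; here a unit impulse as dummy input.
    for (float x : { 1.0f, 0.0f, 0.0f, 0.0f, 0.0f })
        std::printf("%+.6f\n", bp.process(x));
}
```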
On iOS 4.x, there is the built-in Accelerate vDSP framework for FFT and convolution, but unless you want to build on top of the FFT or convolution routines, there is nothing built in for band-pass filtering. Fast convolution filtering using an FFT for overlap-add/save can be very efficient, depending on your filter kernel requirements and the signal length.

How does the cross-entropy error function work in an ordinary back-propagation algorithm?

I'm working on a feed-forward backpropagation network in C++ but cannot seem to make it work properly. The network I'm basing mine on uses the cross-entropy error function. However, I'm not very familiar with it, and even though I'm trying to look it up, I'm still not sure about it. Sometimes it seems easy, sometimes difficult. The network will solve a multinomial classification problem, and as far as I understand, the cross-entropy error function is suitable for such cases.
Does anyone know how it works?
Ah yes, good ol' backpropagation. The joy of it is that it doesn't really matter (implementation-wise) which error function you use, as long as it is differentiable. Once you know how to calculate the cross-entropy for each output unit (see the wiki article), you simply take the partial derivative of that error with respect to the weights to get the updates for the output layer, then propagate the error back to the hidden layer, and again to the input layer.
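As a concrete illustration (not your network's code): when the output layer is a softmax and the error is cross-entropy, the output-layer error term simplifies to (output − target), which is what gets back-propagated. A minimal sketch with made-up numbers:

```cpp
// Minimal sketch: with softmax outputs and the cross-entropy error, the
// output-layer error term (delta) simplifies to (output - target).
// Sizes and values here are purely illustrative.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

std::vector<double> softmax(const std::vector<double>& z) {
    double maxZ = z[0];
    for (double v : z) maxZ = std::max(maxZ, v);   // subtract max for numerical stability
    std::vector<double> out(z.size());
    double sum = 0.0;
    for (size_t i = 0; i < z.size(); ++i) { out[i] = std::exp(z[i] - maxZ); sum += out[i]; }
    for (double& v : out) v /= sum;
    return out;
}

int main() {
    // Pre-activation values of the output layer for one training example,
    // and a one-hot target for a 3-class problem.
    const std::vector<double> z      = { 1.2, 0.3, -0.8 };
    const std::vector<double> target = { 1.0, 0.0, 0.0 };

    const std::vector<double> y = softmax(z);

    // Cross-entropy error: E = -sum_k target_k * log(y_k)
    double E = 0.0;
    for (size_t k = 0; k < y.size(); ++k) E -= target[k] * std::log(y[k]);

    // Output-layer deltas dE/dz_k = y_k - target_k; use these to update the
    // output weights and to propagate the error back to the hidden layer.
    std::printf("E = %.4f\n", E);
    for (size_t k = 0; k < y.size(); ++k)
        std::printf("delta[%zu] = %+.4f\n", k, y[k] - target[k]);
}
```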
However, if your question isn't about implementation, but rather about training difficulties, then you have your work cut out for you. Different error functions are good at different things (it's best to reason it out based on the error function's definition), and this problem is compounded by other parameters like learning rates.
Hope that helps; let me know if you need any other info, your question was a little vague...