I am using a DataQ acquisition device in 32-bit MATLAB with the Data Acquisition Toolbox.
On occasion, when I have my sample rate set to 300, it tells me:
"Warning: This hardware could not support the requested value of 300
for SampleRate. SampleRate has been set to 1000"
However, if I set SampleRate to 1000, it sometimes sets it back to 300 with the same warning message.
Also, if I have the program return SampleRate after the warning displays and the device has started recording, it always reports whatever I set it to, not what the warning claims it was changed to.
Does anyone have any idea how to find out what the actual sample rate was, or how to keep it from overriding mine? I need to know how many samples there are per second for further calculations.
The problem is not with MATLAB but with the DAQ; I have a similar "problem" with an NI DAQ, whose hardware is set to sample at a very high rate to avoid aliasing. You could sample at a higher rate than required and then use the MATLAB command resample to reduce your sampling rate. resample applies an anti-aliasing filter, so it will avoid any aliasing of higher frequencies.
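As a rough sketch, assuming the hardware really ran at 1000 Hz and you want an effective 300 Hz stream (the variable names below are placeholders, not from the original post):

fsActual  = 1000;                        % rate the DAQ actually used
fsDesired = 300;                         % rate you asked for
dataAt1000Hz = randn(10000, 1);          % stand-in for the acquired data vector
[p, q] = rat(fsDesired / fsActual);      % 300/1000 reduces to p = 3, q = 10
dataAt300Hz = resample(dataAt1000Hz, p, q);   % anti-aliased rate conversion

After this, dataAt300Hz has exactly 300 samples per second of original signal, which you can use for the downstream calculations.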
I am using the flutter_sound package to record audio from the mic. It provides data as a stream of Uint8List, so how can I calculate the amplitude from it? I have found many answers in other languages, but I am having a hard time translating them into Dart.
For reference:
Reading in a WAV file and calculating RMS
Detect silence when recording
how can i translate byte[] buffer to amplitude level
It would be great if anyone could translate these into Dart so that I can calculate the amplitude.
I was not able to calculate the amplitude from the Uint8List, so what I did was write my own native code to get the amplitude directly from the platform side; you can check out my package here. (I am busy elsewhere, so I am not maintaining it right now, but I surely will in the future.)
Also, I don't know whether flutter_sound provides false values or whether I was calculating wrongly, but I was always getting nearly the same values whether the volume was high or low.
Now, if you want to calculate the amplitude, it should be based on the values you're getting.
If you are getting values between -32768 and 32767, then from my understanding what you do is define a timeframe and calculate the RMS of the samples in that timeframe.
The RMS is your amplitude.
Now, as #richard said in the comments, normalize by 32768 if you have an Int16List. But I guess that is doing too much work: we get a stream of Int16List, so data may be coming every 200 ms or whatever it is set to at the plugin level. Calculating the RMS for each Int16List is a very intensive task, so I don't think it's good to do it on the Flutter side; get the data from native directly and just pass it to Flutter using platform channels.
Now, if you do want to try it, then for a defined timeframe calculate the RMS of the Int16List: square the samples, take the mean of those squares, and take the square root.
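A minimal numeric sketch of that calculation (written in MATLAB-style notation purely to show the arithmetic; the same steps port directly to Dart, and the frame below is a synthetic stand-in for one timeframe of Int16 samples):

frame = double(int16(32767 * sin(2*pi*440*(0:799)/8000)));  % stand-in frame of Int16 samples
rmsValue  = sqrt(mean(frame.^2));    % root-mean-square of the frame
amplitude = rmsValue / 32768;        % normalized amplitude in the range 0..1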
I may well be wrong, but I hope this helps or guides you in the right direction. If anyone finds a better way, please post it here.
I have created an OFDM signal (WLAN) in MATLAB and saved the complex-valued signal in a file. I then transfer the file to a USRP-B210 (software-defined radio) and transmit it over the air. Note that in MATLAB I do not need to up-convert the signal to an IF (intermediate frequency), as that is done by the radio device itself.
The signal is received by the receiver (which down-converts it) and saved to the local disk. I then process the received file in MATLAB. Everything works perfectly, and the data is decoded correctly.
Now I want to apply a Rayleigh channel (one with a delay-Doppler effect over a multipath channel). MATLAB has a built-in function for that fading channel. In that case my signal transfer works fine with minor degradation in performance (BER), but it degrades further as I increase the delay-Doppler effect.
My question is: while I use the channel model, I am not doing any up-conversion of the baseband signal before passing it to the channel model function. Was that necessary? Applying the channel model at baseband sounds incorrect to me. In that case, what other options do I have?
I am not super familiar with this toolbox from MATLAB, but a quick look at the help page for comm.RayleighChannel doesn't seem to show any inputs that are carrier-frequency dependent. This would imply that the channel model is carrier independent, so it should be applicable to a complex baseband signal just as readily as to an up-converted passband signal.
rayChan =
comm.RayleighChannel with properties:
SampleRate: 100000
PathDelays: 0
AveragePathGains: 0
NormalizePathGains: true
MaximumDopplerShift: 130
DopplerSpectrum: [1x1 struct]
ChannelFiltering: true
PathGainsOutputPort: false
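As a minimal sketch of how such an object might be applied directly to the complex baseband samples (the waveform here is a random stand-in, not the OFDM file from the question):

rayChan = comm.RayleighChannel( ...
    'SampleRate',          100e3, ...
    'PathDelays',          0, ...
    'AveragePathGains',    0, ...
    'NormalizePathGains',  true, ...
    'MaximumDopplerShift', 130);
txWaveform = (randn(1000,1) + 1j*randn(1000,1)) / sqrt(2);   % stand-in baseband signal
rxWaveform = step(rayChan, txWaveform);                      % faded complex baseband output

In newer releases the object can also be called directly, i.e. rxWaveform = rayChan(txWaveform).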
I am trying to use MATLAB for data acquisition with a licor820 instrument. The instrument outputs data at 2 Hz.
I have tried many different methods, using infinite loops with asynchronous sampling (readasync) and timed readings, but I am unable to get 2 Hz data; I am getting reads in the 0.51 s range. Here are three examples of my methods. Any advice on what I may be doing wrong, or how to sample properly at the highest frequency, would be greatly appreciated!
Example 1: using readasync
tinit = tic;                 % initialization timer
s = serial('COM4')           % ,'InputBufferSize',40);
fopen(s)
while toc(tinit) < 2         % allow time to initialize
end
while 1 < 2                  % infinite loop for continuous sampling
    readasync(s)
    data = fscanf(s)
    toc                      % allows me to see the time between data acquisitions
    tic
end
Example 2: using BytesAvailable.
My thinking here is to acquire data once the minimum number of bytes necessary is available, although I am unsure exactly how to determine how many bytes my instrument needs, other than by visually inspecting the data and narrowing it down to around 40 bytes:
while 1 < 2                  % infinite loop for continuous sampling
    if s.BytesAvailable > 35
        scandata = fscanf(s);
        toc
        tic
    end
end
Example 3: time forcing.
Since I need 2 Hz data, my thinking here was to just force a read of the buffer every 0.49 seconds. The weird thing I see here is that it initially provides samples every 0.49 seconds, but while I monitor the bytes available at the port I see the count steadily dropping from 512 until it reaches 0, and then I stop getting 0.49-second samples. I guess I don't really understand how to use the serial interface efficiently.
t2 = tic;                            % start the loop timer (needed before the first toc(t2))
while 1 < 2                          % infinite loop
    if toc(t2) >= .49                % only sample after .49 seconds have passed
        t2 = tic;                    % restart the tic for this forced time loop
        bytes = s.BytesAvailable     % to monitor how many bytes there are at the port
        scandata = fscanf(s);
        if ~isempty(scandata) && length(scandata) == 3   % checks for a successful read
            toc
            tic
        end
    end
end
I feel there must be some way to sample completely in sync with the instrument, but I can't figure it out. Any help, suggestions, or ideas would be greatly appreciated! Thanks!
Don't rely on tic and toc. These functions use the time supplied by OS calls. MathWorks claims to use high-resolution timers, but do not rely on this: if you are not using a real-time OS, these measurements are subject to unknown variation.
Sampling should be performed by real-time-capable hardware. In your case I suspect that the sampling rate is actually controlled by your instrument, and the instrument's output is buffered by your serial interface. Therefore it seems to me that MATLAB does not influence the sampling rate at all (as long as the buffer does not overflow).
Try acquiring about 2000 samples or more and see how long it takes. Then divide the total time by the number of samples minus one and compare this to the expected 0.5 s. If there is a difference, try adjusting the configuration of your instrument.
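A minimal sketch of that measurement, assuming the instrument emits one text record per line on COM4 (the port name and sample count come from the discussion above; everything else is a placeholder):

s = serial('COM4');
fopen(s);
nSamples = 2000;
t0 = tic;
for k = 1:nSamples
    record = fscanf(s);              % blocking read of one record
end
elapsed = toc(t0);
fclose(s);
fprintf('Effective period: %.4f s per sample\n', elapsed / (nSamples - 1));

If this comes out close to 0.5 s, the instrument really is delivering 2 Hz and the variation you see is just when MATLAB happens to empty the buffer.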
I am working on a project in MATLAB to take a predetermined audio file and change the sample rate dynamically, driven by data generated in real time. I have hit a very stubborn roadblock with the dsp.AudioPlayer object: it doesn't allow a change in either the sample rate or the sample size once its state is locked. My thought right now is to vary the sample size that I pull from the WAV file and scale it using an FIR rate-conversion filter. Is this an option worth pursuing? Are there any other ways around this problem?
In the latest MATLAB release, SampleRate is tunable in dsp.AudioPlayer. Tunable means you can change the property value after the object is locked.
Your workaround is good when this is not possible.
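A minimal sketch of what that looks like, assuming a release in which dsp.AudioPlayer reports SampleRate as tunable (the file name and the new rate are placeholders):

reader = dsp.AudioFileReader('myAudio.wav');                % placeholder WAV file
player = dsp.AudioPlayer('SampleRate', reader.SampleRate);
while ~isDone(reader)
    frame = step(reader);
    step(player, frame);
    % If SampleRate is tunable in your release, it can be changed here even
    % though the object is locked, e.g. driven by your real-time data:
    % player.SampleRate = newRateFromRealTimeData;
end
release(reader);
release(player);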
Analyzing deviceMotion.timestamp, I saw that the update frequency set for device motion is not the actual update frequency.
I implemented an app in order to test this; below is what I saw.
update frequency    actual frequency    average time between two calls
1/10.000000         10.232265           0.097730
1/20.000000         19.533729           0.051194
1/30.000000         30.696613           0.032577
1/40.000000         42.975122           0.023269
1/50.000000         53.711000           0.018618
1/60.000000         53.719106           0.018615
1/70.000000         71.627016           0.013961
1/80.000000         71.627263           0.013961
1/90.000000         53.719365           0.018615
1/100.000000        107.442667          0.009307
1/110.000000        107.437022          0.009308
Has anyone noticed the same thing?
Is it a bug?
Some people are reporting the same phenomenon, for example in Actual frequency of device motion updates lower than expected, but scales up with setting, but there is still no answer. Surprisingly, you are the first one to report actual frequencies that are higher than requested. I did several tests on this, and it makes no real difference which way you go:
Push or pull i.e. handler callback or own timer loop
iOS 4.2x, iOS 4.3x ([Update:]tested with pull only)
Raw sensor data or Device Motion
Gyroscope or accelerometer
Running it in a separate thread
I assume it is a little bug in the Core Motion framework.