I have been controlling an Arduino from Matlab using the ArduinoIO-Matlab interface. My current setup is 3 EMG muscle sensors (from Advancer Technologies) connected to the Arduino at analog pins 1, 2, and 3, with the Arduino connected to Matlab. I am trying to collect data from these three pins simultaneously and store it in a 1000x3 matrix. My issue is the rate at which Matlab samples from the analog pins: it takes about 25 seconds to collect 1000 readings from the 3 pins. I know the Arduino itself samples at a much higher rate. Below is my code. How do I alter this to get a sampling rate of about 1000 samples in 10 seconds?
ar = arduino('COM3');   % connect to the Arduino through the ArduinoIO interface
ax = zeros(1000,3);     % preallocate the 1000x3 result matrix
for ai = 1:1000
    % one analogRead call per pin, per iteration
    ax(ai,:) = [ar.analogRead(1) ar.analogRead(2) ar.analogRead(3)];
end
delete(ar);
This is the time taken by the above code (profile viewer):
  time    calls   line
< 0.01        1      3  ax = zeros(1000,3);
                     4
< 0.01        1      5  for ai = 1:1000
 25.07     1000      6  ax(ai,:) = [ar.analogRead(1) ar.analogRead(2) ar.analogRead(3)];
           1000      7  end
                     8
  1.24        1      9  delete(ar);
Please let me know if there is something else that I need to clarify.
Thanks :D
You need to modify the Arduino C++ code (the .pde sketch).
In that code you should sample the signal as you prefer (1000 samples, for example) and then transfer the sampled data to Matlab over the serial port (e.g. using Serial.println()).
This will give you a sampling rate of ~3 kHz (depending on a lot of factors)...
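To illustrate the idea, here is a minimal sketch of what the modified Arduino code might look like. The pins (A1-A3), the 115200 baud rate and the comma-separated ASCII format are assumptions; adjust them to your hardware and your Matlab-side reader:

    // Hypothetical sketch: sample three analog pins on the Arduino itself and
    // stream the readings to the PC over serial, one "v1,v2,v3" line per sample.
    const int NUM_SAMPLES = 1000;

    void setup() {
      Serial.begin(115200);            // much faster than the 9600 baud default
    }

    void loop() {
      for (int i = 0; i < NUM_SAMPLES; i++) {
        int s1 = analogRead(A1);       // EMG sensor 1
        int s2 = analogRead(A2);       // EMG sensor 2
        int s3 = analogRead(A3);       // EMG sensor 3
        Serial.print(s1); Serial.print(',');
        Serial.print(s2); Serial.print(',');
        Serial.println(s3);
      }
      while (true) { }                 // stop after one batch of 1000 samples
    }

On the Matlab side you would then read these lines back through a serial object (for example with fscanf) instead of calling ar.analogRead() once per value.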
The following very probably explains the result that you are seeing, and why you need to do something like what Muhammad's answer suggests. While this reason was implied by his answer, it was not spelt out, so here it is so that others can avoid the 'trap'.
I do not have access to the underlying code and systems needed to check this answer with certainty. This answer is based on "typical methods" and has a modest chance of being sheer poppycock [tm], but the exact fit between observation and standard methods suggests this is what is happening. A very little delving by someone with the requisite system to hand will demonstrate if this is correct.
When data is sent one data sample at a time you incur a per-sample overhead significantly in excess of the time taken to just transfer the raw data.
You say it takes 25 seconds to transfer 3000 samples.
The time per sample = 25/3000 = 8.333 ms per sample.
Assume a 9600 baud data transfer rate.
The default communications speed is liable to be 9600 baud. This can be checked, but the result suggests that this may be correct, and making slightly different assumptions provides an equally good explanation.
Serial comms usually use N81 format: 1 start bit, 8 data bits, 1 stop bit per 8-bit byte (no parity).
So 1 bit takes 1/9600 s
and 10 bits take 10/9600 s = 1.042 ms
And sample time / byte time
= 8.333 / 1.042 = 7.997 word times.
In fact if you do the calculations without rounding or truncation, ie
25 / 3000 x 9600/10 = 8.000.... .
ie your transfer is taking EXACTLY 8 x 9600 baud word times per sample.
Equally, this is exactly 4 x 4800 baud or 2 x 2400 baud transfer times.
I have not examined the format used but imagine that to work with the PC monitor program the basic serial routine may use
2 x data bytes + CR + LF = 4 bytes.
That assumes a 16 bit variable sent as 2 x 8 bit binary words.
More likely = either
- 16 bits sent as 4 x ASCII characters or
- 24 bits sent as 6 x ASCII characters.
In the absence of suitably deep delving, the use of 6 ASCII characters plus a CR + LF at 9600 baud provides such a good fit using typical parameters that Occam probably opines that this is the best starting point. Regardless of whether the total requirement is 8 or 4 or 2 bytes, the somewhat serendipitous exact match between your observed data rate and standard baud rates suggests that this provides the basic reason for what you see.
Looking at the code will rapidly show what baud rate, data length and packing is used.
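To make that arithmetic concrete, here is a small throwaway program that reproduces it. The 8-bytes-per-sample framing and the 9600 baud figure are the assumptions argued for above, not facts verified against the library:

    #include <cstdio>

    int main() {
        // Assumed framing: 8 bytes per sample (e.g. 6 ASCII characters + CR + LF),
        // 10 bits on the wire per byte (1 start + 8 data + 1 stop), 9600 baud.
        const double baud           = 9600.0;
        const double bitsPerByte    = 10.0;
        const double bytesPerSample = 8.0;

        double secondsPerSample = bytesPerSample * bitsPerByte / baud;
        double samplesPerSecond = 1.0 / secondsPerSample;

        std::printf("per sample: %.3f ms -> %.0f samples/s\n",
                    secondsPerSample * 1000.0, samplesPerSecond);
        // Prints roughly 8.333 ms -> 120 samples/s, i.e. about 25 s for 3000
        // readings, matching the profiler output in the question.
        return 0;
    }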
Background Information
I am trying to make sure I will be able to run two ADXL345 Accelerometers on the same I2C Bus.
To my understanding, the bus can transmit up to 400k bits/s on fast mode.
In order to send 1 byte of data, there are 20 extra bits of overhead.
There are 6 bytes per accelerometer reading (XLow, XHigh, YLow, YHigh, ZLow, ZHigh)
I need to do 1000 readings per second with both accelerometers
Thus, my total data used per second is 2 accelerometers x 1000 readings x 6 bytes x (8 + 20) bits = 336k bits/s, which is within my limit of 400k bits/s.
I am not sure if I am doing these calculations correctly.
Question:
How much data am I transmitting per second with two accelerometers reading 1000 times per second on i2c?
Your math seems to be a bit off; for this accelerometer (from the datasheet: https://www.sparkfun.com/datasheets/Sensors/Accelerometer/ADXL345.pdf), in order to read the 6 bytes of XYZ sample data, you need to perform a 6-byte burst read of the registers. What this means in terms of data transfer is a write of the register address (0x32, the first data register) to the accelerometer, then a burst read of 6 bytes continuously. Each of these two transfers requires first sending the I2C device address and the R/W bit, as well as an ACK/NAK per byte (including the address bytes), plus the START / REPEAT START / STOP conditions. So, overall, an individual transfer to get a single sample (i.e., a single XYZ acceleration vector) is as follows:
Start (*) | Device Address: 0x1D (7) | Write: 0 (1) | ACK (1) | Register Address: 0x32 (8) | ACK (1) | Repeat Start (*) | Device Address: 0x1D (7) | Read: 1 (1) | ACK (1) | DATA0 (8) | ACK (1) | DATA1 (8) | ACK (1) | ... | DATA5 (8) | NAK (1) | Stop (*)
If we add all that up, we get 81 + 3 bits that need to be transmitted. Note first that the START, REPEAT START and STOP might not actually take a bit's worth of time each, but for simplicity we can assume they do. Note also that while the device address is only 7 bits, you always need to append the READ/WRITE bit, so an address transfer is always 8 bits + ACK/NAK, 9 bits in total. Note also that the I2C max transfer rate really defines the max SCK speed the device can handle, so in fast mode the SCK is at most 400 kHz (thus 400 kbps at most, but because of the protocol overhead you'll get less in real data). Thus, 84 bits at 400 kHz means that we can transfer a sample in 0.21 ms, or ~4700 samples/sec assuming no gaps or breaks in transmission.
Since you need to read 2 samples every 1 ms (2 accelerometers, so 84 bits * 2 = 168 bits per sample period, or 168 kbps at a 1 kHz sampling rate), this should at least be possible for fast-mode I2C. However, you will need to be careful that you are making full use of the I2C controller. Depending on the software layer you are working on, it might be difficult to issue I2C burst reads fast enough (i.e., 2 burst-read transactions within 1 ms). Using the FIFO on the accelerometer would significantly relax the latency requirement: instead of having 1 ms to issue two burst reads, you can delay up to 32 ms to issue 64 burst reads (since you have 2 accelerometers). But since you need to issue a new burst read to read the next sample, you'll have to be careful about the delay introduced by software between calls to whatever API you're using to perform the I2C transactions.
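To illustrate the burst read described above, here is a rough Arduino-style sketch using the Wire library. It assumes the 0x1D device address used above (ALT ADDRESS pin high), a 400 kHz bus clock, and that the accelerometer has already been put into measurement mode; treat it as a sketch, not a drop-in driver:

    #include <Wire.h>

    const uint8_t ADXL345_ADDR = 0x1D;   // 7-bit address with ALT ADDRESS high
    const uint8_t REG_DATAX0   = 0x32;   // first of the six data registers

    // One 6-byte burst read: write the register address, repeated start,
    // then read DATAX0..DATAZ1 in a single transaction.
    bool readSample(int16_t &x, int16_t &y, int16_t &z) {
      Wire.beginTransmission(ADXL345_ADDR);
      Wire.write(REG_DATAX0);
      if (Wire.endTransmission(false) != 0)                 // false => repeated start, no STOP
        return false;
      if (Wire.requestFrom(ADXL345_ADDR, (uint8_t)6) != 6)  // burst read 6 bytes
        return false;

      uint8_t xl = Wire.read(), xh = Wire.read();           // low byte first, then high byte
      uint8_t yl = Wire.read(), yh = Wire.read();
      uint8_t zl = Wire.read(), zh = Wire.read();
      x = (int16_t)((xh << 8) | xl);
      y = (int16_t)((yh << 8) | yl);
      z = (int16_t)((zh << 8) | zl);
      return true;
    }

    void setup() {
      Wire.begin();
      Wire.setClock(400000);     // fast-mode I2C, 400 kHz SCL
      Serial.begin(115200);
      // Assumes the ADXL345 has already been configured for measurement mode
      // (POWER_CTL register, 0x2D) elsewhere.
    }

    void loop() {
      int16_t x, y, z;
      if (readSample(x, y, z)) {
        Serial.print(x); Serial.print(',');
        Serial.print(y); Serial.print(',');
        Serial.println(z);
      }
    }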
I'm trying to do target recognition using the target's acoustic signal. I tested my code in Matlab; however, I'm now trying to implement it in C to test it in TinyOS using a sensor simulator.
In Matlab I used WAV recordings (16 bits per sample, 44.1 kHz sample rate). For example, I have a recording of a certain object, let's say a cat sound of about 0:01 duration; in Matlab that gives me a total of 36864 samples of type int16, with a size of 73728 bytes.
On the sensor side I have a Mica2 mote: a 10-bit ADC (but I'll use it as an 8-bit ADC), an 8 MHz microprocessor, and 4 KB of RAM. This means that when I detect an object, I'll fill the buffer with 4000 samples of type uint8_t (if I use an 8 kHz sample rate and an 8-bit ADC).
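For reference, a quick check of the recording durations implied by those numbers (nothing here beyond the figures already stated above):

    #include <cstdio>

    int main() {
        double matlabSeconds = 36864.0 / 44100.0;  // 16-bit samples at 44.1 kHz
        double sensorSeconds = 4000.0  / 8000.0;   // 8-bit samples at 8 kHz
        std::printf("Matlab record: %.2f s, sensor buffer: %.2f s\n",
                    matlabSeconds, sensorSeconds);
        // ~0.84 s vs 0.50 s: at the stated rates the 4000-sample buffer holds a
        // shorter window than the original recording.
        return 0;
    }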
So, my question is:
In Matlab I used a large number of samples to represent the target audio signal (36864 samples), but on the sensor I'm limited to only 4000 samples. Would that be enough to record the whole target sound?
Thank you very much, highly appreciate your advice
I'm preparing for a competitive exam and I have an operating systems question.
I'm not sure how to solve it; please help me out.
Q-)
A program took 160 seconds to execute on a single processor but only 64 seconds on a 4-core multicore machine. What is the best estimate for the execution time on a 64-core machine?
I don't think this is strictly relevant to programming (you might find this more relevant on the Math StackExchange), but I'll attempt to answer it anyway.
The answer will depend entirely on how you model execution time vs number of cores. You could model the execution time as a component that is inversely proportional to the number of cores plus a constant overhead. For example, I used the following model:

t = k/n + c

where t is time in seconds, n is the number of cores, and c (which could represent overhead) and k (a scaling factor) are constants.

Solve simultaneously

160 = k/1 + c
64 = k/4 + c

to get k = 128 and c = 32.

Then just substitute n = 64:

t = 128/64 + 32 = 34

So, you get 34 seconds according to this model. Of course, since you don't know the exact model, this can only be a calculated guess.
I was getting a little confused with the representation of different units of bytes.
It is accepted throughout that 1 byte = 8 bits.
However, in a lot of sources I have seen that
1 kiloByte = 2^10 bytes = 1024 bytes
AND
1 kiloByte = 1000 bytes
Isn't this a contradiction, given that in both cases it is stated that 1 byte is 8 bits...?
Different sources give different reasons for these different representations, so I am not sure what the most important/real reason is for this rather confusing difference.
Can someone please explain and clarify?
It is accepted throughout that 1 byte = 8 bits
However, in a lot of sources I have seen that
1 kiloByte = 2^10 bytes = 1024 bytes
AND
1 kiloByte = 1000 bytes
To make sure we're all clear, your question is "Is a kilobyte equal to 1024 bytes or 1000 bytes?".
Isn't this a contradiction, given that in both cases it is stated that 1 byte is 8 bits...?
This is irrelevant to the question.
So, let's begin. In SI (metric), the multiplier of 1000 is called kilo, abbreviated k. k always means 1000, never anything else.
When binary computers entered the world, we noticed that 2 to the power of 10 is 1024, which is conveniently close to 1000. Computer engineers decided to abuse this coincidence and say that kilo means 1024. By extension, they say that mega means 1024^2 (instead of the proper definition of 1000^2), and so on with giga, tera, etc.
While the difference between 1000 and 1024 is small for many purposes, there are times when exact answers are required, and this is where the abusive terminology hurts everyone. Only decades after kilo=1024 got established did anyone really try to fix the problem. The IEC proposed new prefixes for the binary multipliers: 1024 = kibi, 1024^2 = mebi, 1024^3 = gibi, etc.
In summary, the notion that kilo=1024 is an abusive deviation from the consistent SI definition of kilo=1000. While kilo=1024 is popular in the computer industry, it is nevertheless wrong and should be replaced by kibi=1024. Or numbers need to be recomputed to reflect the true definition of kilo/mega/etc. (For example, "512 MB" of RAM is actually about 536.9 MB.)
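A tiny worked example of that last point, just restating the 512 MB figure in code:

    #include <cstdio>

    int main() {
        const double mebibyte = 1024.0 * 1024.0;   // 2^20 bytes (binary "megabyte")
        const double megabyte = 1000.0 * 1000.0;   // 10^6 bytes (SI megabyte)

        double bytes = 512.0 * mebibyte;           // "512 MB" of RAM, really 512 MiB
        std::printf("512 MiB = %.0f bytes = %.1f MB\n", bytes, bytes / megabyte);
        // Prints: 512 MiB = 536870912 bytes = 536.9 MB
        return 0;
    }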
Btw, don't use random capitalization; it's spelled kilobyte, not kiloByte.
References and links:
http://physics.nist.gov/cuu/Units/binary.html
http://en.wikipedia.org/wiki/Kilo-
http://en.wikipedia.org/wiki/Kilobyte
http://en.wikipedia.org/wiki/Kibibyte
http://xkcd.com/394/
When you talk about amounts of digital information in computer science, you always have to calculate with powers of two. See what Wikipedia says:
"In computing, a binary prefix is a
specifier or mnemonic that is
prepended to the units of digital
information, the bit and the byte, to
indicate multiplication by a power of
2. In practice the powers used are multiples of 10, so the prefixes
denote powers of 1024 = 2^10."
Sometimes people round it as you have mentioned, but that is a bad use of the prefixes.
I don't see what bytes-to-bits has to do with anything if you are asking whether 1 kilobyte is equal to 1024 or 1000 bytes. These measurements are not set in stone and are not really controlled at all. Computer makers can (and do) use the 1000 conversion to make it look like they have more memory.
The problem comes up when thinking about binary (base 2) or base 10. Base 10 you would use 1000, base 2, 1024.
Are the 127 note values in MIDI musically significant (a certain number of octaves or something), or was the limit set at 127 due to the binary file format, i.e. for the purposes of computing?
In the MIDI protocol there are status bytes (think commands, such as note-on or note-off) and there are data bytes (think parameters, such as pitch value and velocity). The way to determine the difference between them is by the first bit. If that first bit is 1, then it is a status byte. If the first bit is 0, then it is a data byte. This leaves only 7 bits available for the rest of the status or data byte value.
So, to answer your question in short: this has more to do with the protocol specification, but it just so happens to line up nicely with a good number of available pitch values.
Now, these pitch values do not correspond to specific pitches. Yes it is true that typically a pitch value of 60 will give you C4, or middle C. Most synths work this way, but certainly not all. It isn't even a requirement that the synth uses the pitch value for pitches! MIDI doesn't care... it is just a protocol. You may be wondering how alternate tunings work... they work just fine. It is up to the synthesizer to produce the correct pitches for these alternate tunings. MIDI simply provides for a selection of 128 different values to be sent.
Also, if you are wondering why it is so important for that first bit to signify what the data is... There are system realtime messages that can be interjected in the middle of some other command. These are things like the timing clock which is often used to sync up LFOs among other things.
You can read more about the types of MIDI messages here: http://www.midi.org/techspecs/midimessages.php
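For illustration, here is a small sketch of that status-bit rule; the note-on example bytes (0x90, 0x3C, 0x40) are arbitrary, not taken from the answer above:

    #include <cstdint>
    #include <cstdio>

    int main() {
        // The top bit of each MIDI byte says whether it is a status byte (1)
        // or a data byte (0); data bytes therefore only span 0-127.
        uint8_t bytes[] = {0x90, 0x3C, 0x40};   // note-on, note 60, velocity 64

        for (uint8_t b : bytes) {
            if (b & 0x80) {
                unsigned command = b >> 4;       // upper nibble: message type (0x9 = note-on)
                unsigned channel = b & 0x0F;     // lower nibble: channel 0-15
                std::printf("status byte: command 0x%X, channel %u\n", command, channel);
            } else {
                std::printf("data byte: %u\n", (unsigned)b);
            }
        }
        return 0;
    }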
127 = 2^7 - 1
It's the maximum positive value of an 8-bit signed integer, and so is a meaningful limit in file formats--it's the highest value you can store in a byte (on most systems) without making it unsigned.
I think what you are missing is that MIDI was created in the early 1980's, not to run on personal computers, but to run on musical instruments with extremely limited processing and storage capabilities. Storing 127 values seemed GIANT back then, especially when the largest keyboard typically has only 88 keys, and most electronic instruments only had 48. If you think MIDI is doing something in a strange way, it is likely that stems from its jurassic heritage.
Yes it is true that typically a pitch value of 60 will give you C4,
or middle C. Most synths work this way, but certainly not all.
Yes ... there has always been a disagreement about where middle C is in MIDI. On Yamaha keyboards it is C3, on Roland keyboards it is C4. Yamaha did it one way and Roland did it another.
Now, these pitch values do not correspond to specific pitches.
Not originally. However, in the "General MIDI" standard, A = 440, which is standard tuning. General MIDI also describes which patch is a piano, which is a guitar, and so on, so that MIDI files become portable across multitimbral sound sources.
Simple efficiency.
As a serial protocol, MIDI was designed around the simple serial chips of the time, which would take 8 data bits in and transmit them as a stream out of one serial data pin at a prescribed rate. In the MIDI world this was 31,250 bits per second. Start and stop bits were added so all the data could travel over one wire.
It was designed to be cheap and simple and the simplicity was extended into the data format.
The most significant bit of the 8 data bits was used to signal whether the byte was a command or data. So:
To send a middle C note-on on channel 1 at a velocity of 56, a command (status) byte is sent first,
and the command for note-on is the upper 4 bits of that command byte, 1001. Notice the 1 in the most significant bit. This is followed by the channel ID for channel 1, 0000 (computers preferring to start counting from 0):
10010000 or 128 + 16 = 144
This was followed by the actual Note data
72 for Middle C or 01001000
and then the velocity data, again specified in the range 0-127 with a 0 MSB:
56 in our case
00111000
So what would go down the wire (ignoring start, stop and sync bits) was
144, 72, 56
For the almost brain dead microcomputers of the time in electronic keyboards the ability to separate command from data by simply looking at the first bit was a godsend.
As has been stated, 127 values covers pretty much any western keyboard you care to mention. So it made perfectly logical sense, and the protocol's survival long after many other serial protocols have disappeared into obscurity is a great compliment to Dave Smith of Sequential Circuits (http://en.wikipedia.org/wiki/Dave_Smith_(engineer)), who started the discussions with other manufacturers to set all this in place.
Modern music and composition would be considerably different without him and them.
Enjoy!
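As an illustration of the bytes worked out above, here is a minimal Arduino-style sketch that sends exactly 144, 72, 56 over a serial port at MIDI's 31,250 baud; the MIDI OUT wiring and the added note-off message are assumptions:

    void setup() {
      Serial.begin(31250);      // the MIDI baud rate
    }

    void loop() {
      Serial.write(144);        // status byte: note-on, channel 1 (10010000)
      Serial.write(72);         // data byte: note number          (01001000)
      Serial.write(56);         // data byte: velocity             (00111000)
      delay(500);

      Serial.write(128);        // status byte: note-off, channel 1
      Serial.write(72);         // same note number
      Serial.write(0);          // release velocity 0
      delay(500);
    }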
127 is enough to cover all piano keys
0 ~ 127 fits nicely for ADC conversions.
Many MIDI hardware devices rely on performing analog-to-digital conversions (ADC). Considering MIDI is a real-time communication protocol, when performing an ADC conversion using successive approximation (a commonly used algorithm), a good rule of thumb is to use 10-bit resolution for fast computation. This will yield values in the 0 ~ 1023 range, which can be converted to the MIDI range by dividing by 8.
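As a minimal illustration of that scaling, here is an Arduino-style fragment that shifts a 10-bit ADC reading right by 3 bits (divides by 8) to land in the 0-127 MIDI range; the analog pin and the control-change message it is wrapped in are arbitrary choices:

    void setup() {
      Serial.begin(31250);              // MIDI baud rate
    }

    void loop() {
      int raw = analogRead(A0);         // 0-1023 from the 10-bit ADC
      uint8_t midiValue = raw >> 3;     // divide by 8 -> 0-127

      Serial.write(0xB0);               // status byte: control change, channel 1
      Serial.write(1);                  // controller number 1 (mod wheel)
      Serial.write(midiValue);          // scaled data byte
      delay(10);
    }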