I'm reading data from a serial port in MATLAB. The serial port is connected to an XBee module.
I can successfully read the data and can also send data correctly. Here's the code; it's quite simple:
s = serial('COM4', 'BaudRate', 9600, 'Terminator', 'CR', 'StopBits', 1, 'Parity', 'none');
fopen(s);
while(1)
    while(s.BytesAvailable == 0)   % wait until something arrives in the input buffer
    end
    fprintf(s, '1');               % reply with a single character
    fscanf(s)                      % read and display the received data
    s.BytesAvailable
end
So, as you can see, in the first stage of the main loop I wait until data is available in the input buffer. As soon as data is detected, a character is sent back. However, the execution is not as fast as I expected: with an oscilloscope probing the XBee DIN and DOUT pins, I measure 34 ms between the moment the data is sent to the PC and the moment the reply comes back from the PC.
For my application, a 34 ms delay is critical.
How can I fix this?
Why do you use the UART (universal asynchronous receiver/transmitter) at a 9600 baud rate?
Your UART setup, "'BaudRate', 9600, 'StopBits', 1", means one byte (8 bits) of data is transferred as 10 bits on the wire (1 start bit, 8 data bits and 1 stop bit) at 9600 bits per second, so 960 bytes per second is the maximum data rate.
That is about one byte per millisecond.
The XBee uses 5 bytes for its header, so the header overhead alone is about 5 ms.
34 ms is not so bad if you are sending 25-28 bytes of data. If you are sending only a few bytes, you may have another problem.
To improve on this, you should use a higher UART baud rate.
If you use the following setup, transmission of your data may complete within 3-4 ms:
s = serial('COM4', 'BaudRate', 115200, 'Terminator', 'CR', 'StopBits', 1, 'Parity', 'none');
I would just change the Baud rate from 9600 to 115200 in your code.
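As a quick back-of-the-envelope check of the arithmetic above (a Python sketch, not part of the MATLAB code):

def bytes_per_second(baud, bits_per_frame=10):   # 8N1: 1 start + 8 data + 1 stop bit
    return baud / float(bits_per_frame)

for baud in (9600, 115200):
    bps = bytes_per_second(baud)
    print("%6d baud -> %7.0f bytes/s, %6.3f ms per byte" % (baud, bps, 1000.0 / bps))
# 9600 baud   ->     960 bytes/s,  1.042 ms per byte
# 115200 baud ->   11520 bytes/s,  0.087 ms per byte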
I have a question about the UDP protocol.
I want to stream data from a particle sensor (a Xilinx FPGA dev board) to a Raspberry Pi (or a Windows 10 laptop) over a UDP socket as binary data. I don't care if some packets are lost, because plenty of particles come after them anyway... but if a packet does get lost, it should only take complete particle records with it.
The connection is a short LAN cable on a 1 Gbit/s Ethernet port.
The "minimum" amount of data is 192 bits (24 bytes) per particle (16 x 12-bit values), and the maximum rate is 3300 particles per second.
So I have to transfer at most 24 x 3300 = 79,200 bytes/s plus headers etc.
The maximum UDP payload size is 65,507 bytes.
As I understand it, the packet size has to be divisible by 24 bytes in my application.
That leaves me with a packet size range of 24 to 65,496 bytes.
But if the particle concentration is lower, I don't want to wait minutes until a packet is filled and ready to be sent.
What would you suggest regarding repetition rate and packet size?
E.g. a 1008-byte packet would have to be sent about every 13 ms at maximum particle concentration.
Best regards
I just tested a UDP socket with a 1024-byte buffer, which works nicely with strings or bit vectors for single test packets.
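Here is a minimal sketch (in Python, since the receiver is a Raspberry Pi or laptop) of the packing scheme described above: buffer 24-byte particle records and send a datagram either when 1008 bytes are queued or after 13 ms, so whole records are never split across packets and low concentrations never wait long. The destination address, port and record source are placeholders, not values from the question.

import socket
import time

DEST = ("192.168.1.50", 5005)   # placeholder collector address/port
RECORD_SIZE = 24                # 16 x 12-bit values packed into 24 bytes
PACKET_SIZE = 1008              # a multiple of 24, well under the 65,507-byte UDP limit
FLUSH_INTERVAL = 0.013          # 13 ms

def next_record():
    # placeholder for fetching one 24-byte particle record from the sensor
    return b"\x00" * RECORD_SIZE

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
buf = bytearray()
last_flush = time.time()

while True:
    buf += next_record()
    now = time.time()
    if len(buf) >= PACKET_SIZE or (buf and now - last_flush >= FLUSH_INTERVAL):
        sock.sendto(bytes(buf), DEST)   # a lost datagram only ever loses whole records
        buf = bytearray()
        last_flush = now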
My APB1 clock is reported by the STM32 library as 36 MHz.
I used a website to calculate a prescaler value of 3 (4 with the automatic +1), BS1 of CAN_BS1_15tq and BS2 of CAN_BS2_2tq. When I put these values into a quick spreadsheet calculation they come out right for a 500 kbit/s baud rate.
I have used different values, assuming the same 36 MHz clock, to talk at 250 kbit/s to NMEA 2000 devices successfully; when I run my code at 250 kbit/s it works correctly and talks to my test board (which runs the same code).
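As a sanity check of those numbers (a small Python sketch, not STM32 code), assuming the usual bxCAN relation baud = APB1 / (prescaler * (1 + BS1 + BS2)); the 250 kbit/s combination shown is just one possibility:

APB1 = 36000000  # Hz

def can_baud(prescaler, bs1, bs2):
    # total bit time = 1 sync quantum + BS1 + BS2 time quanta
    return APB1 / (prescaler * (1 + bs1 + bs2))

print(can_baud(4, 15, 2))   # 500000 -> 500 kbit/s
print(can_baud(8, 15, 2))   # 250000 -> 250 kbit/s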
I wondered if the TX and RX pin GPIO speed mattered. Here is my configuration for those pins:
gpio_init_data.GPIO_Speed = GPIO_Speed_10MHz;
gpio_init_data.GPIO_Pin = CAN1_RX;
gpio_init_data.GPIO_Mode = GPIO_Mode_IPU;
GPIO_Init(CAN1_PIN_GROUP, &gpio_init_data);
gpio_init_data.GPIO_Pin = CAN1_TX;
gpio_init_data.GPIO_Mode = GPIO_Mode_AF_PP;
GPIO_Init(CAN1_PIN_GROUP, &gpio_init_data);
When I run at a 500 kbit/s baud rate, all transmissions fail and arbitration lost is flagged: TSR=41000004. This happens even with the RX and TX pins at GPIO speed 50 MHz.
The CAN transceiver is an ISO1050 which, according to the data sheet, can handle up to 1 Mbit/s.
Does anyone have any idea what I could be doing wrong? Could it be a problem in the circuitry?
As Lundin said, "CAN transceivers need an ideal impedance of 60 ohm to work properly."
The system I am using is a test rig with a board under test connected to a test board by about 8 cm of CAN bus cable pair. Up to 250 kbit/s this works perfectly well, but not at 500 kbit/s.
Adding a 56 ohm resistor solves the problem (2 x 120 ohm terminations, one at each end of the bus, giving 60 ohm in parallel, may be better).
Many thanks to Lundin for his patience and excellent information.
Background Information
I am trying to make sure I will be able to run two ADXL345 Accelerometers on the same I2C Bus.
To my understanding, the bus can transmit up to 400 kbit/s in fast mode.
In order to send 1 byte of data, there are 20 extra bits of overhead.
There are 6 bytes per accelerometer reading (XLow, XHigh, YLow, YHigh, ZLow, ZHigh).
I need to do 1000 readings per second with both accelerometers.
Thus,
my total data rate is 336 kbit/s, which is within my limit of 400 kbit/s.
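Spelled out, my arithmetic is (a quick sketch; the 20 overhead bits per data byte is my own assumption):

bits_per_byte = 8 + 20                 # 1 data byte plus my assumed 20 bits of overhead
bits_per_reading = 6 * bits_per_byte   # 6 data bytes per XYZ reading
total = bits_per_reading * 1000 * 2    # 1000 readings/s, 2 accelerometers
print(total)                           # 336000 bits/s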
I am not sure if I am doing these calculations correctly.
Question:
How much data am I transmitting per second with two accelerometers reading 1000 times per second on I2C?
Your math seems to be a bit off. For this accelerometer (from the datasheet: https://www.sparkfun.com/datasheets/Sensors/Accelerometer/ADXL345.pdf), in order to read the 6 bytes of XYZ sample data you need to perform a 6-byte burst read of the registers. In terms of data transfer this means a write of the register address (0x31) to the accelerometer, then a burst read of 6 bytes. Each of these two transfers requires first sending the I2C device address and the R/W bit, plus an ACK/NAK per byte (including the address bytes), plus the START/REPEATED START/STOP conditions. So, overall, an individual transfer to get a single sample (i.e. a single XYZ acceleration vector) looks like this:
Start (*) | Device Address: 0x1D (7) | Write: 0 (1) | ACK (1) | Register Address: 0x31 (8) | ACK (1) | Repeat Start (*) | Device Address: 0x1D (7) | Read: 1 (1) | ACK (1) | DATA0 (8) | ACK(1) | DATA1 (8) | ACK (1) | ... | DATA5 (8) | NAK (1) | Stop (*)
If we add all that up, we get 81 + 3 bits that need to be transmitted. Note first that the START, REPEATED START and STOP might not actually take a full bit's worth of time each, but for simplicity we can assume they do. Note also that while the device address is only 7 bits, you always append the READ/WRITE bit, so an I2C address phase is always 8 bits + ACK/NAK, i.e. 9 bits in total. Note also that the I2C maximum transfer rate really defines the maximum SCL speed the device can handle, so in fast mode SCL is at most 400 kHz (thus 400 kbit/s at most, but because of the protocol you get less real data throughput). Thus, at 84 bits per sample and 400 kHz we can transfer a sample in 0.21 ms, or ~4700 samples/sec, assuming no gaps or breaks in transmission.
Since you need to read 2 samples every 1 ms (2 accelerometers, so 84 bits x 2 = 168 bits per sample period, or 168 kbit/s at a 1 kHz sampling rate), this should at least be possible with fast-mode I2C. However, you will need to be careful that you are making full use of the I2C controller. Depending on the software layer you are working on, it might be difficult to issue I2C burst reads fast enough (i.e. 2 burst-read transactions within 1 ms). Using the FIFO on the accelerometer would significantly relax the latency requirement: instead of having 1 ms to issue two burst reads, you can wait up to 32 ms and then issue 64 burst reads (since you have 2 accelerometers). But since you need to issue a new burst read to read the next sample, you'll have to be careful about the delay introduced by software between calls to whatever API you're using to perform the I2C transactions.
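For reference, a rough recount of that transaction in Python (the numbers come from the breakdown above, not from hardware measurements):

ADDR_PHASE = 8 + 1            # 7-bit address + R/W bit, then ACK
REG_WRITE  = 8 + 1            # register address byte, then ACK
DATA       = 6 * (8 + 1)      # six data bytes, each followed by ACK/NAK
FRAMING    = 3                # START, REPEATED START, STOP (~1 bit each, for simplicity)

bits_per_sample = 2 * ADDR_PHASE + REG_WRITE + DATA + FRAMING   # 84 bits
scl = 400000                                                    # fast-mode SCL, Hz
print(bits_per_sample)                 # 84
print(1000.0 * bits_per_sample / scl)  # ~0.21 ms per sample
print(scl // bits_per_sample)          # ~4761 samples/s with the bus fully utilised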
I am making a wireless device to measure a magnetic field, based on the HMC5983 magneto-resistive sensor and an ESP8266 (NodeMCU ESP-12E module).
The sensor is connected to the ESP8266 over the I2C interface. The ESP8266 polls the sensor and sends the readings to a data collector (a Raspberry Pi).
It is extremely important to me to achieve the greatest possible number of measurements per second, as the quality of the data for later processing depends on it.
The HMC5983 supports the I2C interface in Standard, Fast and High-speed modes, but the NodeMCU I2C module only supports the i2c.SLOW speed:
"common I²C bus speeds are the 100 kbit/s standard mode and the 10 kbit/s low-speed mode" (https://en.wikipedia.org/wiki/I%C2%B2C)
I then connected the HMC5983 directly to the Raspberry Pi via I2C. There I could achieve about 500 measurements per second (by monitoring the DRDY interrupt pin) in single-measurement mode, and 200 measurements per second in continuous-measurement mode (with the Data Output Rate at 220 Hz, which is as expected).
The program was written in Python; here is the code:
#!/usr/bin/python
import smbus   # for I2C access
import time
import os

bus = smbus.SMBus(1)   # use I2C port 1 on the Raspberry Pi
addr = 0x1e            # HMC5983 address 0x1E

bus.write_byte_data(addr, 0x00, 0b00011100)   # write to CRA: data rate 220 Hz
bus.write_byte_data(addr, 0x01, 0b00100000)   # write to CRB: gain 660, +-2.5 Ga, 1.52 mG/LSB

print "Start measuring....."
while True:   # infinite loop
    bus.write_byte_data(addr, 0x02, 0b00000001)   # write to Mode: single-measurement mode
    while bus.read_byte_data(addr, 0x09) == 0b11:   # wait for RDY in the status register
        pass
    # data ready
    data = bus.read_i2c_block_data(addr, 0x03, 6)   # read the six data registers
    # convert three 16-bit two's complement values to decimal and assign x, y, z
    x = data[0]*256 + data[1]
    if x > 32767:
        x -= 65536
    y = data[2]*256 + data[3]
    if y > 32767:
        y -= 65536
    z = data[4]*256 + data[5]
    if z > 32767:
        z -= 65536
    print "X=", x, "\tY=", y, "\tZ=", z
When I connected the HMC5983 to the ESP8266, I could achieve only about 140 measurements per second in single-measurement mode.
----------THIS IS FOR SINGLE-MEASUREMENT MODE-------------
-- init I2C
function H_init(sda, scl)
    i2c.setup(id, sda, scl, i2c.SLOW)
    print("I2C started...")
end

-- read 6 bytes from the sensor
function read_axis()
    i2c.start(id)
    i2c.address(id, dev_addr, i2c.RECEIVER)
    data = i2c.read(id, 6)
    i2c.stop(id)
    return data
end

-- set a register
function set_reg(reg_addr, val)
    i2c.start(id)
    i2c.address(id, dev_addr, i2c.TRANSMITTER)
    i2c.write(id, reg_addr)
    i2c.write(id, val)
    i2c.stop(id)
end

--------GPIO INITIALIZATION-------
drdyn_pin = 3
gpio.mode(drdyn_pin, gpio.INPUT)

-------I2C INITIALIZATION-------
id = 0
i2c = i2c
local i = 0
dev_addr = 0x1e
H_init(1, 2)
set_reg(0x00, 0x1c) -- set data rate 220 Hz
set_reg(0x01, 0x20) -- set gain

print("Start measurement...")
while true do
    set_reg(0x02, 0x01)                   -- single-measurement mode
    while (gpio.read(drdyn_pin) == 1) do  -- wait until DRDY indicates new data
    end
    data = read_axis()
    tmr.wdclr()                           -- keep the watchdog happy
end
After that I configured the sensor for continuous-measurement mode and got the same 200 measurements per second.
Is operation of the I2C interface on the NodeMCU possible at higher speeds? Can somebody tell me how to speed up the sensor polling?
Of course it is possible; the ESP8266 is faster than a Pentium :-) Just a few thousand, or even a few tens of thousands, of measurements per second would be really disappointing for such tremendous processing power. Here is the link to an ESP8266 I2C library written in assembly and tested with the Arduino toolchain. That way you can communicate at a rate of 800000 messages per second at 80 MHz, or one million messages per second at 160 MHz. I believe that would be more than enough for the project you have described; at 80 kHz I2C speed you can have a few tens of thousands of measurements per second - if the slave device can handle such speed.
For any future doubts about whether something could or couldn't be done with the ESP8266, I'd say this is more than enough to get a picture - and in this case I mean it literally :-)
I have been controlling an Arduino from MATLAB using the ArduinoIO-MATLAB interface. In my current setup, three EMG muscle sensors (from Advancer Technologies) are connected to the Arduino at analog pins 1, 2 and 3, and the Arduino is connected to MATLAB. I am trying to collect data from these three pins simultaneously and store it in a 1000x3 matrix. My issue is the rate at which MATLAB samples from the analog pins: it takes about 25 seconds to collect 1000 readings from the 3 pins. I know the Arduino itself samples at a much higher rate. Below is my code. How do I alter it to get a sampling rate of about 1000 samples in 10 seconds?
ar = arduino('COM3');
ax = zeros(1000,3);
for ai = 1:1000
ax(ai,:) = [ar.analogRead(1) ar.analogRead(2) ar.analogRead(3)];
end
delete(ar);
This is the time taken by the above code (profile viewer):
time calls line
< 0.01 1 3 ax = zeros(1000,3);
4
< 0.01 1 5 for ai = 1:1000
25.07 1000 6 ax(ai,:) = [ar.analogRead(1) ar.analogRead(2) ar.analogRead(3)];
1000 7 end
8
1.24 1 9 delete(ar);
Please let me know if there is something else that I need to clarify.
Thanks :D
You need to modify the Arduino C++ code (the .pde file).
In that code you should sample the signal as you prefer (1000 samples, for example) and then transfer the sampled data to MATLAB over the serial port, e.g. with Serial.println().
This can give you a sampling rate of ~3 kHz (depending on a lot of factors)...
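Muhammad's answer is about the Arduino side; just to illustrate the host side of that approach, here is a hypothetical sketch in Python with pyserial (not the ArduinoIO interface the question uses) that reads samples which the Arduino prints as "a1,a2,a3" lines after sampling them itself. The port name, baud rate and line format are assumptions.

import serial   # pyserial

ser = serial.Serial('COM3', 115200, timeout=2)   # assumed port and baud rate
samples = []
while len(samples) < 1000:
    line = ser.readline().decode('ascii', errors='ignore').strip()
    if not line:
        continue                        # timeout or empty line
    parts = line.split(',')
    if len(parts) == 3:                 # one "a1,a2,a3" reading per line
        samples.append([int(p) for p in parts])
ser.close()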
The following very probably explains the result that you are seeing and why you need to do something like what Muhammad's answer suggests. While this reason was implied by his answer, it was not spelt out, so it is spelt out here so that others can avoid the 'trap'.
I do not have access to the underlying code and systems needed to check this answer with certainty. This answer is based on "typical methods" and has a modest chance of being sheer poppycock [tm], but the exact fit between observation and standard methods suggests this is what is happening. A very little delving by someone with the requisite system to hand will demonstrate whether this is correct.
When data is sent one data sample at a time you incur a per-sample overhead significantly in excess of the time taken to just transfer the raw data.
You say it takes 25 seconds to transfer 3000 samples.
The time per sample = 25/3000 = 8.333 ms per sample.
Assume a 9600 baud data transfer rate.
The default communications speed is liable to be 9600 baud. This can be checked, but the result suggests that it may well be correct, and making slightly different assumptions provides an equally good explanation.
Serial comms usually use N81 format = 1 start bit, 8 data bits, 1 stop bit per 8-bit byte.
So 1 bit takes 1/9600 s
and 10 bits take 10/9600 s = 1.042 ms.
And sample time / byte time = 8.333 / 1.042 = 7.997 word times.
In fact, if you do the calculations without rounding or truncation, i.e.
25 / 3000 x 9600 / 10 = 8.000...,
your transfer is taking EXACTLY 8 x 9600-baud word times per sample.
Equally, this is exactly 4 x 4800 baud or 2 x 2400 baud transfer times.
I have not examined the format used, but I imagine that, to work with the PC monitor program, the basic serial routine may use
2 x data bytes + CR + LF = 4 bytes.
That assumes a 16-bit variable sent as 2 x 8-bit binary bytes.
More likely is either
- 16 bits sent as 4 x ASCII characters, or
- 24 bits sent as 6 x ASCII characters.
In the absence of suitably deep delving, the use of 6 ASCII characters plus CR + LF at 9600 baud provides such a good fit using typical parameters that Occam probably opines that this is the best starting point. Regardless of whether the total requirement is 8, 4 or 2 bytes, the somewhat serendipitous exact match between your observed data rate and standard baud rates suggests that this provides the basic reason for what you see.
Looking at the code will rapidly show what baud rate, data length and packing are used.
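Reproducing the arithmetic above in a few lines of Python (9600 baud and N81 framing are the assumptions made in this answer):

samples = 3000                       # 1000 loop iterations x 3 analogRead calls
elapsed = 25.0                       # seconds, from the profiler output
per_sample = elapsed / samples       # seconds per sample
frame_time = 10.0 / 9600             # one N81 frame (10 bits) at 9600 baud
print(per_sample * 1000)             # 8.333... ms per sample
print(per_sample / frame_time)       # 8.0 -> exactly eight 9600-baud word times per sample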