I have working C & Python3 code, based upon simple examples from the internet, where I can correctly send data from my Raspberry Pi4 to an Atmel SAM-E70 dev kit board. I've got a logic analyzer connected to look at the data being sent, and for every i2c_write_data_block() from my Python3 code, the smbus2 code sends the 7-bit address, followed by 0x00, followed by the byte stream that I want to send. My C code, sending the same byte streams, doesn't have the 0x00 between the address and the data. Finally, sending the byte stream using i2ctransfer() from the shell also works as expected: no extra byte.
Hypothetically, it could be that the smbus2 package is trying to use a 10-bit address, but I cannot find any documentation supporting this supposition. In fact, what I've found indicates that the I2C bus configuration is performed via config file(s), which would lead me to believe that the language used to communicate on the I2C bus shouldn't matter - it would have the same configuration.
Has anyone else encountered this?
I noticed the same thing using the smbus2 block-write call (write_i2c_block_data()). The extra 0x00 is not part of the address: it is the command/register byte that the SMBus block-write protocol always sends first, and smbus2 puts whatever you pass as the register argument (here 0x00) into that slot.
To avoid sending that byte and transmit a raw byte stream instead, use the i2c_rdwr() function.
Example:
from smbus2 import SMBus, i2c_msg

bus = SMBus(1)
address = 4
data = 'Some message'.encode()
msg = i2c_msg.write(address, data)
bus.i2c_rdwr(msg)
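For comparison, this is roughly what i2c_rdwr() does underneath: a single plain I2C write message issued through the I2C_RDWR ioctl, so only the address and the payload appear on the bus. A minimal C sketch (bus number and slave address are placeholders, and error handling is omitted):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/i2c.h>
#include <linux/i2c-dev.h>

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);            /* bus 1, as in the Python example */
    unsigned char payload[] = "Some message";

    struct i2c_msg msg;
    msg.addr  = 0x04;                               /* 7-bit slave address */
    msg.flags = 0;                                  /* 0 = write */
    msg.len   = strlen((char *)payload);
    msg.buf   = payload;

    struct i2c_rdwr_ioctl_data xfer = { &msg, 1 };  /* one message, no command byte */
    ioctl(fd, I2C_RDWR, &xfer);

    close(fd);
    return 0;
}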
For a fast ADC sampling USB device, I am using the USB 2.0 High Speed capable STM32F733 with the embedded USB-HS PHY. In USBView, I can see that the device is enumerated, the libusb code opens the device and claims the interface, but when I try to receive data with libusb_bulk_transfer, the operation times out (return code -12).
Things I have tried: I have confirmed that when I request data with libusb_bulk_transfer, the device is interrupted. Note: I have DMA enabled in my class configuration C file and it is not clear to me how that is triggered. I have verified that the transfer size and packet count registers are being set correctly by the LL library function, and that when I request data from
Any tips on debugging such problems will be much appreciated - this board is for my undergrad thesis, due in under two months!
Desktop sequence:
libusb_get_device_list, libusb_get_device_descriptor, libusb_open, libusb_get_string_descriptor_ascii, libusb_free_device_list, libusb_bulk_transfer(devh, fat_EPIN_ADDR, inframe, fat_EPIN_SIZE, &gotBytes, 100), where gotBytes is an integer and inframe is a large array.
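For reference, a minimal host-side sketch of that sequence with libusb-1.0 (the VID/PID are placeholders, substitute the values shown in USBView; error handling is abbreviated):

#include <libusb-1.0/libusb.h>
#include <stdio.h>

#define EP_IN    0x81
#define EP_SIZE  512

int main(void)
{
    libusb_context *ctx = NULL;
    libusb_init(&ctx);

    /* placeholder VID/PID -- use the values from the device descriptor */
    libusb_device_handle *devh = libusb_open_device_with_vid_pid(ctx, 0x0483, 0x5740);
    if (!devh) { fprintf(stderr, "device not found\n"); return 1; }

    libusb_claim_interface(devh, 0);

    unsigned char inframe[EP_SIZE];
    int gotBytes = 0;
    int rc = libusb_bulk_transfer(devh, EP_IN, inframe, sizeof inframe, &gotBytes, 100);
    printf("rc=%d (%s), got %d bytes\n", rc, libusb_error_name(rc), gotBytes);

    libusb_release_interface(devh, 0);
    libusb_close(devh);
    libusb_exit(ctx);
    return 0;
}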
Device firmware:
MX_USB_DEVICE_Init();

uint8_t txBuffer[10 * fat_EPIN_SIZE];

while (1)
{
    USBD_LL_Transmit(&hUsbDeviceHS, Custom_fat_EPIN_ADDR, txBuffer, Custom_fat_EPIN_SIZE);
    HAL_Delay(1);
}
Custom_fat_EPIN_SIZE is 0x200 (512 bytes) and the endpoint address is 0x81 (EP IN 1).
The installed driver for the device is WinUSB (verified in Device Manager to be winusb.sys), and I am linking libusb-1.0 into my desktop program. You can find my source code at https://gitlab.com/tywonemi-school-stuff/silicon-radar-fun; the firmware is My SW/v1 and the desktop software is a Qt Creator project in My SW/Viewer, of note is usb.cpp. You can also compare with testing project/HIDTest, which is code I tested with an STM32F303 Nucleo dev board, where I was able to read an array through an IN bulk endpoint with the Viewer application. However, the F3 has the USB peripheral while the F7 has OTG_USB, and I am now attempting USB 2.0 compliant HS, so there may be more protocol-based pitfalls. You can also find the output of the device descriptor etc. from USBView in My SW/USBView_broken.txt.
EDIT 1: I have finally found some concrete error in the STM32 behavior. The DMAADDR is set for EPIN 0x81 and never increments, despite the DMA being enabled. I have gone through literally every occurrence of the word "DMA" in the USB_OTG peripheral.
I thought it might be that my linker script places my array in DTCM or similar, and the OTG DMA can't access it, but the address of txBuffer is 0x2003EBEC, which is in SRAM2. The AHB matrix in the reference manual clearly shows that the USB OTG HS DMA is a master of a bus that SRAM2 is a slave on. And DTCM is connected too. I will look for application notes for USB OTG HS DMA - it just seems to be refusing to copy data!
I have fixed my issue by disabling the DMA setting. I have re-read the relevant portions of the reference manual and still don't know exactly how the values propagate into the Tx FIFOs. It is possible that DMA-less operation will be a major bottleneck in my project; I might return to this later.
I am trying to initialize the NRF24L01+ registers using SPI, but they always return 0x00.
According to the datasheet, Table 20 on page 51, all write commands have the pattern 0b001x xxxx, which I understood as a 0x2X pattern.
In my snapshot below, I send the register value; for example, register 0x00 is sent as 0x20, indicating a write command to that register, and then I send the value to be written to that register.
As you can see on the MISO line, the value is 0x00 even when I am trying to write 0x08, which should be the default value according to page 57 of the datasheet.
I still don't know why it returns 0x00 even when I independently try to read the contents of that register later on, without writing to it. I still get 0x00. The same applies to all the other registers that I am trying to re-initialize.
Has anyone experienced this behaviour elsewhere, or am I doing something wrong?
The NRF24 I am trying to program here is this type from SparkFun.
You are close. The datasheet shows write register as 001A AAAA and read as 000A AAAA, where the five A's represent the register you want to access. The spec states that while the command byte is being sent (read, write, read payload, write payload, flush, activate, and so on), the device returns the status register. In your data, the device is responding with 0x0E, which is correct; decoded, it says no errors and no data received or pending to transmit. If you want to see whether the command you sent was accepted, you need to first write the data and then read it back. For example, let's say we want to write the config register to enable the device as a receiver, with 2-byte CRC and RX interrupts enabled.
First, you would send 0b00100000 (0x20), 0b00111111 (0x3F). The device will respond with 0b00001110 (0x0E), 0b00000000 (0x00). This is what you are seeing. If you want to verify the configuration register, you need to then send 0b00000000 (0x00), which is the command to read the config register, and then 0b00000000 (0x00) again, which is a dummy byte to clock out the data. The device will respond with 0x0E, which is the status, and then 0x3F, assuming you configured it as I did above.
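Put into code, the write-then-read-back sequence looks roughly like this. It is only a sketch: spi_transfer(), csn_low() and csn_high() are hypothetical helpers standing in for whatever SPI byte-exchange and chip-select routines your MCU code uses.

#include <stdint.h>

extern uint8_t spi_transfer(uint8_t byte);   /* exchanges one byte on MOSI/MISO */
extern void    csn_low(void);                /* pull the nRF24 CSN line low  */
extern void    csn_high(void);               /* pull the nRF24 CSN line high */

#define CMD_W_REGISTER  0x20   /* 001A AAAA */
#define CMD_R_REGISTER  0x00   /* 000A AAAA */
#define REG_CONFIG      0x00

/* Write CONFIG, then read it back to verify. Returns the read-back value. */
static uint8_t nrf24_write_verify_config(uint8_t value)
{
    csn_low();
    (void)spi_transfer(CMD_W_REGISTER | REG_CONFIG); /* MISO returns the status byte (e.g. 0x0E) */
    (void)spi_transfer(value);
    csn_high();

    csn_low();
    (void)spi_transfer(CMD_R_REGISTER | REG_CONFIG); /* again returns the status byte */
    uint8_t readback = spi_transfer(0x00);           /* dummy byte clocks out the register value */
    csn_high();

    return readback;                                 /* should equal 'value', e.g. 0x3F */
}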
Note that there are more commands than just reading and writing the registers; there are specific commands to fill and read the payload FIFOs.
Hope this helps.
I'm using an AT24C512 EEPROM (512 Kbit, i.e. 64 KB) with my STM32.
I'm able to write 128 bytes of data at once using
HAL_I2C_Mem_Write(&_EEPROM24XX_I2C,0xa0,Address,I2C_MEMADD_SIZE_16BIT,(uint8_t*)data,size_of_data,100)
but the issue is that I want to write more data after the data that was just written, and the EEPROM overwrites the previous data because the address is the same,
so how can I skip past the addresses that have already been written?
This answer is not about using HAL with I2C, but I hope it will point you in the right direction.
Just check the datasheet (I am looking at the STM32F0) and you can see that the limit is 255 bytes (register CR2:NBYTES). I'm not sure whether there is another limitation in HAL, but with direct register access you can send 255 bytes at once, or fragment the transfer and send as much as you want.
For fragmenting there is the CR2:RELOAD bit: if you set it, the transfer is not stopped after NBYTES bytes, and you can then program the next NBYTES value. When you set up the last block of bytes (the one that fits into NBYTES), clear CR2:RELOAD.
This has one disadvantage: you get interrupted every 255 bytes.
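A minimal register-level sketch of that fragmenting scheme, assuming an STM32F0 whose I2C is already configured as a master (CMSIS register names; NACK and error handling omitted):

#include "stm32f0xx.h"

/* Master-transmit more than 255 bytes by reloading NBYTES (CR2:RELOAD). */
static void i2c_write_long(I2C_TypeDef *i2c, uint8_t addr7, const uint8_t *data, uint32_t len)
{
    if (len == 0)
        return;

    uint32_t chunk = (len > 255) ? 255 : len;

    i2c->CR2 = ((uint32_t)addr7 << 1)
             | (chunk << I2C_CR2_NBYTES_Pos)
             | ((len > 255) ? I2C_CR2_RELOAD : I2C_CR2_AUTOEND)
             | I2C_CR2_START;

    while (1)
    {
        for (uint32_t i = 0; i < chunk; i++)
        {
            while (!(i2c->ISR & I2C_ISR_TXIS)) { }   /* wait until TXDR can take a byte */
            i2c->TXDR = *data++;
        }
        len -= chunk;
        if (len == 0)
            break;

        while (!(i2c->ISR & I2C_ISR_TCR)) { }        /* NBYTES sent, reload required */
        chunk = (len > 255) ? 255 : len;
        MODIFY_REG(i2c->CR2,
                   I2C_CR2_NBYTES | I2C_CR2_RELOAD,
                   (chunk << I2C_CR2_NBYTES_Pos)
                   | ((len > 255) ? I2C_CR2_RELOAD : I2C_CR2_AUTOEND));
    }
}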
I think you should check the AT24C512 datasheet, page 7:
If more than 128 data words are transmitted to the EEPROM, the data word address will “roll over” and previous data will be overwritten. The address “roll over” during write is from the last byte of the current page to the first byte of the same page.
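In other words, a single write must stay within one 128-byte page. To store more data, split the write into page-sized chunks and advance the memory address after each one. A minimal sketch using the same HAL call (the handle name matches the question; the include and device address are assumptions for a typical CubeMX project with A0-A2 tied low):

#include "main.h"   /* or your family's stm32xxxx_hal.h */

#define EEPROM_ADDR  0xA0   /* AT24C512 with A0..A2 tied low */
#define EEPROM_PAGE  128    /* AT24C512 page size in bytes */

extern I2C_HandleTypeDef _EEPROM24XX_I2C;

/* Write an arbitrary-length buffer one page (or less) at a time,
   advancing the memory address so earlier pages are not overwritten. */
HAL_StatusTypeDef eeprom_write(uint16_t addr, const uint8_t *data, uint16_t len)
{
    while (len > 0)
    {
        /* never cross a 128-byte page boundary within a single write */
        uint16_t chunk = EEPROM_PAGE - (addr % EEPROM_PAGE);
        if (chunk > len) chunk = len;

        HAL_StatusTypeDef st = HAL_I2C_Mem_Write(&_EEPROM24XX_I2C, EEPROM_ADDR, addr,
                                                 I2C_MEMADD_SIZE_16BIT,
                                                 (uint8_t *)data, chunk, 100);
        if (st != HAL_OK) return st;

        /* wait for the EEPROM's internal write cycle (up to ~5 ms) to finish */
        while (HAL_I2C_IsDeviceReady(&_EEPROM24XX_I2C, EEPROM_ADDR, 1, 10) != HAL_OK) { }

        addr += chunk;   /* advance past what was just written */
        data += chunk;
        len  -= chunk;
    }
    return HAL_OK;
}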
I would like to read a block of data from my Arduino Mega (and also from an Arduino Micro in another project) with my Raspberry Pi via I2C. The code has to be in Perl because it's sort of a plug-in for my Home-Automation-Server.
I'm using the Device::SMBus interface and the connection works; I'm able to write and read single bytes. I can even use writeBlockData with register address 0x00. I randomly discovered that this address works.
But when I want to readBlockData, no register-address seems to work.
Does anyone know the correct register-address, or is that not even the problem that causes errors?
Thanks in advance
First off, which register(s) do you want to read? Here's an example using my RPi::I2C software (it should be exceptionally similar to the distribution you're using), along with a sketch that has a bunch of pseudo-registers configured for reading/writing.
First, the Perl code. It reads two bytes (the output of an analogRead() of pin A0 which is set up as register 80), then bit-shifts the two bytes into a 16-bit integer to get the full 0-1023 value of the pin:
use warnings;
use strict;
use RPi::I2C;
my $arduino_addr = 0x04;
my $arduino = RPi::I2C->new($arduino_addr);
my @bytes = $arduino->read_block(2, 80);
my $a0_value = ($bytes[0] << 8) | $bytes[1];
print "$a0_value\n";
Here's a full-blown Arduino sketch you can review that sets up a half dozen or so pseudo-registers, and when each register is specified, the Arduino writes or reads the appropriate data. If no register is specified, it operates on 0x00 register.
The I2C on the Arduino always does an onReceive() call before it does the onRequest() (when using Wire), so I set up a global variable reg to hold the register value, which I populate in the onReceive() interrupt, which is then used in the onRequest() call to send you the data at the pseudo-register you've specified.
The sketch itself doesn't really do anything useful, I just presented it as an example. It's actually part of my automated unit test platform for my RPi::WiringPi distribution.
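A trimmed-down illustration of that onReceive()/onRequest() pattern (not the full sketch; the slave address, register number and A0 pin match the Perl example above, the rest is an assumption):

#include <Wire.h>

const byte I2C_ADDR = 0x04;
const byte REG_A0   = 80;      // pseudo-register: two-byte analogRead(A0) result

volatile byte reg = 0x00;      // register selected by the most recent master write

void receiveEvent(int numBytes) {
    // read_block() first writes the register number, so this interrupt
    // fires before requestEvent() and records which register was asked for
    if (numBytes > 0) {
        reg = Wire.read();
    }
    while (Wire.available()) {
        Wire.read();           // discard any extra payload bytes
    }
}

void requestEvent() {
    if (reg == REG_A0) {
        int v = analogRead(A0);                       // 0-1023
        byte buf[2] = { (byte)(v >> 8), (byte)(v & 0xFF) };
        Wire.write(buf, 2);                           // high byte first, as the Perl code expects
    } else {
        Wire.write((byte)0x00);                       // default register
    }
}

void setup() {
    Wire.begin(I2C_ADDR);
    Wire.onReceive(receiveEvent);
    Wire.onRequest(requestEvent);
}

void loop() { }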
I have two Series 1 XBees. I have them as end devices working in API mode and talking to each other. The first XBee is attached to a Raspberry Pi, while the other is on my PC, where I watch the terminal tab of the XCTU program. The baud rate I use is 125000.
From the Raspberry Pi I try to send a JPG image which is 30 KB. I send data frames 100 bytes long (the largest allowed, according to the XBee documentation). Inside a loop I create and send the packets; I also have a cout statement that prints the loop number. Everything is fine and all bytes are sent. When I comment out the cout statement, not all bytes are sent.
From what I understand, the cout statement acts as a delay between packets, but I still cannot understand why this is happening, since I am supposedly only using half the available speed...
I hope I was clear and look forward to a reply.
UPDATE
Just to summarize: I changed the baud rate to 250000, where the behavior is the same as at 125000. I also implemented hardware flow control by checking the CTS signal. When the XBees are in transparent mode I need a delay of around 150 µs between characters; the same goes for API mode. The difference at 125000 baud in API mode was that it was enough to put the delay between data packets, whereas at 250000 the delay is needed between each byte that I send. If I do the above, everything goes well.
The next thing I did was plug both XBees into my PC in transparent mode. I went to the terminal tab of the XCTU software, chose "assemble packet", and sent around 3000 bytes to the other XBee. The result was the same: the second XBee received about 1500 bytes, and then each time I sent one byte from the first to the second, the "lost" bytes were received in packets of about 1000. :/
So, does anyone know what I am doing wrong?
You should connect the /CTS pin from the XBee module to the Raspberry Pi, and have your routine stop sending data when the XBee de-asserts it.
At higher baud rates, it's possible to stream data into the XBee module faster than it can send to the remote module. The local XBee module uses the /CTS pin to notify the host when its buffers are almost full and the host should stop sending. People refer to this as hardware flow control.
It may be necessary to modify the serial driver on the Raspberry Pi to make use of that signal -- it should pause the transmit buffer when de-asserted, and automatically resume sending when re-asserted.
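If the XBee's /CTS line is wired to the Pi's UART CTS pin (and that pin function is enabled), you can let the kernel's serial driver do this for you by turning on RTS/CTS flow control on the port. A minimal C sketch using termios (device path and baud rate are placeholders; a nonstandard rate like 125000 needs extra, platform-specific steps):

#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

/* Open the serial port with hardware flow control, so writes stall
   automatically while the XBee de-asserts /CTS. */
int open_xbee(const char *dev)
{
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return -1; }

    struct termios tio;
    if (tcgetattr(fd, &tio) < 0) { perror("tcgetattr"); close(fd); return -1; }

    cfmakeraw(&tio);
    cfsetispeed(&tio, B115200);                /* placeholder standard rate */
    cfsetospeed(&tio, B115200);
    tio.c_cflag |= CRTSCTS | CLOCAL | CREAD;   /* CRTSCTS = RTS/CTS flow control */

    if (tcsetattr(fd, TCSANOW, &tio) < 0) { perror("tcsetattr"); close(fd); return -1; }
    return fd;
}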