Hey guys, I've been working on this for something like 72 hours straight and I can't find the error. I'm working on a PIC16F1719, trying to set up three peripherals: an ADC, I2C, and a USART for communicating with a BT module. The ADC was easy, but I'm having a rough time with the I2C even though I've checked the code several times. When I get the ACKs everything seems OK, but when I go to read the sensor (MPU6050), nothing shows up except the value I last put in the buffer. Any ideas why this is happening? It's like the buffer doesn't clear itself, and I don't think I can clear it through software. Thanks.
An I2C slave has the ability to lock the bus if the master does not communicate correctly with it (there are several possible scenarios...).
This is electrically possible because the two wires are wired-AND: if any slave pulls the clock (for example) down and keeps it that way, the bus is locked.
Always check the values on both wires first (using a scope or DVM); if either reads '0', the bus is locked.
Next, test the status register of your I2C controller; it may show an arbitration error or something of that sort.
If you see any of these errors, read the I2C slave part's datasheet carefully to check what types of protocol reads/writes it expects, and fix your code.
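If the bus does turn out to be locked, the usual recovery is to clock SCL manually until the slave releases SDA, then issue a STOP. A minimal sketch, assuming hypothetical GPIO helpers (scl_high/scl_low, sda_high/sda_low, sda_read and delay_us stand in for your chip's pin access):

/* Bus-recovery sketch: clock out the byte a stuck slave thinks it is
 * sending (at most 9 SCL pulses), then generate a STOP condition.
 * All pin helpers below are placeholders for chip-specific GPIO code. */
void i2c_bus_recover(void)
{
    for (int i = 0; i < 9 && sda_read() == 0; i++) {
        scl_low();
        delay_us(5);          /* ~100 kHz half-period */
        scl_high();
        delay_us(5);
    }
    /* STOP condition: SDA rising while SCL is high */
    sda_low();
    delay_us(5);
    scl_high();
    delay_us(5);
    sda_high();
}

After this the I2C peripheral can be re-initialized and a normal transfer retried.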
We have implemented a custom driver that uses DMA to copy a large amount of data from the FMC interface (an FPGA is mapped to it) to RAM, using the STM32 MDMA engine with its 32 DMA channels. The FPGA contains a small FIFO we want to copy the data from.
For very fast data acquisition the setup time for new DMA transactions becomes critical!
The first implementation used a workqueue to create the next DMA transaction; it could not be done directly from the atomic "dma_completed" context because some necessary I/O has to sleep. This led to pauses of up to 5 ms between DMA transactions and to buffer overflows in the FPGA's FIFO.
As I am copying from a memory-mapped region to RAM, I am using dmaengine_prep_dma_memcpy.
I implemented a number of improvements that reduced the pause between DMAs (a sketch of the resulting double-buffered setup follows this list):
I am fusing DMA-mapped pages so that fewer DMA transaction entries have to be created and therefore less DMA engine programming is necessary.
I am preparing the next DMA pages up front, so the next DMA transaction can be started directly from the "dma_completed" routine.
I am using a second DMA channel and toggling between the two when dma_completed is called. This allows a second DMA to be set up while the first one is still running. Although the Linux DMA API allows this with a single channel, the MDMA engine does not and ignores the added transactions.
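As a rough illustration of that double-buffered scheme (a sketch only, not our driver: chan[], dst[], src and LEN are placeholders), the ping-pong logic with the dmaengine API looks roughly like this:

/* Ping-pong sketch with two MDMA channels (kernel code).
 * chan[0]/chan[1], dst[], src and LEN are placeholders.
 * Both channels are started once at init; after that each completion
 * immediately re-arms the channel that just finished. */
#define LEN (64 * 1024)   /* placeholder transfer size */

static struct dma_chan *chan[2];
static dma_addr_t dst[2], src;

static void dma_completed(void *param);

static int start_next(int idx)
{
    struct dma_async_tx_descriptor *tx;

    tx = dmaengine_prep_dma_memcpy(chan[idx], dst[idx], src, LEN,
                                   DMA_PREP_INTERRUPT);
    if (!tx)
        return -EIO;
    tx->callback = dma_completed;
    tx->callback_param = (void *)(long)idx;
    dmaengine_submit(tx);
    dma_async_issue_pending(chan[idx]);
    return 0;
}

static void dma_completed(void *param)
{
    int done = (long)param;

    /* The other channel is already running; immediately re-arm the
     * channel that just finished so there is no setup gap. */
    start_next(done);
    /* hand dst[done] to the consumer here */
}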
Usually the pause is now below 1 ms, but there are spikes where the FIFO comes close to overflowing.
Finally I tried to use dmaengine_prep_dma_cyclic. This would be perfect: a continuously running DMA with no need for setup time between interrupts.
But this does not work. Or rather: I do not get it to work...
The transaction created with dmaengine_prep_dma_cyclic does not want to start!
I am getting a new dma_cookie, and any status request to the channel returns "DMA_IN_PROGRESS". It never completes, and the completion callback is never called.
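For reference, the failing cyclic attempt looks roughly like this (a sketch; the buffer address, lengths and the DMA_DEV_TO_MEM direction are placeholders for what the driver actually passes):

/* Sketch of the cyclic setup that never starts; chan[0]/dst[0] as in
 * the sketch above, BUF_LEN/PERIOD_LEN are placeholders. */
struct dma_async_tx_descriptor *tx;
dma_cookie_t cookie;

tx = dmaengine_prep_dma_cyclic(chan[0], dst[0], BUF_LEN, PERIOD_LEN,
                               DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
if (tx) {
    tx->callback = dma_completed;
    tx->callback_param = NULL;
    cookie = dmaengine_submit(tx);
    dma_async_issue_pending(chan[0]);
    /* dma_async_is_tx_complete() then reports DMA_IN_PROGRESS forever */
}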
Though dmaengine_prep_dma_memcpy works fine...
I think this is because of the difference between software- and hardware-triggered DMA transactions.
Looking into stm32-mdma.c, I see that dmaengine_prep_dma_memcpy has its own setup routine, whereas dmaengine_prep_dma_cyclic uses stm32_mdma_set_xfer_param(), which always configures a HW request.
My big questions:
Is there a way to use dmaengine_prep_dma_cyclic for a MEMORY-to-MEMORY DMA transaction (software triggered)? This would be the perfect solution to my performance problem...
Are we missing some signals to connect the FPGA to the SOC? My FPGA programming colleague suspects a missing TSEL (trigger selection) setting, and that dmaengine_prep_dma_cyclic would work then.
If a minimal driver module source code example would help in getting better answers, I can provide one on short notice. Please note that this is highly hardware specific; SOCs other than the STM32MP157F may behave differently.
Thanks for any feedback!
Bye Gunther
References:
https://wiki.st.com/stm32mpu/wiki/Dmaengine_overview
https://github.com/STMicroelectronics/linux/blob/v5.15-stm32mp/drivers/dma/stm32-mdma.c
I am trying to use CANopenNode on an STM32L476 device, with libohiboard as the HAL library. In the network I have: (i) my board, which operates as the master, and (ii) a commercial node. At startup the node sends HB and SYNC messages. When my board calls
CO_NMT_sendCommand(CO->NMT,CO_NMT_ENTER_OPERATIONAL, 0x0A);
the master starts sending the same message continuously, without stopping!
With a logic analyzer I captured the bus (screenshot omitted),
where Channel 0 is the TX pin of the microcontroller and Channel 1 is the RX pin.
I can't understand why the message comes back on the RX pin immediately! I checked the microcontroller configuration and loopback mode is OFF.
Thanks
This looks like normal CAN operation: all messages are immediately echoed back while they are being sent, otherwise bus arbitration wouldn't work. The only difference is the ACK bit, which you can see set on the RX line but not on TX; that bit is filled in by the other CAN node on the bus.
The reason why your node keeps sending the same message doesn't seem related to this.
I don't know how it works on your controller, but usually you have to take care to send the NMT start command only when your slave node doesn't return any heartbeat, or when the heartbeat value differs from the expected mode (pre-operational or operational, for example).
If the slave doesn't return anything there might be multiple reasons:
nothing is activated, so you first have to set a heartbeat time using the right SDO
the slave uses node guarding instead of heartbeat, so you first have to query the slave with a message ID of 0x700 + node ID and DLC 0 (an RTR frame; see the sketch below)
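For the node-guarding case, the query is a remote frame. A minimal sketch, assuming a hypothetical can_frame_t type and can_send() helper in place of your driver's transmit call:

/* Node-guard request sketch. can_frame_t and can_send() are
 * placeholders for your CAN driver's API. The slave answers with
 * its NMT state plus a toggle bit. */
#include <stdint.h>

void query_node_guard(uint8_t node_id)
{
    can_frame_t f = {0};
    f.id  = 0x700u + node_id;   /* NMT error control COB-ID */
    f.rtr = 1;                  /* remote frame */
    f.dlc = 0;                  /* no data */
    can_send(&f);
}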
Please let me know if it is not clear or doesn't help
I have a Nucleo-F446RE, and I'm trying to get I2C working with an IMU I have (the LSM6DS33). I am using STM32CubeMX and checked out all the example code for my board related to I2C. Specifically I'll be talking about their 'I2C_TwoBoards_ComIT' example, but all their examples that use the interrupt method have this same quirk. Here is a snippet of their code from main.c:
/* The board sends the message and expects to receive it back */
do
{
  /*##-2- Start the transmission process #####################################*/
  /* While the I2C is in the reception process, the user can transmit data
     through the "aTxBuffer" buffer */
  if (HAL_I2C_Master_Transmit_IT(&I2cHandle, (uint16_t)I2C_ADDRESS,
                                 (uint8_t *)aTxBuffer, TXBUFFERSIZE) != HAL_OK)
  {
    /* Error_Handler() function is called in case of error. */
    Error_Handler();
  }

  /*##-3- Wait for the end of the transfer ###################################*/
  /* Before starting a new communication transfer, you need to check the
     current state of the peripheral; if it is busy you need to wait for the
     end of the current transfer before starting a new one.
     For simplicity reasons, this example just waits until the end of the
     transfer, but the application may perform other tasks while the transfer
     operation is ongoing. */
  while (HAL_I2C_GetState(&I2cHandle) != HAL_I2C_STATE_READY)
  {
  }

  /* When an Acknowledge failure occurs (the slave doesn't acknowledge its
     address), the master restarts communication */
}
while (HAL_I2C_GetError(&I2cHandle) == HAL_I2C_ERROR_AF);
Under comment ##-3- they explain that unless we wait for the I2C state to be ready again, after sending a command, the next command will overwrite the previous one, so they use a while loop which waits for the I2C state to be 'ready' before continuing.
Isn't this a very inefficient way to use an interrupt, and no different from using the standard polling method? Both block the main code, so what's the purpose of the interrupt?
In my personal example, I want to collect the accelerometer/gyroscope data at the 1.66 kHz rate the IMU is capable of. I use a 2 kHz timer to send an I2C command to read the acc/gyr data-ready register, and if the data is ready for either sensor I read its 6 bytes to get the x/y/z axis information. The polling method is too slow, since blocking the code at a 2 kHz rate is inefficient, but the interrupt method doesn't seem any faster, because I still need to hang the system in the aforementioned while loop to check whether the I2C is ready for another command. What am I missing here?
Is this (the example you provided) an efficient way of doing things? No. Can the blocking part be avoided? Yes. It's only a small example, a proof of concept, so there is some blocking in there. You should look deeper at why it is there and how you can implement what it does without blocking.
The point of that blocking part is to not start an I2C communication while another I2C communication is in progress. The problem is that while your line of code to send something over I2C has already executed, the data is still being physically sent over the line, simply because your MCU is much faster than the I2C bus. You need to wait until the I2C line is idle and available for transmission.
How do you achieve that with interrupts without wasting cycles and processing time? Given that in your case you can easily estimate the amount of data per transmission, there is no problem estimating how much time each transmission will take at your I2C speed. Since you're smartly and correctly using a timer to schedule regular transmissions, you should be able to set the timer in such a way that by the next timer interrupt, which will send data, your previous communication has already ended.
For example, if you set the timer to 1 Hz to start a transmission, you can obviously be sure that by the next interrupt all the communication has happened. You don't need to poll anything at all.
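A minimal sketch of that idea with the HAL's interrupt-mode API (the callbacks are standard HAL weak functions; IMU_ADDR, OUT_REG and imu_buf are placeholders for your sensor's address, register and data):

/* Timer tick starts one interrupt-driven read; the HAL callback fires
 * from the I2C ISR when the bytes have arrived. No busy-waiting. */
volatile uint8_t imu_buf[12];

void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
  /* If the previous transfer somehow hasn't finished, skip this tick
     instead of blocking. */
  if (HAL_I2C_GetState(&I2cHandle) != HAL_I2C_STATE_READY)
    return;

  HAL_I2C_Mem_Read_IT(&I2cHandle, IMU_ADDR, OUT_REG,
                      I2C_MEMADD_SIZE_8BIT,
                      (uint8_t *)imu_buf, sizeof imu_buf);
}

/* Called by the HAL when the read completes. */
void HAL_I2C_MemRxCpltCallback(I2C_HandleTypeDef *hi2c)
{
  /* imu_buf now holds fresh acc/gyro data; set a flag for the main loop. */
}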
I don't see much point in I2C-polling the IC at 2 kHz if it produces data at 1.66 kHz. You will have uneven time periods between samples: some data will be very fresh, some will come with a small delay, and some transactions will find no data ready at all. It would be better to poll it at something like 1.5-1.6 kHz and just expect data to always be there. Of course, the communication has to fit into the 1.5 kHz period, which requires some napkin math (see below).
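As a rough example of that napkin math, assuming 400 kHz Fast-mode I2C and a 12-byte burst read: each byte on the bus takes 9 clocks (8 data bits plus ACK), and with the device address, register pointer and repeated-start addressing the transaction is roughly 15 bytes, i.e. about 15 x 9 / 400 kHz ≈ 340 µs. That fits comfortably inside the ~625 µs period of a 1.6 kHz timer.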
I have 8 external MCP4728 DACs that I'm communicating with over I2C.
The data comes directly from a USB cable (every 16-17 ms) and I need to update/write those values as soon as I can.
Right now I'm writing to the I2C inside the USB callback function.
Normally I see code (not DAC related) where a flag is set in the callback and then, in the main loop, when the update flag is true, the work is performed (in this case, setting the DACs), as in the sketch below.
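A minimal sketch of that deferred-update pattern (usb_rx_callback and dac_write_all are placeholder names, not MCP4728 code):

/* Flag-in-main-loop pattern: keep the USB callback short, defer the
 * slow I2C traffic to the foreground loop. */
#include <stdbool.h>
#include <stdint.h>

void dac_write_all(volatile const uint16_t *v);  /* placeholder: I2C writes */

volatile bool dac_update_pending = false;
volatile uint16_t dac_values[8];

void usb_rx_callback(const uint16_t *data)       /* interrupt context */
{
    for (int i = 0; i < 8; i++)
        dac_values[i] = data[i];
    dac_update_pending = true;
}

int main(void)
{
    for (;;) {
        if (dac_update_pending) {
            dac_update_pending = false;
            dac_write_all(dac_values);           /* I2C happens here */
        }
        /* other foreground work */
    }
}

In real code you would also guard against the callback overwriting dac_values while the write is in progress (double buffer, or briefly mask the interrupt).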
Could this implementation be the source of those spikes (which typically appear only with high-frequency values)?
Could bit skew be the problem with a 17 ms update time?
I have been programming a little AHCI driver for two weeks. I have read this article and Intel's Serial ATA Advanced Host Controller Interface (AHCI) 1.3 specification. There is an example that shows how to read sectors via DMA mode (osdev.org). I have performed this operation (ATA_CMD_READ_DMA, 0xC8) successfully, but when I tried to write sectors (ATA_CMD_WRITE_DMA, 0xCA) to the device, the HBA set the error
Offset 30h: PxSERR – Port x Serial ATA Error - Handshake Error
(this is the decoding from the Intel AHCI specification). I don't understand why this happens. Please help me.
In addition, I have tried to issue the IDENTIFY command (0xEC), but without success...
You asked this question nearly two months ago, so I'm not sure if you've already figured this out. Please note that I'm writing from memory in terms of what must be done first, etc.; I may not have remembered everything, or remembered it accurately. You should reference the AHCI spec for everything. The methods for doing this are as varied as the programmers who have done it, so I'm not including detailed code examples.
For starters, ensure that you've set up the HBA state machine accordingly. You'll find references for the state machines supported by the HBA in that same SATA spec 1.3. Short of that, you should check a few registers.
Please note that all page numbers are given as viewed in Adobe Acrobat and are 8 pages higher than the numbers printed in the actual document.
From pages 24 and 25 of the spec, check GHC.IE and GHC.AE. These two turn on interrupts and ensure that the HBA is working in AHCI mode. Another very important register to check is CAP.SSS (page 23). If this bit is high, the HBA supports staggered spin-up, meaning it will not perform protocol negotiation for any port on its own. Before you do the following, store the value of PxSIG (pages 35 and 36).
To actually spin up a port, you'll need to visit pages 33, 34 and 35 of the spec, which cover the PxCMD register. For each port supported by the HBA (check CAP.NP to know how many there are), you'll have to switch PxCMD.SUD high. After switching that bit high, you'll want to poll PxSSTS (page 36) to check the state of the PHY. You can check CAP.ISS to know what speed you can expect to see "come alive" on PxSSTS.
After spinning up the port, check PxSIG (pages 35 and 36). The value should be different from when you started; I don't recall exactly what you can expect it to become, but it will be different. When communication is actually established, the device sends the host an initial FIS. Without this first FIS, the HBA is unable to communicate with the device. (It's from this first FIS that the HBA sets the relevant bits in PxSIG.)
Finally, after all of this, you'll need to set PxCMD.FRE (page 34). This bit in the port command register enables FIS receive; if it is low, the HBA won't post the FISes coming back from the device, and communication won't work.
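Purely as a rough orientation of that ordering (not from the answer above: hba_mem_t and hba_port_t are hypothetical register structs in the style of the osdev.org article; the bit positions are from AHCI 1.3):

/* Bring-up order sketch. hba_mem_t/hba_port_t are placeholder structs
 * mapping the HBA registers; bit positions per AHCI 1.3. */
#include <stdint.h>

#define GHC_AE     (1u << 31)   /* AHCI enable */
#define GHC_IE     (1u << 1)    /* interrupt enable */
#define PXCMD_SUD  (1u << 1)    /* spin-up device */
#define PXCMD_FRE  (1u << 4)    /* FIS receive enable */
#define DET_ESTABLISHED 3u      /* PxSSTS.DET: PHY communication up */

void port_bring_up(volatile hba_mem_t *hba, volatile hba_port_t *port)
{
    hba->ghc |= GHC_AE | GHC_IE;   /* AHCI mode + interrupts */

    uint32_t old_sig = port->sig;  /* remember PxSIG before spin-up */
    (void)old_sig;                 /* compare it again afterwards */

    port->cmd |= PXCMD_SUD;        /* staggered spin-up, if CAP.SSS */

    /* wait for the PHY: PxSSTS.DET is bits 3:0 */
    while ((port->ssts & 0xFu) != DET_ESTABLISHED)
        ;

    port->cmd |= PXCMD_FRE;        /* now received FISes get posted */
}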
As I said in the beginning, I'm not sure this will answer all of your questions, but I hope it gets you on the right track. I'm going from memory on the steps that must be done to communicate effectively with a SATA device, and I may not have remembered them in full detail.
I hope this helps you.