Live capture via V4L2 using external trigger freezes on DQBUF - raspberry-pi

I'm trying to set up live capture of frames from a USB camera on a Raspberry Pi, using Video4Linux to grab the frames. I'm also using an external trigger because of the image processing: I need to capture a frame, process it, and only then capture a new one, so I request and allocate just a single buffer. I control the external trigger through one of the RPi's GPIO pins. My capture loop goes:
ioctl call to queue a buffer
external trigger realized by GPIO
ioctl call to dequeue a buffer
The program acts inconsistently - sometimes it freezes, sometimes it doesn't. Every time it freezes, it gets stuck on the ioctl call to dequeue a buffer. Any ideas what I might be doing wrong? Here is the code of the loop:
while( frames != 50 ) {
    // Queue the buffer
    if( ioctl(fd, VIDIOC_QBUF, &bufferinfo) < 0 ) {
        perror("VIDIOC_QBUF");
        exit(1);
    }

    // External trigger realized by a rising-edge signal on the GPIO pin
    digitalWrite(0, HIGH);
    delay(50);
    digitalWrite(0, LOW);
    delay(50);

    // Dequeue the buffer
    if( ioctl(fd, VIDIOC_DQBUF, &bufferinfo) < 0 ) {
        perror("VIDIOC_DQBUF");
        exit(1);
    }
}
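One way to keep this loop from blocking forever (and to see whether the driver ever delivers a frame at all) is to wait on the file descriptor with a timeout before dequeuing. The following is only a rough sketch of that idea, using the same fd and bufferinfo variables as above, not a fix for the trigger itself; it needs <sys/select.h> and <stdio.h>.
// Wait up to 2 seconds for a filled buffer before calling VIDIOC_DQBUF.
fd_set fds;
struct timeval tv;

FD_ZERO(&fds);
FD_SET(fd, &fds);
tv.tv_sec  = 2;
tv.tv_usec = 0;

int r = select(fd + 1, &fds, NULL, NULL, &tv);
if( r == -1 ) {
    perror("select");
    exit(1);
} else if( r == 0 ) {
    // No frame arrived: the trigger pulse probably came while no buffer was
    // queued, or the camera never exposed a frame.
    fprintf(stderr, "timeout waiting for frame\n");
} else if( ioctl(fd, VIDIOC_DQBUF, &bufferinfo) < 0 ) {
    perror("VIDIOC_DQBUF");
    exit(1);
}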

Related

Is there a way to do uart_receive without using "while" on stm32?

I want to receive data using UART_Receive.
However, if UART_Receive is not called inside the while loop, the data is not received properly.
I don't want to block other events and code from executing while waiting for UART data.
Is there any way to get the data whenever UART traffic occurs?
I am currently using UART_RECEIVE_DMA.
I suggest you use the UART idle interrupt:
void My_UART_IRQHandler(UART_HandleTypeDef *huart)
{
    if (__HAL_UART_GET_FLAG(huart, UART_FLAG_IDLE))
    {
        __HAL_UART_CLEAR_IDLEFLAG(huart);
        // data is stored in uartData
    }
}

void InitUART(void)
{
    __HAL_UART_ENABLE_IT(&huart, UART_IT_IDLE);
    HAL_UART_Receive_DMA(&huart, uartData, size);
}
Go to USARTx_IRQHandler in stm32xxxx_it.c and add a call to My_UART_IRQHandler:
void USART1_IRQHandler(void)
{
    /* USER CODE BEGIN USART1_IRQn 0 */
    /* USER CODE END USART1_IRQn 0 */
    HAL_UART_IRQHandler(&huart1);
    /* USER CODE BEGIN USART1_IRQn 1 */
    My_UART_IRQHandler(&huart1);
    /* USER CODE END USART1_IRQn 1 */
}
Don't forget to enable the UART interrupt in the NVIC and to configure DMA for the UART RX channel.
UART data can be received in three ways:
polling
Interrupt
DMA
The DMA and interrupt UART receive methods can be triggered any time UART traffic occurs, so using UART_RECEIVE_DMA and at the same time "imposing restrictions on executing certain events and other code when uart occurs" is kind of strange to me.
In the DMA method, you do not need to call the UART receive function in the while() loop. For learning, and for receiving fixed-length data, you can try this resource.
If the data length is unknown, you can use IDLE line detection. See this resource from ControllersTech for more details.
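As a rough sketch of the IDLE-line approach: newer HAL versions wrap it in HAL_UARTEx_ReceiveToIdle_DMA, which fires a callback with the number of bytes received once the line goes idle. The buffer name and size below are assumptions, not from the question.
#define RX_BUF_SIZE 64
static uint8_t rxBuf[RX_BUF_SIZE];

void StartReception(void)
{
    // Receive up to RX_BUF_SIZE bytes; the callback fires on IDLE or when the buffer is full.
    HAL_UARTEx_ReceiveToIdle_DMA(&huart1, rxBuf, RX_BUF_SIZE);
}

// Called by the HAL when the line goes idle (or the buffer fills up).
void HAL_UARTEx_RxEventCallback(UART_HandleTypeDef *huart, uint16_t Size)
{
    if (huart->Instance == USART1)
    {
        // Size bytes are now valid in rxBuf; process or copy them here.
        HAL_UARTEx_ReceiveToIdle_DMA(&huart1, rxBuf, RX_BUF_SIZE); // re-arm
    }
}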
One way is an interrupt-based circular FIFO queue.
Issue a receive call for a byte (or more); when the interrupt service routine is called, stuff the byte(s) into the FIFO, and then process the data in between.
This keeps data reception quick, so you are free to do the other things that seem to be the worry.
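A minimal sketch of that idea with the HAL, assuming a one-byte interrupt-driven receive (the buffer names are made up for illustration):
#define FIFO_SIZE 128
static uint8_t fifo[FIFO_SIZE];
static volatile uint16_t fifoHead, fifoTail;
static uint8_t rxByte;

void StartRx(void)
{
    // Arm a one-byte interrupt-driven receive.
    HAL_UART_Receive_IT(&huart1, &rxByte, 1);
}

// Called from the HAL ISR when one byte has arrived.
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART1)
    {
        fifo[fifoHead] = rxByte;                   // no overflow handling in this sketch
        fifoHead = (fifoHead + 1) % FIFO_SIZE;
        HAL_UART_Receive_IT(&huart1, &rxByte, 1);  // re-arm for the next byte
    }
}

// Call this from the main loop whenever convenient.
int FifoPop(uint8_t *out)
{
    if (fifoHead == fifoTail)
        return 0;                                  // empty
    *out = fifo[fifoTail];
    fifoTail = (fifoTail + 1) % FIFO_SIZE;
    return 1;
}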
If you're worried about servicing the other peripherals, you may want to go with a pre-emptive approach, using either a real-time operating system or priority-based interrupts. This will give you control over when exactly things are serviced.

I2C transmit with DMA and HAL not working

This seems to be a problem that is somewhat common, but I have been unsuccessful with any of the solutions I have found online. Specifically, I am trying to transmit a 1024-byte buffer (a full 128x64 px image) to an SSD1306 display via I2C/DMA and the HAL generated in CubeIDE. I am using an STM32L432 Nucleo board. I have no problem transmitting the buffer without DMA using HAL_I2C_Mem_Write.
Based on other questions I have seen, the problem lies in the fact that the DMA finishes while the I2C bus is still working on the transmit. I just don't know how to remedy this, and the examples given usually don't use the HAL (unfortunately, despite my efforts, I am not competent enough to correctly apply them to the HAL myself). I have tried using the interrupts for I2C and DMA with no luck; only about the first 254 bytes get transferred (just shy of two rows showing on the screen).
Here is my code for sending the buffer:
static void ssd1306_WriteMData_DMA(const uint8_t *data, uint16_t size)
{
    while(HAL_I2C_GetState(&hi2c1) != HAL_I2C_STATE_READY);
    HAL_I2C_Mem_Write_DMA(&hi2c1, I2C_ADDR, SSD1306_REG_MDAT, 1, (uint8_t*)data, size);
}
and the code for each interrupt handler:
void I2C1_EV_IRQHandler(void)
{
    /* USER CODE BEGIN I2C1_EV_IRQn 0 */
    if(I2C1->ISR & I2C_ISR_TCR){
        I2C1->CR2 |= (I2C_CR2_STOP);        // generate an I2C STOP
        I2C1->ICR |= (I2C_ICR_STOPCF);      // clear the STOP flag
        DMA1->IFCR |= DMA_IFCR_CTCIF6;      // clear the DMA transfer-complete flag
        DMA1_Channel6->CCR &= ~DMA_CCR_EN;  // stop DMA
    }
    /* USER CODE END I2C1_EV_IRQn 0 */
    //HAL_I2C_EV_IRQHandler(&hi2c1);
    /* USER CODE BEGIN I2C1_EV_IRQn 1 */
    /* USER CODE END I2C1_EV_IRQn 1 */
}

void DMA1_Channel6_IRQHandler(void)
{
    /* USER CODE BEGIN DMA1_Channel6_IRQn 0 */
    DMA1->IFCR |= DMA_IFCR_CTCIF6;          // clear the DMA transfer-complete flag
    DMA1_Channel6->CCR &= ~DMA_CCR_EN;      // stop DMA
    /* USER CODE END DMA1_Channel6_IRQn 0 */
    HAL_DMA_IRQHandler(&hdma_i2c1_tx);
    /* USER CODE BEGIN DMA1_Channel6_IRQn 1 */
    /* USER CODE END DMA1_Channel6_IRQn 1 */
}
I think that is all the pertinent code; let me know if there is something else I am missing. All of the initialization code for the peripherals was done through CubeMX, but I can post that, or the settings, if need be. I feel like it is something really simple that I'm missing, but this is a bit over my head to be honest, so I don't quite grasp exactly what's going on...
Thanks for any help!
The problem is in your custom DMA1_Channel6_IRQHandler and I2C1_EV_IRQHandler. Those functions will be called right after the I2C transfers 255 bytes, which is MAX_NBYTE_SIZE for NBYTES. The HAL already has all the required interrupt routines inside stm32l4xx_hal_i2c.c, and HAL_I2C_Mem_Write_DMA() already does the following:
sets the I2C transfer IRQ handler to I2C_Master_ISR_DMA;
checks whether the data size is larger than 255 bytes and, if so, uses reload mode;
sets the I2C DMA complete callback to I2C_DMAMasterTransmitCplt;
starts the DMA using HAL_DMA_Start_IT();
configures the I2C registers using I2C_TransferConfig().
The HAL driver will then handle all I2C+DMA interrupts using I2C_Master_ISR_DMA and I2C_DMAMasterTransmitCplt:
I2C_DMAMasterTransmitCplt will restart the DMA for each chunk of 255 (MAX_NBYTE_SIZE) or fewer bytes.
I2C_Master_ISR_DMA will reset the RELOAD/NBYTES registers using I2C_TransferConfig.
For the last block of data, I2C_AUTOEND_MODE is used.
So all you need to do is:
remove the "user code" from the DMA1_Channel6_IRQHandler and I2C1_EV_IRQHandler functions
enable the I2C1 event interrupt in the STM32 Device Configuration Tool
configure the DMA with data width byte/byte
perform a single call of HAL_I2C_Mem_Write_DMA(...) to start the transfer
check for HAL_I2C_STATE_READY before the next transfer
See HAL_I2C_Mem_Write_DMA, I2C_Master_ISR_DMA and I2C_DMAMasterTransmitCplt source code in stm32l4xx_hal_i2c.c to understand how it works.
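Put together, a minimal sketch of what the fixed code might look like (the handler bodies are just the stock HAL calls that CubeMX generates; I2C_ADDR and SSD1306_REG_MDAT are the question's own defines):
// stm32l4xx_it.c: only the stock HAL calls, no user code
void I2C1_EV_IRQHandler(void)
{
    HAL_I2C_EV_IRQHandler(&hi2c1);
}

void DMA1_Channel6_IRQHandler(void)
{
    HAL_DMA_IRQHandler(&hdma_i2c1_tx);
}

// application code: one call starts the whole 1024-byte transfer
static void ssd1306_WriteMData_DMA(const uint8_t *data, uint16_t size)
{
    while (HAL_I2C_GetState(&hi2c1) != HAL_I2C_STATE_READY)
        ; // wait until the previous transfer has fully completed
    HAL_I2C_Mem_Write_DMA(&hi2c1, I2C_ADDR, SSD1306_REG_MDAT, 1, (uint8_t *)data, size);
}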
About why the DMA finishes while the I2C is still working: the HAL driver sends I2C data over DMA in 255-byte chunks; it stops the DMA, starts the DMA, clears the I2C_CR2 NBYTES/RELOAD fields, and enables the DMA again. The DMA could run continuously using DMA_CIRCULAR mode, but currently that is not implemented in the HAL I2C drivers. Here is an example of using I2C with DMA_CIRCULAR mode:
// DMA enabled single time
hi2c1.hdmatx->XferCpltCallback = MY_I2C_DMAMasterTransmitCplt;
HAL_DMA_Start_IT(hi2c1.hdmatx, (uint32_t)&i2cBuffer, (uint32_t)&hi2c1.Instance->TXDR, I2C_BUFFER_SIZE);
MY_I2C_TransferConfig(&hi2c1, (uint16_t)DAC_ADDR, 254, I2C_RELOAD_MODE, I2C_GENERATE_START_WRITE); // in first call using I2C_GENERATE_START_WRITE
uint32_t tmpisr = I2C_IT_TCI;
__HAL_I2C_ENABLE_IT(&hi2c1, tmpisr);
hi2c1.Instance->CR1 |= I2C_CR1_TXDMAEN;
You still need to clear the I2C_CR2 NBYTES/RELOAD fields using MY_I2C_TransferConfig every 254 bytes (I do not use 255, so that the interrupt fires at an even index in the array):
static HAL_StatusTypeDef MY_I2C_Master_ISR_DMA(struct __I2C_HandleTypeDef *hi2c, uint32_t ITFlags, uint32_t ITSources)
{
    if (__HAL_I2C_GET_FLAG(&hi2c1, I2C_FLAG_TCR) == SET)
    {
        MY_I2C_TransferConfig(&hi2c1, (uint16_t)DAC_ADDR, 254, I2C_RELOAD_MODE, I2C_NO_STARTSTOP); // in repeated calls using I2C_NO_STARTSTOP
    }
    return HAL_OK;
}
With this approach the DMA circular buffer size is not limited to 255 bytes:
#define I2C_BUFFER_SIZE 1024
uint8_t i2cBuffer[I2C_BUFFER_SIZE];
Main.c should have a MY_I2C_TransferConfig() function, which is a copy-pasted version of the private function HAL_I2C_TransferConfig() from stm32l4xx_hal_i2c.c. On earlier STM32 microcontrollers there are no NBYTES/RELOAD fields, and I2C_CR2 does not need to be updated this way.
Using DMA in circular mode allows you to achieve the highest frame rate; you just need to fill the DMA buffer in time using the XferHalfCpltCallback and XferCpltCallback callbacks. Frames may be copied from a larger buffer using memcpy() or a DMA MEMTOMEM transfer.
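As a sketch of that refilling (frameData and frameOffset are hypothetical names for your own source of display data; i2cBuffer and I2C_BUFFER_SIZE are the ones defined above), the half-transfer callback refills the first half while the second half is still going out, and the transfer-complete callback (MY_I2C_DMAMasterTransmitCplt above) can refill the second half the same way:
#include <string.h>

extern const uint8_t frameData[];       // hypothetical source of display data
static volatile uint32_t frameOffset;   // wrap-around handling omitted in this sketch

static void MY_I2C_DMAHalfCplt(DMA_HandleTypeDef *hdma)
{
    // First half has been sent: refill it while the second half is transmitting.
    memcpy(&i2cBuffer[0], &frameData[frameOffset], I2C_BUFFER_SIZE / 2);
    frameOffset += I2C_BUFFER_SIZE / 2;
}

// Registered once, next to the XferCpltCallback assignment shown above:
// hi2c1.hdmatx->XferHalfCpltCallback = MY_I2C_DMAHalfCplt;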
You haven't said which STM32 you are using. They have different bit definitions (because the I2C peripherals in the earlier released parts were rubbish) but it looks like you are using one of the later ones.
Basically you can find what you need in the bit definitions for the I2C registers in the reference manual. If you are setting STOP before the transfer has finished, you need to look for a BUSY bit that gets cleared, or a BTF (byte transfer finished) bit that gets set, to tell you when it is time to send STOP.
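For example, on the newer (L4-style) I2C peripheral, this is a sketch of waiting for the bus to go idle before the next transfer; the flag names are from the CMSIS headers:
static void wait_until_i2c_idle(void)
{
    // BUSY is cleared by hardware once the STOP condition has gone out,
    // i.e. the previous transfer has really finished on the wire.
    while (I2C1->ISR & I2C_ISR_BUSY)
    {
        // spin; add a timeout in real code
    }
}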

STM32 ADC_DMA_UART data transfer

I am trying to implement the following scenario on an STM32F103C8 microcontroller.
On PB11 and PB10 I have an LED and a button connected, respectively. The LED blinks continuously with a 500 ms delay, but when the button is pressed it blinks 20 times with a 100 ms delay.
I have also connected a UART (PA3/PA2) and a potentiometer on the ADC (PA0). My task is to transfer the ADC reading to the UART in DMA mode.
The LED and button interrupts worked well, but as soon as I added the code for the ADC and USART handling it stopped working.
Could you please advise where my mistake is in the ADC-DMA-UART processing and how I can fix it?
Snippets from main.c:
//Buffer for ADC.
uint16_t buffer[5];

huart2.Instance->CR3 |= USART_CR3_DMAT;

//Transfer ADC readings to the buffer via DMA.
HAL_ADC_Start_DMA(&hadc1, (uint32_t*)buffer, 5);

while (1)
{
    //LED blinking
    HAL_GPIO_TogglePin(GPIOB, LED_Pin);
    HAL_Delay(500);
}

//ADC callback function - when the buffer is full, transfer it to the UART.
void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef* hadc) {
    HAL_DMA_Start_IT(&hdma_usart2_tx, (uint32_t)buffer, (uint32_t)&huart2.Instance->DR, sizeof(buffer));
}
//Interrupt handler for the button.
void EXTI15_10_IRQHandler(void) {
    HAL_GPIO_EXTI_IRQHandler(BT_Pin);
}

//Callback function for the button.
void HAL_GPIO_EXTI_Callback(uint16_t GPIO_Pin){
    if(GPIO_Pin == BT_Pin){
        for(volatile int i=20; i>0; i--){
            HAL_GPIO_TogglePin(GPIOB, LED_Pin);
            HAL_Delay(100);
        }
    }
}
The most likely reason to me is that the ADC interrupt handler (including ST library functions and the callback you presented) is triggered too frequently, so that the ISR of the EXTI triggered by the push button is suppressed (permanently or nearly permanently).
This can happen even more easily if you selected a minimal sample time and continuous conversion mode, because sampling and conversion then happen as fast as they can, and the IRQ that triggers your conversion-complete callback (HAL_ADC_ConvCpltCallback()) may run all the time.
In order to verify/falsify my assumption, please inspect
your interrupt priorities for ADC and EXTI (and others you may have on the system)
what happens if you select a longer ADC sampling period (see the sketch below), or if you slow down the clock source of the ADC (without slowing down the CPU clock, of course).
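A rough sketch of those two checks on an STM32F1; the channel, rank and IRQ numbers are assumptions about this particular board, not taken from the question:
// Give the ADC channel the longest sampling time available on the F1 family,
// so conversion-complete interrupts fire less often.
ADC_ChannelConfTypeDef sConfig = {0};
sConfig.Channel      = ADC_CHANNEL_0;               // potentiometer on PA0
sConfig.Rank         = ADC_REGULAR_RANK_1;
sConfig.SamplingTime = ADC_SAMPLETIME_239CYCLES_5;  // longest option
HAL_ADC_ConfigChannel(&hadc1, &sConfig);

// Make sure the button EXTI is not starved by the ADC/DMA interrupts:
// a numerically lower priority value preempts a higher one.
HAL_NVIC_SetPriority(EXTI15_10_IRQn, 0, 0);
HAL_NVIC_SetPriority(DMA1_Channel1_IRQn, 2, 0);     // ADC1 uses DMA1 channel 1 on the F103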
If this doesn't fix your problem, you may want to post another, refined question.

can not read temp from ds18b20

I am using an STM32 to read the DS18B20 with the HAL library.
I think the init is correct, but the read and write are not.
Can anyone tell me why it is not right?
For the write, here is the code:
if ((data & (1 << i)) != 0)
{
    MX_GPIO_Set(1);
    HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET);
    delay_ms(1);
    MX_GPIO_Set(0);
    delay_ms(60);
}
else
{
    MX_GPIO_Set(1);
    HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET);
    delay_ms(60);
    MX_GPIO_Set(0);
}
It writes one bit of data.
And here is the read code:
MX_GPIO_Set(1);
HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET);
delay_ms(2);
MX_GPIO_Set(0);
if (HAL_GPIO_ReadPin(GPIOA, GPIO_PIN_1) == GPIO_PIN_SET)
{
    value |= 1 << i;
}
delay_ms(60);
MX_GPIO_Set(1) means setting the GPIO to output mode.
Where is it wrong?
Please do not tell me to use a library or code from GitHub. I want to write the code myself so I can understand the DS18B20.
The DS18B20 uses the One-Wire protocol.
https://en.wikipedia.org/wiki/1-Wire
Each bit takes about 60 microseconds to transmit.
1s are HIGH during most of the transmission and 0s are LOW during most of the transmission. The start of the next bit is indicated by a pulse.
One thing that stands out to me is that you're using delay_ms (milliseconds), when you likely want to be using delay_us (microseconds).
Also, you're relying on the bit's timing to be exact (which it probably won't be). Instead, base your timing on the pulse.
It's more complicated than that.
When reading, you need to be continually checking the pin's value and interpreting what it means rather than putting in delays and hoping that the timing matches up.
I have not tested this code and it's incomplete.
This is just to illustrate a technique.
To start off, we're going to set our output to LOW and wait for the sensor to go LOW for at least 200us. (Ideally 500us; 200us is our minimum requirement.)
This is the "RESET" sequence that tells us that new data is about to start.
const int SleepIntervalMicroseconds = 5;

// Start off by setting our output to LOW (aka GPIO_PIN_RESET).
MX_GPIO_Set(1);
HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET);
// Switch back to reading mode.
MX_GPIO_Set(0);

const int ResetRequiredMicroseconds = 200;
int pinState = GPIO_PIN_SET;
int resetElapsedMicroseconds = 0;
while (pinState != GPIO_PIN_RESET || resetElapsedMicroseconds < ResetRequiredMicroseconds) {
    pinState = HAL_GPIO_ReadPin(GPIOA, GPIO_PIN_1);
    if (pinState != GPIO_PIN_RESET) {
        resetElapsedMicroseconds = 0;
        continue;
    }
    delay_us(SleepIntervalMicroseconds);
    // Note that the elapsed microseconds is really an estimate here.
    // The actual elapsed time will be 5us plus the amount of time it took to execute the code.
    resetElapsedMicroseconds += SleepIntervalMicroseconds;
}
This only gets us started.
After you've received the reset signal, you need to indicate to the other side that you've received it by setting your value HIGH for a certain amount of time.
I'm unable to comment on your code, because important parts like the GPIO setup and the source for the functions called are missing. However, if the bit timing gives you trouble, you might try this.
Using a UART to Implement a 1-Wire Bus Master
Then you don't have to deal with delays and timings, other than calculating the UART baud rates.
All STM32 UARTs support one-wire operation with an open-drain GPIO pin. Connect the I/O pin of the device to a UART TX pin. Configure the pin as an open-drain alternate-function output, with the alternate function number for the UART if applicable. Enable the UART and set it to single-wire operation in the control registers.
Set the UART baud rate to 7407 and send 0xF0; that's the reset pulse. Wait for RXNE and read the UART data register. If it's not 0xF0, then the device is answering with a presence pulse.
Set the UART baud rate to 133333, and you can start communicating. To send a 0 bit, write 0x00 to the UART data register. To send a 1 bit, write 0xFF to the UART data register. To receive a bit, write 0xFF, wait for RXNE, and read the data register. If the byte read is 0xFF, then it's a 1, otherwise (any other value read) it's a 0.
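A rough sketch of that scheme using blocking HAL calls; the handle name ow_uart and the helper names are assumptions, and the UART must already be initialized in single-wire half-duplex mode on an open-drain pin:
extern UART_HandleTypeDef ow_uart;   // configured for single-wire half-duplex, open-drain TX pin

static void ow_set_baud(uint32_t baud)
{
    ow_uart.Init.BaudRate = baud;
    HAL_HalfDuplex_Init(&ow_uart);   // re-initialize with the new baud rate
}

// Reset at 7407 baud: sending 0xF0 holds the line low long enough for a reset pulse.
// Returns 1 if a presence pulse was detected (the echoed byte differs from 0xF0).
static int ow_reset(void)
{
    uint8_t tx = 0xF0, rx = 0x00;
    ow_set_baud(7407);
    HAL_UART_Transmit(&ow_uart, &tx, 1, HAL_MAX_DELAY);
    HAL_UART_Receive(&ow_uart, &rx, 1, 10);
    return rx != 0xF0;
}

// One bit per UART byte at 133333 baud: 0x00 is a 0 slot, 0xFF is a 1/read slot.
static uint8_t ow_bit(uint8_t bit)
{
    uint8_t tx = bit ? 0xFF : 0x00, rx = 0x00;
    ow_set_baud(133333);
    HAL_UART_Transmit(&ow_uart, &tx, 1, HAL_MAX_DELAY);
    HAL_UART_Receive(&ow_uart, &rx, 1, 2);
    return (rx == 0xFF) ? 1 : 0;     // anything else means the device pulled the line low
}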

How to get NSOutputStream to send or flush packets immediately

I am having an issue with latency when connecting to a Bluetooth accessory using the External Accessory Framework. This is the code I use when sending data, with some custom logging to the console:
if( [stream hasSpaceAvailable] )
{
    NSLog( @"Space avail" );
}
else {
    NSLog( @"No space" );
}

while( [stream hasSpaceAvailable] && ( [_outputBuffer length] > 0 ) )
{
    /* write as many bytes as possible */
    NSInteger written = [stream write:[_outputBuffer bytes] maxLength:[_outputBuffer length]];
    NSLog( @"wrote %i out of %i bytes to the stream", written, [_outputBuffer length] );
    if( written == -1 )
    {
        /* error, bad */
        Log( @"Error writing bytes" );
        break;
    }
    else if( written > 0 )
    {
        /* remove the bytes from the buffer that were written */
        Log( @"erasing %i bytes", written );
        [_outputBuffer replaceBytesInRange:NSMakeRange( 0, written ) withBytes:nil length:0 ];
    }
}
This results in the following output, where "immediate pack buffer" is the payload:
immediate pack buffer-> 040040008
Space avail
wrote 10 out of 10 bytes to the stream
immediate pack buffer-> 040010005
No space
immediate pack buffer-> 030040007
No space
wrote 20 out of 20 bytes to the stream
immediate pack buffer-> 030010004
No space
immediate pack buffer-> 040000004
Space avail
wrote 20 out of 20 bytes to the stream
immediate pack buffer-> 030000003
Space avail
wrote 10 out of 10 bytes to the stream
immediate pack buffer-> 040040008
Space avail
wrote 10 out of 10 bytes to the stream
Notice how it repeatedly prints "No space", which means that hasSpaceAvailable is returning false and the data is forced to be buffered until it returns true.
1) What I need to know is: why is this happening? Is it waiting for an ACK from the BT hardware? If so, how do you remove this blocking?
2) How do you do this so it sends immediately and we basically stream the data in real time without buffering?
3) Is there a hidden API method that will disable this blocking?
This is a real problem because there cannot be any delay/latency in sending the data to the device; it must be sent immediately in order for the hardware to be in sync with the iPhone commands. Please help.
What you're asking for is impossible with most hardware (which will finish sending the current packet before starting the next one), and impossible with the usual "stream" paradigm (which requires that data is received in order, so is bandwidth-limited).
It is also physically impossible to have zero latency unless the source and destination are coincident.
The actual problem seems to be that the underlying stream only queues one packet at a time, even if the packet is only 10 bytes long. I don't know why; possibly because it's intended as a very simple protocol.
The usual way of dealing with such a queue is to register for the appropriate delegate callbacks and send as much data as you can when the stream has space available, instead of waiting for the next time you attempt to send data (which appears to be what you're doing).
The problem is that the HandleEvent delegate function is an asynchronous call, so the delegate is not hit every time.
What you can do is collect the commands in an array up front, open the session, and call the writeData function. Once writeData is called, you don't need the HandleEvent function to be hit for every command.
Increment a count in the writeData function for each array item; until count == arrayItems, the delegate is not hit.
That way all the commands from the list are sent one by one.
I am facing the same issue, but in a different scenario.
Scenario: the iPhone app is able to communicate with the PED when it connects for the first time. But when the PED battery dies, or it is switched off and then on again, the app is not able to communicate with the PED in spite of an active session and a valid output stream. The output stream says it does not have space to write anything.
Solution: when the PED gets switched off, the app gets notified, and at that moment I make the app kill the EASession and create it again when the PED reconnects. I'm not sure whether this is the best solution; please suggest another solution if there is one.