Interfacing TFT screen with STM32F446 using display bus interface

I'm trying to understand how to interface a TFT screen module with an STM32F4 chip on a custom PCB.
Here is the module and its basic info.
To write commands and data to the screen, the ILI9481 driver on the screen module uses the Display Bus Interface (DBI), where data is sent 8 or 16 bits at a time over parallel data wires.
Looking at library examples, I understand (and please correct me if I am wrong) that in order to send a one-byte command, the MCU simply sets its digital pins high or low, depending on the command. For example, command 0x02 in 8-bit communication would be 00000010, where 0 is a digital low on the chip's GPIO pin and 1 is a digital high, meaning 1 of the 8 wires is active (logic high). I hope I understand this correctly.
Now, as I looked over examples, these digital pins are usually all on the same GPIO port. And if I understand correctly, each GPIO port has a register called BSRR, through which you can manipulate the logic levels of the port's pins. If the data pins are all on the same GPIO port, I assume this would work (from the example, where c is the command byte):
void STM32_TFT_8bit::write8(uint8_t c) {
    // BRR or BSRR avoid a read-mask-write cycle
    // BSRR is 32 bits wide: 1's in the most significant 16 bits signify pins to reset (clear),
    // 1's in the least significant 16 bits signify pins to set high; 0's mean 'do nothing'
    TFT_DATA->regs->BSRR = ((~c) << 16) | (c); // set pins to the 8-bit value
    WR_STROBE;
}
However, on my PCB the data pins of the screen module are split across different ports.
So my question is: how would I do the same thing, i.e. send a command by manipulating the logic levels? I assume I could set/reset my pins one by one, depending on the command, but how would it look with the BSRR registers?
If my data pins are as follows:
D0 -> PC12
D1 -> PC11
D2 -> PC10
D4 -> PA12
D5 -> PA11
D6 -> PA10
D7 -> PA9
Would a command of 0x9D (0b10011101) written through the registers look something like this?
GPIOA->regs->BSRR = 0b0001101000000000; // A port: turn on PA9, PA11, PA12
GPIOC->regs->BSRR = 0b0001010000000000; // C port: turn on PC10 and PC12

how would it look with the BSRR registers?
A bitmask can be applied to the value that is written to the BSRR, for example like this:
/* set/reset selected GPIO output pins, ignore the rest */
static inline void _gpio_write(GPIO_TypeDef* GPIOx, uint16_t state, uint16_t mask)
{
    GPIOx->BSRR = ((uint32_t)(~state & mask) << 16) | (state & mask);
}
The data bits need to be rearranged before writing them to the GPIO output registers, for example like this:
#define BITS(w,b) (((w) & (1 << (b))) >> (b))
/* write a data/command byte to the data bus DB[7:0] of custom ILI9481 board
used pin assignment: D0 -> PC12, D1 -> PC11, D2 -> PC10, (D3 -> PC1) (?)
D4 -> PA12, D5 -> PA11, D6 -> PA10, D7 -> PA9 */
static void _write_data_to_pins(uint8_t data)
{
    const uint16_t mask_c = 1<<12 | 1<<11 | 1<<10 | 1<<1; /* 0x1c02 */
    const uint16_t mask_a = 1<<12 | 1<<11 | 1<<10 | 1<<9; /* 0x1e00 */

    _gpio_write(GPIOC, (uint16_t)(BITS(data, 0) << 12 | BITS(data, 1) << 11 |
                                  BITS(data, 2) << 10 | BITS(data, 3) << 1), mask_c);
    _gpio_write(GPIOA, (uint16_t)(BITS(data, 4) << 12 | BITS(data, 5) << 11 |
                                  BITS(data, 6) << 10 | BITS(data, 7) << 9), mask_a);
}
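With this in place, a write8() equivalent for the scattered pins is just the data write followed by the WR strobe; a minimal sketch, reusing the WR_STROBE macro from the question:

/* sketch: write8 equivalent for the scattered pin assignment */
static void _write8(uint8_t c)
{
    _write_data_to_pins(c);
    WR_STROBE; /* pulse the WR line as in the question's library code */
}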
Test:
/* just for testing: read the written data bits back and arrange them in a byte */
static uint8_t _read_data_from_pins(void)
{
    const uint32_t reg_c = GPIOC->ODR;
    const uint32_t reg_a = GPIOA->ODR;
    return (uint8_t)(BITS(reg_c, 12) << 0 | BITS(reg_c, 11) << 1 |
                     BITS(reg_c, 10) << 2 | BITS(reg_c, 1) << 3 |
                     BITS(reg_a, 12) << 4 | BITS(reg_a, 11) << 5 |
                     BITS(reg_a, 10) << 6 | BITS(reg_a, 9) << 7);
}
/* somewhere in main loop of test project */
{
    uint8_t d = 0xff;
    do {
        _write_data_to_pins(d);
        if (d != _read_data_from_pins()) {
            Error_Handler();
        }
    } while (d--);
}
(Note: Only 7 of the 8 data pins DB[7:0] were listed in the question, PC1 was assigned to data pin D3 here.)
(Note: Most of these bit-shifts can be easily optimized out by the compiler; use at least -O1 to get somewhat compact results with GCC.)
GPIOA->regs->BSRR = 0b0001101000000000; // A port: turn on PA9, PA11, PA12
GPIOC->regs->BSRR = 0b0001010000000000; // C port: turn on PC10 and PC12
These two code lines do what is stated in the comments, but they will leave all the other pins unchanged. The resulting output will therefore depend on the previous state of the output data register. For the LOW data pins, the corresponding GPIO port bits in BSRR[31:16] need to be set to 1 in order to update all 8 data bus lines at once.
To answer the actual question:
No, the output on the data bus will not be 0x9D (0b1001'1101) after writing the two quoted bit patterns to the two BSRR registers. In my case, it would look like this (please correct me if I'm wrong):
/* write 0x9D (0b1001'1101) to the data bus
used pin assignment: D0 -> PC12, D1 -> PC11, D2 -> PC10, (D3 -> PC1) (?)
D4 -> PA12, D5 -> PA11, D6 -> PA10, D7 -> PA9 */
GPIOC->BSRR = 0x08001402; /* = 0b00001000'00000000'00010100'00000010 */
GPIOA->BSRR = 0x0C001200; /* = 0b00001100'00000000'00010010'00000000 */

Suppose command is the byte to send (I hope there is a strobe somewhere...).
I would simply write 8 lines of code like this (if I understand the STM32 port registers correctly):
if (command & 1) GPIOC->BSRR = 1 << 12; else GPIOC->BSRR = 1 << (12 + 16);
if (command & 2) GPIOC->BSRR = 1 << 11; else GPIOC->BSRR = 1 << (11 + 16);
...
if (command & 128) GPIOA->BSRR = 1 << 9; else GPIOA->BSRR = 1 << (9 + 16);
(Note that BSRR is write-only: writing 1 to bit n sets pin n, and writing 1 to bit n+16 resets it, so plain assignments are used here; read-modify-write operators like |= and &= do not work on BSRR.)
Maybe this is quite raw, but it works and it is easy to understand (i.e. it is harder to make typos). Next time, tell the hardware designer to arrange the wires a little better... it is hard to think of something worse than this; the bits seem reversed just to see if the software guy can cope with them!
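For reference, a complete sketch of that per-bit approach, using the pin mapping from the question (D3 was not listed there, so it is left out) and the WR_STROBE macro from the quoted library code:

/* sketch: per-bit BSRR writes for the scattered data bus;
   pin mapping taken from the question, D3 omitted (pin not listed) */
static void write8_bitwise(uint8_t command)
{
    if (command & 0x01) GPIOC->BSRR = 1u << 12; else GPIOC->BSRR = 1u << (12 + 16); /* D0 */
    if (command & 0x02) GPIOC->BSRR = 1u << 11; else GPIOC->BSRR = 1u << (11 + 16); /* D1 */
    if (command & 0x04) GPIOC->BSRR = 1u << 10; else GPIOC->BSRR = 1u << (10 + 16); /* D2 */
    /* D3: add here once its pin is known */
    if (command & 0x10) GPIOA->BSRR = 1u << 12; else GPIOA->BSRR = 1u << (12 + 16); /* D4 */
    if (command & 0x20) GPIOA->BSRR = 1u << 11; else GPIOA->BSRR = 1u << (11 + 16); /* D5 */
    if (command & 0x40) GPIOA->BSRR = 1u << 10; else GPIOA->BSRR = 1u << (10 + 16); /* D6 */
    if (command & 0x80) GPIOA->BSRR = 1u << 9;  else GPIOA->BSRR = 1u << (9 + 16);  /* D7 */
    WR_STROBE; /* pulse WR as in the question's library code */
}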

Related

Controlling STM32F3 GPIOs without the Cube MX libraries

I am adapting this bootloader for STM32F373CC to my device. To indicate that the device is powered but in bootloader mode, I'd like to turn on some of the status LEDs. However, this bootloader doesn't use the STM Cube MX libraries, so I have to code it low-level. The header file stm32f373xc.h is included, so I can use expressions like GPIOB_BASE.
I tried the following first thing in main(), but unfortunately it doesn't work:
// turn on GPIOB clock: SET_BIT(RCC->AHBENR, RCC_AHBENR_GPIOBEN);
uint32_t* rcc = (uint32_t*)RCC_BASE;
*(rcc+0x14) |= RCC_AHBENR_GPIOBEN; // AHBENR is at offset 0x14
// configure Port B, pins 4 and 5 to GPIO, Open Drain, low.
uint32_t* gpiob = (uint32_t*)GPIOB_BASE;
*(gpiob) |= 0x500; // GPIO output mode --- GPIOB_MODER = 0x500; (bits 11:8 = 0101), offset 0
*(gpiob) &= ~0xA00;
*(gpiob+0x04) |= 0x30; // output type open drain --- GPIOB_OTYPER = 0x30; (bits 5:4 = 11), offset 0x04
*(gpiob+0x0c) &= ~0xF00; // pull up/down off --- GPIOB_PUPDR = 0x0; (bits 11:8 = 0000), offset 0x0c
*(gpiob+0x14) &= ~0x30; // output low --- GPIOB_ODR = 0x0; (bits 5:4 = 00), offset 0x14
Any ideas what I'm missing? How can I find out if the problem is the clocking of the Port B, or the pin configuration?
I found this similar post, but the first answer requires the entire CMSIS, and the second answer lacks comments, so I don't fully understand what they are doing.
I hope that you know that open-drain outputs require a pull-up (internal or external).
Use CMSIS definitions, not magic numbers and operations.
requires the entire CMSIS
And what is the problem? CMSIS does not add any overhead to your code, only handy definitions and inline functions, which do not change the size of the code if not used.
Also, HAL has very handy macros that are useful even if you do not use the HAL library itself (it also will not increase the code size by even a single byte).
I will not check your magic offsets and numbers.
First error: after enabling the peripheral clock you need to wait; this is described in the Reference Manual. You do not wait, so your first MODER operation has no effect. The HAL macros read back the register to make sure that the operation has completed.
Example from STM32L4:
#define __HAL_RCC_GPIOB_CLK_ENABLE() do { \
        __IO uint32_t tmpreg; \
        SET_BIT(RCC->AHB2ENR, RCC_AHB2ENR_GPIOBEN); \
        /* Delay after an RCC peripheral clock enabling */ \
        tmpreg = READ_BIT(RCC->AHB2ENR, RCC_AHB2ENR_GPIOBEN); \
        UNUSED(tmpreg); \
    } while(0)
Then use the CMSIS register typedefs and definitions.
#define PIN4 4
#define PIN5 5
GPIOB->MODER &= ~((0b11 << (2 * PIN5)) | (0b11 << (2 * PIN4)));
GPIOB->MODER |= ((0b01 << (2 * PIN5)) | (0b01 << (2 * PIN4)));  // general-purpose output
GPIOB->OTYPER &= ~((1 << PIN4) | (1 << PIN5));
GPIOB->OTYPER |= (1 << PIN4) | (1 << PIN5);                     // open drain
GPIOB->BSRR = (1 << (PIN4 + 16)) | (1 << (PIN5 + 16));          // set the pins low
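For the F373 from the question, GPIOB sits on the AHB bus, so the clock enable goes through RCC->AHBENR. Put together, a minimal sketch of the whole sequence (assuming external pull-ups on the LEDs, as noted above, and the PIN4/PIN5 defines from the previous snippet):

#include "stm32f373xc.h"

/* sketch for the bootloader case above: enable the GPIOB clock with the
   read-back delay, then make PB4/PB5 open-drain outputs driven low */
static void status_leds_on(void)
{
    volatile uint32_t tmpreg;

    RCC->AHBENR |= RCC_AHBENR_GPIOBEN;           /* enable GPIOB clock */
    tmpreg = RCC->AHBENR & RCC_AHBENR_GPIOBEN;   /* read back: clock-enable delay */
    (void)tmpreg;

    GPIOB->MODER &= ~((0b11 << (2 * PIN4)) | (0b11 << (2 * PIN5)));
    GPIOB->MODER |= (0b01 << (2 * PIN4)) | (0b01 << (2 * PIN5));    /* output mode */
    GPIOB->OTYPER |= (1 << PIN4) | (1 << PIN5);                     /* open drain */
    GPIOB->PUPDR &= ~((0b11 << (2 * PIN4)) | (0b11 << (2 * PIN5))); /* no pull up/down */
    GPIOB->BSRR = (1 << (PIN4 + 16)) | (1 << (PIN5 + 16));          /* drive low */
}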

STM32: USART alternative function not working

I'm using the STM32F767ZI, and I'm trying to send test data over the USART peripheral. I've done the same configuration as I always do on any device, but this time it does not output anything... I cannot find the mistake; can someone help?
The clock setup
// Enables TIM8 (Delay), USART1 (STDOUT)
RCC->APB2ENR |= (RCC_APB2ENR_TIM8EN
                 | RCC_APB2ENR_USART1EN);
// Enables GPIOA, GPIOB, GPIOC, GPIOD, GPIOE, GPIOF, DMA1
RCC->AHB1ENR |= (RCC_AHB1ENR_GPIOAEN
                 | RCC_AHB1ENR_GPIOBEN
                 | RCC_AHB1ENR_GPIOCEN
                 | RCC_AHB1ENR_GPIODEN
                 | RCC_AHB1ENR_GPIOEEN
                 | RCC_AHB1ENR_GPIOFEN
                 | RCC_AHB1ENR_DMA1EN);
The initialization code
// Makes A8 (TX) and A9 (RX) Alternative Function
GPIOA->MODER &= ~(GPIO_MODER_MODER8_Msk
                  | GPIO_MODER_MODER9_Msk);
GPIOA->MODER |= ((0x2 << GPIO_MODER_MODER8_Pos)
                 | (0x2 << GPIO_MODER_MODER9_Pos));
// Selects AF7 for both A8 (TX) and A9 (RX).
GPIOA->AFR[1] &= ~(GPIO_AFRH_AFRH0_Msk
                   | GPIO_AFRH_AFRH1_Msk);
GPIOA->AFR[1] |= ((7 << GPIO_AFRH_AFRH0_Pos)
                  | (7 << GPIO_AFRH_AFRH1_Pos));
// Selects very high speed for A8 (TX) and A9 (RX)
GPIOA->OSPEEDR &= ~(GPIO_OSPEEDR_OSPEEDR8_Msk
                    | GPIO_OSPEEDR_OSPEEDR9_Msk);
GPIOA->OSPEEDR |= ((0x3 << GPIO_OSPEEDR_OSPEEDR8_Pos)
                   | (0x3 << GPIO_OSPEEDR_OSPEEDR9_Pos));
// Calculates and sets the baud rate.
m_USART->BRR = (((2 * clk) + baud) / (2 * baud));
// Configures the USART peripheral further.
m_USART->CR1 = USART_CR1_TE   // Transmit Enable
             | USART_CR1_RE   // Receive Enable
             | USART_CR1_UE;  // USART Enable
and the write function:
*reinterpret_cast<uint8_t *>(m_USART->TDR) = c;
while (!(m_USART->ISR & USART_ISR_TC));
The problem you have is that you set the wrong pins: for USART1 it should be PA9 (TX) and PA10 (RX).
Your write is also wrong.
It should be:
*reinterpret_cast<volatile uint8_t *>(&m_USART->TDR) = c;
or C style:
*(volatile uint8_t *)(&m_USART->TDR) = c;
You are also checking the wrong flag.
TC is important if you want to disable the peripheral after the transmission. In normal conditions use the TXE flag instead:
while (!(m_USART->ISR & USART_ISR_TXE));
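Putting those fixes together, a sketch of the corrected setup and write (assuming m_USART is USART1 and the same CMSIS names as in the question; note that PA9/PA10 sit in AFR[1] fields 1 and 2, not 0 and 1):

// sketch: corrected pins, USART1 TX = PA9, RX = PA10, both AF7
GPIOA->MODER &= ~(GPIO_MODER_MODER9_Msk | GPIO_MODER_MODER10_Msk);
GPIOA->MODER |= ((0x2 << GPIO_MODER_MODER9_Pos)
                 | (0x2 << GPIO_MODER_MODER10_Pos));
GPIOA->AFR[1] &= ~(GPIO_AFRH_AFRH1_Msk | GPIO_AFRH_AFRH2_Msk);
GPIOA->AFR[1] |= ((7 << GPIO_AFRH_AFRH1_Pos)
                  | (7 << GPIO_AFRH_AFRH2_Pos));

// sketch: corrected write, volatile byte access plus TXE wait
*(volatile uint8_t *)(&m_USART->TDR) = c;
while (!(m_USART->ISR & USART_ISR_TXE));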
Your reinterpret_cast is unnecessary and incorrect.
I assume you actually wrote *reinterpret_cast<uint8_t *>(&m_USART->TDR) = c; but that is still wrong.
The person who wrote the standard device header has taken great care to make sure that USARTx->TDR already has the correct type; I strongly advise you to trust them and not cast it! In this particular case they will have made it volatile and you have not, so it is possible that the compiler thinks it can optimize by not bothering to perform a write to something that you never read back.
The reason you probably got away with this on other STM32 parts is that their UARTs have just a single DR register for both transmit and receive, so reading DR for reception made the compiler think it couldn't eliminate the write.
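In other words, with the header's own declaration a plain assignment already does the right thing (sketch):

m_USART->TDR = c; // TDR is declared volatile in the device header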
Also, I don't know about this part, but many STM32s need an extra cycle between writing to RCC->xxxENR and using the respective peripheral; this is usually done with a read of the same register, e.g.:
RCC->AHB1ENR |= RCC_AHB1ENR_GPIOAEN;
(void)RCC->AHB1ENR;
// now safe to access GPIOA registers

STM32 SPI data is sent the reverse way

I've been experimenting with writing to an external EEPROM over SPI and I've had mixed success. The data does get shifted out, but in the opposite order. The EEPROM requires a start bit and then an opcode, which is essentially a 2-bit code for read, write and erase; the start bit and the opcode are combined into one byte. I'm creating a 32-bit unsigned int and bit-shifting the values into it. When I transmit it, I see the actual data first, then the SB+opcode, and then the memory address. How do I reverse this so that the opcode comes first, then the memory address and then the actual data? As seen in the image, the data is BCDE, the SB+opcode is 07 and the memory address is 3F. The correct sequence should be 07, 3F and then BCDE (I think!).
Here is the code:
uint8_t mem_addr = 0x3F;
uint16_t data = 0xBCDE;
uint32_t write_package = (ERASE << 24 | mem_addr << 16 | data);
while (1)
{
    /* USER CODE END WHILE */
    /* USER CODE BEGIN 3 */
    HAL_SPI_Transmit(&hspi1, &write_package, 2, HAL_MAX_DELAY);
    HAL_Delay(10);
}
/* USER CODE END 3 */
It looks like your SPI interface is set up to process 16-bit half-words at a time. Therefore it would make sense to break the data to be sent into 16-bit half-words too; that takes care of the ordering.
uint8_t mem_addr = 0x3F;
uint16_t data = 0xBCDE;
uint16_t write_package[2] = {
    (ERASE << 8) | mem_addr,
    data
};
HAL_SPI_Transmit(&hspi1, (uint8_t *)write_package, 2, HAL_MAX_DELAY);
EDIT: Added an explicit cast. As noted in the comments, without the explicit cast it wouldn't compile as C++ code, and it would cause warnings as C code.
You're packing your information into a 32-bit integer; line 3 of your code decides which bits of data are placed where in the word. To change the order, you can replace that line with:
uint32_t write_package = ((data << 16) | (mem_addr << 8) | (ERASE));
That shifts data 16 bits left into the most significant 16 bits of the word, shifts mem_addr up by 8 bits and ORs it in, and then ORs ERASE into the least significant bits.
Your problem is endianness.
By default the STM32 uses little endian, so the lowest byte of the uint32_t is stored at the first address.
If I'm right, this is the declaration of the transmit function you are using:
HAL_StatusTypeDef HAL_SPI_Transmit(SPI_HandleTypeDef *hspi, uint8_t *pData, uint16_t Size, uint32_t Timeout)
It requires a pointer to uint8_t as data (and not a uint32_t), so you should get at least a warning when you compile your code.
If you want to write code that is independent of the endianness used, you should store your data in an array instead of one "big" variable:
uint8_t write_package[4];

write_package[0] = ERASE;               /* SB + opcode */
write_package[1] = mem_addr;
write_package[2] = (data >> 8) & 0xFF;  /* data high byte */
write_package[3] = (data & 0xFF);       /* data low byte */
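The array would then be sent with a byte count matching its length (sketch, assuming the SPI data size is configured as 8 bits):

/* sketch: Size counts data items, i.e. 4 bytes here */
HAL_SPI_Transmit(&hspi1, write_package, 4, HAL_MAX_DELAY);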

PCI configuration space registers - write values

I am developing a network driver (RTL8139) for a self-made operating system and have problems writing values to the PCI configuration space registers.
I want to change the value of the interrupt line (offset 0x3C) to get another IRQ number, and to enable bus mastering (set bit 2) in the command register (offset 0x04).
When I read the values back, I see that my values are correctly written. But instead of using the new IRQ number (in my case 6) it uses the old value (11).
Bus mastering for DMA is not working either: the packets I would like to send have the correct size (set by I/O port), but they have no content (all values are just 0, although the transmit buffer is in physical memory and holds non-zero values; I double-checked the physical address). I need bus mastering so the network controller can access the physical memory holding my receive/transmit buffers (it is required by the RTL8139).
Do I have to do something else to commit my changes to the PCI device?
As emulator I use QEMU.
For reading/writing I wrote the following functions:
uint16_t pci_read_word(uint16_t bus, uint16_t slot, uint16_t func, uint16_t offset)
{
    uint64_t address;
    uint64_t lbus = (uint64_t)bus;
    uint64_t lslot = (uint64_t)slot;
    uint64_t lfunc = (uint64_t)func;
    uint16_t tmp = 0;

    address = (uint64_t)((lbus << 16) | (lslot << 11) |
                         (lfunc << 8) | (offset & 0xfc) | ((uint32_t)0x80000000));
    outportl(0xCF8, address);
    tmp = (uint16_t)((inportl(0xCFC) >> ((offset & 2) * 8)) & 0xffff);
    return (tmp);
}
uint16_t pci_write_word(uint16_t bus, uint16_t slot, uint16_t func, uint16_t offset, uint16_t data)
{
    uint64_t address;
    uint64_t lbus = (uint64_t)bus;
    uint64_t lslot = (uint64_t)slot;
    uint64_t lfunc = (uint64_t)func;
    uint32_t tmp = 0;

    address = (uint64_t)((lbus << 16) | (lslot << 11) |
                         (lfunc << 8) | (offset & 0xfc) | ((uint32_t)0x80000000));
    outportl(0xCF8, address);
    tmp = inportl(0xCFC);
    tmp &= ~(0xFFFF << ((offset & 0x2) * 8)); // clear the word at the offset
    tmp |= data << ((offset & 0x2) * 8);      // insert the data at the offset
    outportl(0xCF8, address);                 // set the address again just to be sure
    outportl(0xCFC, tmp);                     // write data
    return pci_read_word(bus, slot, func, offset); // read back the data
}
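For example, enabling bus mastering with these helpers is a read-modify-write of the command register (sketch; bus/slot/func are the device's coordinates, bit 2 is Bus Master Enable):

/* sketch: set bit 2 (Bus Master Enable) of the PCI command register */
uint16_t cmd = pci_read_word(bus, slot, func, 0x04);
pci_write_word(bus, slot, func, 0x04, cmd | (1 << 2));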
I hope somebody can help me.
The "interrupt line" field (at offset 0x03C in PCI configuration space) literally does nothing.
The Full Story
PCI cards can use up to 4 "PCI IRQs" at the PCI slot, and use them in order (so if you have ten PCI cards that each have one IRQ, they'll all use the first PCI IRQ at their slot).
There's a tricky "barber pole" arrangement to connect "PCI IRQ at the slot" to "PCI IRQ at the host controller" that is designed to reduce IRQ sharing. If you have ten PCI cards that all have one IRQ, they'll all use the first PCI IRQ at the slot, but the first IRQ at each PCI slot will be connected to a different PCI IRQ at the host controller. All of this is hard-wired and cannot be changed by software.
To complicate things more: to connect the PCI IRQs (from the PCI host controller) to the legacy PIC chips, a special "PCI IRQ router" was added. In theory the configuration of the "PCI IRQ router" can be changed by software (if you can find the documentation with the table that describes the location, capabilities and restrictions of the "PCI IRQ router").
Without the firmware's help it'd be impossible for an OS to figure out which PIC chip input a PCI device actually uses. For that reason, the firmware figures it out during boot and then stores "PIC chip input number" somewhere for the OS to find. That is what the "interrupt line" register is - it's just an 8-bit register that can store anything you like (that the BIOS/firmware uses to store "PIC chip input number").
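So rather than writing a new value to offset 0x3C, read back the number the firmware stored there; for example, with the helper functions from the question (sketch):

/* sketch: the interrupt line is the low byte of the word at offset 0x3C */
uint8_t irq_line = (uint8_t)(pci_read_word(bus, slot, func, 0x3c) & 0xff);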

Usage of two DMA ADC channels in dual regular simultaneous mode STM32

I want to implement dual regular simultaneous mode with ADC1/ADC2 and two ADC DMA channels on the STM32F303 Discovery.
From the CubeMX examples:
Usage of two DMA channels (one for ADC master, one for ADC slave) is also possible: this is the recommended configuration in case of high ADC conversions rates and applications using other DMA channels intensively.
According to AN4195
When using the DMA, there are two possible cases: • Use of two separate DMA channels for master and slave. In this case, the MDMA[1:0] bits must be kept cleared. The first DMA channel is used to read the master ADC converted data from ADC_DR, and the DMA requests are generated at each EOC event of the master ADC. The second DMA channel is used to read the slave ADC converted data from ADC_DR, and the DMA requests are generated at each EOC event of the slave ADC.
For one channel, the code is:
HAL_ADCEx_Calibration_Start(&hadc1, ADC_SINGLE_ENDED);
HAL_ADCEx_Calibration_Start(&hadc2, ADC_SINGLE_ENDED);
HAL_ADC_Start(&hadc2);
HAL_ADCEx_MultiModeStart_DMA(&hadc1, (uint32_t*)buffer, 3);
But how can we run two channels? As far as I can understand, HAL_ADCEx_MultiModeStart_DMA is for one DMA channel only.
Something like the following, as used for independent mode, is not working:
HAL_ADCEx_Calibration_Start(&hadc1, ADC_SINGLE_ENDED);
HAL_ADCEx_Calibration_Start(&hadc2, ADC_SINGLE_ENDED);
HAL_ADC_Start(&hadc2);
HAL_ADC_Start_DMA(&hadc1,(uint32_t*)ADC1_data,sizeof(ADC1_data)/sizeof(ADC1_data[0]));
HAL_ADC_Start_DMA(&hadc2,(uint32_t*)ADC2_data,sizeof(ADC2_data)/sizeof(ADC2_data[0]));
I'm not 100% sure if this also applies to the F3 series, but I've written this tutorial for the F103C8 regarding ADC dual regular simultaneous mode:
http://www.bepat.de/2020/11/27/stm32f103c8-adc-dual-regular-simultaneous-mode/
Maybe you'll find it helpful.
To cut a long story short: I guess you are starting the ADCs the wrong way.
ADC2 needs to be started in normal mode BEFORE ADC1, with:
HAL_ADC_Start(&hadc2);
ADC1 is started afterwards with:
HAL_ADCEx_MultiModeStart_DMA(&hadc1, ADC_Buffer, ADC_BufferSize);
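Combined with the calibration calls from the question, the start-up order would be (sketch; ADC_Buffer and ADC_BufferSize are placeholders):

/* sketch: calibrate both ADCs, start the slave first, then the master with DMA */
HAL_ADCEx_Calibration_Start(&hadc1, ADC_SINGLE_ENDED);
HAL_ADCEx_Calibration_Start(&hadc2, ADC_SINGLE_ENDED);
HAL_ADC_Start(&hadc2);                       /* slave (ADC2) first */
HAL_ADCEx_MultiModeStart_DMA(&hadc1, (uint32_t *)ADC_Buffer, ADC_BufferSize);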
Funny - great HAL library :). This is my working code: interleaved mode, one DMA half-word transfer per two conversions (master and slave), 8-bit resolution. Register version:
DMA1_Channel1->CPAR = (uint32_t)&(ADC12_COMMON->CDR);
DMA1_Channel1->CMAR = (uint32_t)&obuff[0][0];
DMA1_Channel1->CNDTR = 1 * 1024;
DMA1_Channel1->CCR = DMA_CCR_MINC | DMA_CCR_TCIE | DMA_CCR_EN | DMA_CCR_MSIZE_0
                   | DMA_CCR_PSIZE_0 | DMA_CCR_TEIE | DMA_CCR_PL_Msk;

ADC12_COMMON->CCR = (0b11 << ADC12_CCR_MDMA_Pos) | (0b111 << ADC12_CCR_MULTI_Pos);

ADC1->CFGR = ADC_CFGR_DMAEN | (0b10 << ADC_CFGR_RES_Pos);  // DMA enabled, 8-bit resolution
ADC1->CFGR &= ~(ADC_CFGR_EXTEN_Msk | ADC_CFGR_EXTSEL_Msk); // software trigger only, converting as fast as possible
ADC1->CFGR |= ADC_CFGR_CONT;
ADC1->SMPR1 = 0;
ADC1->SMPR2 = 0;
ADC1->SQR1 &= ~ADC_SQR1_L_Msk;          // one conversion in the regular sequence
ADC1->SQR1 &= ~ADC_SQR1_SQ1_Msk;
ADC1->SQR1 |= (1 << ADC_SQR1_SQ1_Pos);  // channel 1 as the first conversion

ADC2->CFGR = ADC_CFGR_DMAEN | (0b10 << ADC_CFGR_RES_Pos);
ADC2->SMPR1 = 0;
ADC2->SMPR2 = 0;
ADC2->SQR1 &= ~ADC_SQR1_L_Msk;
ADC2->SQR1 &= ~ADC_SQR1_SQ1_Msk;
ADC2->SQR1 |= (1 << ADC_SQR1_SQ1_Pos);

ADC1->CR |= ADC_CR_ADSTART;             // start the master; the slave follows in multimode
The DMA1_Channel1 interrupt is called when the DMA finishes the transfers.
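A matching transfer-complete handler might look like this (sketch; handler and flag names from the F3 CMSIS headers, and NVIC_EnableIRQ(DMA1_Channel1_IRQn) is assumed to have been called during setup):

/* sketch: DMA transfer-complete / transfer-error handler */
void DMA1_Channel1_IRQHandler(void)
{
    if (DMA1->ISR & DMA_ISR_TCIF1) {
        DMA1->IFCR = DMA_IFCR_CTCIF1;   /* clear transfer-complete flag */
        /* obuff now holds 1024 packed master/slave sample pairs */
    }
    if (DMA1->ISR & DMA_ISR_TEIF1) {
        DMA1->IFCR = DMA_IFCR_CTEIF1;   /* clear transfer-error flag */
    }
}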