Two-wire ADC, SPI read interface

I use an Arduino UNO (ATmega328) and a 12-bit ADC component (AD7893ANZ-10); the datasheet is available at https://www.analog.com/media/en/technical-documentation/data-sheets/AD7893.pdf
The problem:
I have already tried a few different sketches, but the read value is always wrong; even when the SDATA pin of the ADC is not connected to the MISO pin of the Arduino, I get the same values (see figure 1 here). I simulated the circuit in Proteus (see figure 2 here) and used the virtual serial monitor in Proteus. The MOSI and SS pins of the Arduino are not connected, but I set the SS pin LOW and HIGH in the code to fulfil the library's requirements. More information about the ADC timing is included as comments in the code below, and is also available in the datasheet. I would be thankful if you could take a look at it, because I cannot figure out what I did wrong.
PS: The ADC has just two pins to communicate over SPI: 1. SDATA (slave out) and 2. SCLK. The CONVST pin on the ADC is used to initiate a conversion.
#include <SPI.h>
// source of code: https://www.gammon.com.au/spi
void setup() {
  Serial.begin(115200);
  pinMode(7, OUTPUT);    // this pin is connected to CONVST on the ADC to initiate a conversion
  // Put SCK, MOSI, SS pins into output mode (instructions from source),
  // also put SCK, MOSI into LOW state, and SS into HIGH state.
  // Then put SPI hardware into Master mode and turn SPI on
  pinMode(SCK, OUTPUT);
  pinMode(MOSI, OUTPUT);
  pinMode(SS, OUTPUT);
  digitalWrite(SS, HIGH);
  digitalWrite(SCK, LOW);
  digitalWrite(MOSI, LOW);
  SPCR = (1 << MSTR);
  SPI.begin();
  SPI.beginTransaction(SPISettings(2000000, MSBFIRST, SPI_MODE1)); // set the clock frequency; take the MSB first;
  // mode 1 for CPOL=0 and CPHA=1 according to datasheet p.9:
  // "CPOL bit set to a logic zero and its CPHA bit set to a logic one."
}
int transferAndWait(const byte what)  // method to read data
{
  int a = SPI.transfer(what);   // send zero (MOSI not connected) and read the first 8 bits and save
  Serial.println(a);            // show the value, in serial monitor -1
  a = (a << 8);                 // shift the saved first 8 bits to the left
  Serial.println(a);            // show the value, in serial monitor 255
  a = a | SPI.transfer(what);   // read and save the last 8 bits
  Serial.println(a);            // show the value, in serial monitor -256
  delayMicroseconds(10);
  return a;
}
void loop() {
  int k;
  digitalWrite(7, HIGH);     // set pin high to initiate the conversion
  delayMicroseconds(9);      // the conversion time needed, actually 6 microseconds
  digitalWrite(SS, LOW);     // enable Slave Select to get the library working
  k = transferAndWait(0);    // call the method
  delayMicroseconds(1);
  digitalWrite(7, LOW);
  digitalWrite(SS, HIGH);    // disable chip select
  delay(2000);               // delay just to keep the overview on the serial monitor
  Serial.println(k);         // show the value, in serial monitor -1
}

First, the return variable should be an unsigned int instead of a signed int.
Also, CONVST should only be low for a short period, since the conversion is started afterwards; see the timing sequence in the datasheet.
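Put together, a minimal sketch of those two fixes might look like this (untested; it keeps the pin assignments and delay values from the question, and the 0x0FFF mask assumes the 16-bit frame is four leading zeros followed by the 12-bit result):
// Sketch only: unsigned return type plus a short low pulse on CONVST.
// Pin 7 is still assumed to be wired to CONVST; the delays are the values
// from the question and should be checked against the datasheet.
unsigned int readAD7893()
{
  digitalWrite(7, LOW);                       // short low pulse on CONVST ...
  delayMicroseconds(1);
  digitalWrite(7, HIGH);                      // ... conversion starts afterwards
  delayMicroseconds(9);                       // wait out the conversion time
  digitalWrite(SS, LOW);                      // keep the SPI library happy
  unsigned int result = SPI.transfer(0);      // first 8 bits
  result = (result << 8) | SPI.transfer(0);   // last 8 bits
  digitalWrite(SS, HIGH);
  return result & 0x0FFF;                     // keep the 12-bit result
}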

Related

STM32 Use DMA to generate bit pattern on GPIO PIN

I am trying to generate a bit pattern on a GPIO pin. I have set up the DMA engine to transfer from an array of GPIO pin states to the GPIO BSRR register.
Here is the code I am using to configure the DMA:
hdma_tim16_ch1_up.Instance = DMA1_Channel3;
hdma_tim16_ch1_up.Init.Direction = DMA_PERIPH_TO_MEMORY;
hdma_tim16_ch1_up.Init.PeriphInc = DMA_PINC_DISABLE;
hdma_tim16_ch1_up.Init.MemInc = DMA_MINC_ENABLE;
hdma_tim16_ch1_up.Init.PeriphDataAlignment = DMA_PDATAALIGN_WORD;
hdma_tim16_ch1_up.Init.MemDataAlignment = DMA_MDATAALIGN_WORD;
hdma_tim16_ch1_up.Init.Mode = DMA_NORMAL;
hdma_tim16_ch1_up.Init.Priority = DMA_PRIORITY_LOW;
if (HAL_DMA_Init(&hdma_tim16_ch1_up) != HAL_OK)
{
  Error_Handler();
}
/* Several peripheral DMA handle pointers point to the same DMA handle.
Be aware that there is only one channel to perform all the requested DMAs. */
__HAL_LINKDMA(tim_baseHandle,hdma[TIM_DMA_ID_CC1],hdma_tim16_ch1_up);
__HAL_LINKDMA(tim_baseHandle,hdma[TIM_DMA_ID_UPDATE],hdma_tim16_ch1_up);
Here is the code I use to set up the transfer:
uint32_t outputbuffer[] = {
0x0000100,0x01000000,
0x0000100,0x01000000,
0x0000100,0x01000000,
0x0000100,0x01000000,
0x0000100,0x01000000,
0x0000100,0x01000000,
0x0000100,0x01000000
/* ... */
};
if (HAL_DMA_Start_IT(htim16.hdma[TIM_DMA_ID_UPDATE], (uint32_t)outputbuffer, (uint32_t)&GPIOG->BSRR, 14) != HAL_OK)
{
  /* Return error status */
  return HAL_ERROR;
}
__HAL_TIM_ENABLE_DMA(&htim16,TIM_DMA_UPDATE);
HAL_TIM_Base_Start_IT(&htim16);
I expect that every time the counter overflows, the DMA transfers 32 bits from the array and increments to the next array position until the DMA CNDTR register reads 0.
I set up a GPIO pin to toggle every time the timer overflows, and I set up an alternating bit pattern in the array. I would expect the two GPIO pins to be similar in their output shape, but I get one longer pulse on the line connected to the DMA. Any tips would be greatly appreciated.
1. Configure TIM2 as input capture, direct mode (TIM2_CH1).
2. Configure the TIM2 DMA direction as "memory to peripheral".
3. Configure the TIM2 data width as Half Word / Half Word.
4. Configure the GPIO pins as GPIO_OUTPUT, for example the 16 pins GPIOD0..GPIOD15.
5. Copy and paste the HAL_TIM_IC_Start_DMA() function from the HAL library and give it a new name, MY_TIM_IC_Start_DMA().
6. Find the HAL_DMA_Start_IT() function call in MY_TIM_IC_Start_DMA() and replace (uint32_t)&htim->Instance->CCR1 with (uint32_t)&GPIOD->ODR:
if (HAL_DMA_Start_IT(htim->hdma[TIM_DMA_ID_CC1], (uint32_t)&GPIOD->ODR, (uint32_t)pData, Length) != HAL_OK)
Now you can start the DMA-to-GPIO transfer by calling:
MY_TIM_IC_Start_DMA(&htim2, TIM_CHANNEL_1, (uint32_t*)gpioBuffer, GPIO_BUFFER_SIZE);
The actual transfer must be triggered by providing pulses on the TIM2_CH1 input pin (for example, by using an output compare pin from another timer channel). Those pulses were originally used to save Timer 2 CCR1 register values into the DMA buffer; the tweaked code transfers the DMA buffer values to the GPIOD ODR register instead.
For a GPIO-to-memory transfer, change the TIM2 DMA direction to "peripheral to memory", configure the GPIO pins as GPIO_INPUT, and use GPIOD->IDR instead of ODR in the HAL_DMA_Start_IT parameters in the modified MY_TIM_IC_Start_DMA() function.
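For that GPIO-to-memory variant, the only change inside the modified function is again the peripheral address passed to HAL_DMA_Start_IT(); roughly (a sketch of that single call, not the full HAL function body):
/* GPIO-to-memory: read the port input register on every trigger */
if (HAL_DMA_Start_IT(htim->hdma[TIM_DMA_ID_CC1], (uint32_t)&GPIOD->IDR, (uint32_t)pData, Length) != HAL_OK)
{
  return HAL_ERROR;
}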

Disabling SPI peripheral on STM32H7 between two transmissions?

I'm still doing SPI experiments between two Nucleo STM32H743 boards.
I've configured the SPI in full-duplex mode, with CRC enabled and an SPI frequency of 25 MHz (so the slave can transmit without issue).
DSIZE is 8 bits and the FIFO threshold is 4.
On the master side, I send 4 bytes and then wait for 5 bytes from the slave. I know I could use half-duplex or simplex mode, but I want to understand what is going on in full-duplex mode.
volatile unsigned long *CR1 = (unsigned long *)0x40013000;
volatile unsigned long *CR2 = (unsigned long *)0x40013004;
volatile unsigned long *TXDR = (unsigned long *)0x40013020;
volatile unsigned long *RXDR = (unsigned long *)0x40013030;
volatile unsigned long *SR = (unsigned long *)0x40013014;
volatile unsigned long *IFCR = (unsigned long *)0x40013018;
volatile unsigned long *TXCRC = (unsigned long *)0x40013044;
volatile unsigned long *RXCRC = (unsigned long *)0x40013048;
volatile unsigned long *CFG2 = (unsigned long *)0x4001300C;
unsigned long SPI_TransmitCommandFullDuplex(uint32_t Data)
{
  // size of transfer (TSIZE)
  *CR2 = 4;
  /* Enable SPI peripheral */
  *CR1 |= SPI_CR1_SPE;
  /* Master transfer start */
  *CR1 |= SPI_CR1_CSTART;
  *TXDR = Data;
  while (((*SR) & SPI_FLAG_EOT) == 0);
  // clear flags
  *IFCR = 0xFFFFFFFF;
  // disable SPI
  *CR1 &= ~SPI_CR1_SPE;
  return 0;
}
void SPI_ReceiveResponseFullDuplex(uint8_t *pData)
{
  // size of transfer (TSIZE)
  *CR2 = 5;
  /* Enable SPI peripheral */
  *CR1 |= SPI_CR1_SPE;
  /* Master transfer start */
  *CR1 |= SPI_CR1_CSTART;
  *TXDR = 0;
  *((volatile uint8_t *)TXDR) = 0;
  while (((*SR) & SPI_FLAG_EOT) == 0);
  *((uint32_t *)pData) = *RXDR;
  *((uint8_t *)(pData + 4)) = *((volatile uint8_t *)RXDR);
  // clear flags
  *IFCR = 0xFFFFFFFF;
  // disable SPI
  *CR1 &= ~SPI_CR1_SPE;
}
This is working fine (both functions are just called in sequence in the main).
Then I tried to remove the SPI disabling between the two steps (i.e. I no longer clear and set the SPE bit again), and I got stuck in the while loop of SPI_ReceiveResponseFullDuplex.
Is it necessary to disable the SPI between two transmissions, or did I make a mistake in the configuration?
The behaviour of the SPE bit is not very clear in the reference manual. For example, it is clearly written that, in half-duplex mode, the SPI has to be disabled to change the direction of communication, but nothing is said about full-duplex mode (or I missed it).
This errata item might be relevant here.
Master data transfer stall at system clock much faster than SCK
Description
With the system clock (spi_pclk) substantially faster than SCK (spi_ker_ck divided by a prescaler), SPI/I2S master data transfer can stall upon setting the CSTART bit within one SCK cycle after the EOT event (EOT flag raise) signaling the end of the previous transfer.
Workaround
Apply one of the following measures:
• Disable then enable SPI/I2S after each EOT event.
• Upon EOT event, wait for at least one SCK cycle before setting CSTART.
• Prevent EOT events from occurring, by setting transfer size to undefined (TSIZE = 0)
and by triggering transmission exclusively by TXFIFO writes.
Your SCK frequency is 25 MHz; the system clock can be 400 or 480 MHz, that is 16 or 19 times SCK. When you remove the lines clearing and setting SPE, only these lines remain in effect after detecting EOT:
*IFCR = 0xFFFFFFFF;
*CR2 = 5;
*CR1 |= SPI_CR1_CSTART;
When this sequence (quite probably) takes less than 16 system clock cycles, you run into the problem described above. It looks like someone did sloppy work on the SPI clock system again. What you did first, clearing and setting SPE, is one of the recommended workarounds.
I would just set TSIZE = 9 at the start, then write the command and the dummy bytes in one go; it makes no difference in full-duplex mode.
Keep in mind that in full-duplex mode another 4 bytes are received, which must be read and discarded before getting the real answer. This was not a problem with your original code, because clearing SPE discards data still in the receive FIFO, but it would become one if the modified code worked, e.g. if there were some more delay before setting CSTART again.
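A rough sketch of that single-transfer idea (untested), reusing the register pointers and access pattern from the question; it assumes the whole 9-byte frame fits in the FIFOs and that the 4 bytes received while the command goes out are simply read and discarded, as described above:
void SPI_CommandAndResponseFullDuplex(uint32_t Data, uint8_t *pData)
{
  // size of transfer (TSIZE): 4 command bytes + 5 dummy bytes
  *CR2 = 9;
  /* Enable SPI peripheral */
  *CR1 |= SPI_CR1_SPE;
  /* Master transfer start */
  *CR1 |= SPI_CR1_CSTART;
  *TXDR = Data;                                             // command (4 bytes)
  *TXDR = 0;                                                // 4 dummy bytes
  *((volatile uint8_t *)TXDR) = 0;                          // 9th (dummy) byte
  while (((*SR) & SPI_FLAG_EOT) == 0);
  (void)*RXDR;                                              // discard the 4 bytes clocked in during the command
  *((uint32_t *)pData) = *RXDR;                             // first 4 bytes of the answer
  *((uint8_t *)(pData + 4)) = *((volatile uint8_t *)RXDR);  // 5th byte
  // clear flags
  *IFCR = 0xFFFFFFFF;
  // disable SPI
  *CR1 &= ~SPI_CR1_SPE;
}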

PCI configuration space registers - write values

I am developing a network driver (RTL8139) for a self-made operating system and have problems writing values to the PCI configuration space registers.
I want to change the value of the interrupt line (offset 0x3C) to get another IRQ number, and enable bus mastering (set bit 2) in the command register (offset 0x04).
When I read back the values I see that my values were written correctly, but instead of using the new IRQ number (in my case 6) the device keeps using the old value (11).
Bus mastering for DMA is not working either: the packets I would like to send have the correct size (set via I/O port), but they have no content (all values are just 0, even though the transmit buffer is in physical memory and holds non-zero values; I double-checked the physical address). I need bus mastering so the network controller can access the physical memory where my receive/transmit buffers are (bus mastering is required for the RTL8139).
Do I have to do something else to make the PCI device accept my changes?
I use QEMU as the emulator.
For reading/writing I wrote the following functions:
uint16_t pci_read_word(uint16_t bus, uint16_t slot, uint16_t func, uint16_t offset)
{
  uint64_t address;
  uint64_t lbus = (uint64_t)bus;
  uint64_t lslot = (uint64_t)slot;
  uint64_t lfunc = (uint64_t)func;
  uint16_t tmp = 0;
  address = (uint64_t)((lbus << 16) | (lslot << 11) |
            (lfunc << 8) | (offset & 0xfc) | ((uint32_t)0x80000000));
  outportl(0xCF8, address);
  tmp = (uint16_t)((inportl(0xCFC) >> ((offset & 2) * 8)) & 0xffff);
  return tmp;
}
uint16_t pci_write_word(uint16_t bus, uint16_t slot, uint16_t func, uint16_t offset, uint16_t data)
{
  uint64_t address;
  uint64_t lbus = (uint64_t)bus;
  uint64_t lslot = (uint64_t)slot;
  uint64_t lfunc = (uint64_t)func;
  uint32_t tmp = 0;
  address = (uint64_t)((lbus << 16) | (lslot << 11) |
            (lfunc << 8) | (offset & 0xfc) | ((uint32_t)0x80000000));
  outportl(0xCF8, address);
  tmp = inportl(0xCFC);
  tmp &= ~(0xFFFF << ((offset & 0x2) * 8));      // reset the word at the offset
  tmp |= data << ((offset & 0x2) * 8);           // write the data at the offset
  outportl(0xCF8, address);                      // set address again just to be sure
  outportl(0xCFC, tmp);                          // write data
  return pci_read_word(bus, slot, func, offset); // read back data
}
I hope somebody can help me.
The "interrupt line" field (at offset 0x03C in PCI configuration space) literally does nothing.
The Full Story
PCI cards could use up to 4 "PCI IRQs" at the PCI slot; and use them in order (so if you have ten PCI cards that all have one IRQ, then they'll all use the first PCI IRQ at the slot).
There's a tricky "barber pole" arrangement to connect "PCI IRQ at the slot" to "PCI IRQ at the host controller" that is designed to reduce IRQ sharing. If you have ten PCI cards that all have one IRQ, then they'll all use the first PCI IRQ at the slot, but the first IRQ at each PCI slot will be connected to a different PCI IRQ at the host controller. All of this is hard-wired and can not be changed by software.
To complicate things more; to connect PCI IRQs (from the PCI host controller) to the legacy PIC chips, a special "PCI IRQ router" was added. In theory the configuration of the "PCI IRQ router" can be changed by software (if you can find the documentation that describes a table that describes the location, capabilities and restrictions of the "PCI IRQ router").
Without the firmware's help it'd be impossible for an OS to figure out which PIC chip input a PCI device actually uses. For that reason, the firmware figures it out during boot and then stores "PIC chip input number" somewhere for the OS to find. That is what the "interrupt line" register is - it's just an 8-bit register that can store anything you like (that the BIOS/firmware uses to store "PIC chip input number").
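For the bus-mastering half of the question, the usual pattern is a read-modify-write of the command register; a minimal sketch using the helper functions from the question (offset 0x04 is the command register and bit 2 is Bus Master Enable, as stated in the question):
/* Sketch: enable bus mastering on the device by setting bit 2 of the PCI
   command register at offset 0x04, keeping the other command bits intact.
   pci_read_word()/pci_write_word() are the functions from the question. */
void pci_enable_bus_master(uint16_t bus, uint16_t slot, uint16_t func)
{
  uint16_t command = pci_read_word(bus, slot, func, 0x04);
  command |= (1 << 2);                            /* Bus Master Enable */
  pci_write_word(bus, slot, func, 0x04, command);
}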

DM6446 GPIO Bank 0 request_irq returns -22

I'm trying to set up an interrupt handler in my driver for the DM6446 GPIO bank 0 interrupt, but request_irq returns -22. I know the interrupt number for GPIO bank 0 from the datasheet, which states it is 56. Below are the GPIO settings in my code; I want to get an interrupt on GPIO-10.
while ((REG_VAL(PTSTAT) & 0x1) != 0);               // Wait for power state transition to finish
REG_VAL(MDCTL26) = 0x00000203;                      // Enable GPIO module and EMURSITE bit as stated in sprue14 for state transition
REG_VAL(PTCMD) = 0x1;                               // Start power state transition for ALWAYSON
while ((REG_VAL(PTSTAT) & 0x1) != 0);               // Wait for power state transition to finish
REG_VAL(PINMUX0) = REG_VAL(PINMUX0) & 0x80000000;   // Disable other functionality on bank 0 pins
printk(KERN_DEBUG "I2C: PINMUX0 = %x\n", REG_VAL(PINMUX0));
REG_VAL(DIR01) = REG_VAL(DIR01) | 0xFFFFFFFF;       // Set direction as input for GPIO 0 and 10
REG_VAL(BINTEN) = REG_VAL(BINTEN) | 0x00000001;     // Enable interrupt for GPIO bank 0
REG_VAL(SET_RIS_TRIG01) = REG_VAL(SET_RIS_TRIG01) | 0x00000401;   // Enable rising edge interrupt for bank 0 pins 0 and 10
REG_VAL(CLR_FAL_TRIG01) = REG_VAL(CLR_FAL_TRIG01) | 0x00000401;   // Disable falling edge interrupt for bank 0
Result = request_irq(56, Gpio_Interrupt_Handler, 0, "gpio", I2C_MAJOR);
if (Result < 0)
{
  printk(KERN_ALERT "UNABLE TO REQUEST GPIO IRQ %d ", Result);
}
A little help would be appreciated.
Thank you.
I have also tried gpio_to_irq() for pin 10 of bank 0, but it returns IRQ number 72, while the DM6446 datasheet only lists interrupt numbers up to 63.
I got it. If I use gpio_to_irq(), it returns a valid IRQ number, but one different from the interrupt number (which I guess is also called an IRQ number) specified in the processor's datasheet. If I look at /proc/interrupts, it has an entry for the IRQ returned by gpio_to_irq(), but listed under the GPIO type rather than under the processor's interrupt controller, which in my case (ARM) is the AINTC; all other interrupts are of the AINTC type.
Moreover, even if request_irq() succeeds with the interrupt number stated in the datasheet, /proc/stat reports interrupts at both IRQ numbers, i.e. under both the AINTC and GPIO types.
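A minimal sketch of that approach, requesting the IRQ returned by gpio_to_irq() instead of the number from the datasheet (gpio_request() and the NULL dev_id are assumptions about the rest of the driver; the GPIO number and handler name are taken from the question):
/* Sketch: request the bank 0 / pin 10 interrupt through the GPIO layer. */
int irq;
if (gpio_request(10, "gpio10") < 0)              /* reserve GPIO 10 first (assumption) */
    printk(KERN_ALERT "gpio_request failed\n");
irq = gpio_to_irq(10);                           /* virtual IRQ number, e.g. 72 */
Result = request_irq(irq, Gpio_Interrupt_Handler,
                     IRQF_TRIGGER_RISING,        /* rising edge, as configured above */
                     "gpio", NULL);
if (Result < 0)
    printk(KERN_ALERT "UNABLE TO REQUEST GPIO IRQ %d\n", Result);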

Hardware interrupt between Raspberry pi and Atmega 328

I have connected my RPi and ATmega328 together in order to control the start of an event on my Arduino. To do so, GPIO 25 (RPi) is connected directly to pin 7 (Arduino PD7). I've got a Python script on the RPi which sets GPIO 25 HIGH and then back to LOW:
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
GPIO.setup(25, GPIO.OUT)
GPIO.output(25, 1)
#Do some stuff
GPIO.output(25, 0)
The Arduino waits in a loop for either a physical button to be pressed or pin 7 to be set HIGH by the RPi:
const int interrupt = 7;
const int button = 13;
const int led = 9;
void setup() {
  Serial.begin(9600);
  pinMode(interrupt, INPUT);
  pinMode(button, INPUT);
  pinMode(led, OUTPUT);
  digitalWrite(led, LOW);
}
void loop() {
  bool on = false;
  bool buttonOn = false;
  while (!on || !buttonOn) {
    on = digitalRead(interrupt);
    buttonOn = digitalRead(button);
    digitalWrite(led, LOW);
  }
  digitalWrite(led, HIGH);
  delay(1000);
}
Now, unfortunately, this doesn't work. I have checked the logic levels of the ATmega328 (https://learn.sparkfun.com/tutorials/logic-levels) and it seems that 3.3 V is good enough for a HIGH signal.
Am I missing something with the pull-up/pull-down resistors? I know that PD7 on the ATmega is specified as follows:
Port D is an 8-bit bi-directional I/O port with internal pull-up
resistors (selected for each bit). The Port D output buffers have
symmetrical drive characteristics with both high sink and source
capability. As inputs, Port D pins that are externally pulled low will
source current if the pull-up resistors are activated. The Port D pins
are tristated when a reset condition becomes active, even if the clock
is not running.
EDIT:
I have done more testing and I am reading the HIGH and LOW values correctly. It seems that the issue comes from:
while ((!on) || (!buttonOn)) {
Is there an issue with Arduino and the OR operator in a while loop? Even when one condition is true and the other one is false, it never exits the loop.
while ((!on) || (!buttonOn)) {
}
That loop will run as long as at least one of the variables is false, i.e. it only exits once both on and buttonOn are true.
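If the intent is to continue as soon as either input reads HIGH, the condition presumably needs && rather than ||. A minimal sketch of the changed loop, keeping the pin names from the question:
void loop() {
  bool on = false;
  bool buttonOn = false;
  // Wait while BOTH inputs are still LOW; leave the loop as soon as either one reads HIGH.
  while (!on && !buttonOn) {
    on = digitalRead(interrupt);
    buttonOn = digitalRead(button);
    digitalWrite(led, LOW);
  }
  digitalWrite(led, HIGH);   // either the button or the RPi pin went HIGH
  delay(1000);
}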
Yesterday, for some reason, I was thinking that you were reading the interrupt pin twice when reading your code.
A 3.3 V output should be enough to drive the Arduino input HIGH.
You may have a wiring problem, or your Raspberry Pi may be so fast that the Arduino misses the pulse.
Change your program on the Raspberry Pi to leave the output HIGH for long enough (e.g. 10 seconds) that you can measure it with a multimeter and confirm you are setting the right pin.
Does the Arduino now see the input?