I am controlling a DC motor with an Infineon BTN half-bridge (MOSFET) driver and an STM32F103 MCU. What I am doing is ramping the PWM duty cycle from 0 to 100% over 10 seconds, then holding 100%. However, 100% duty is not actually 100%: as you can see on the scope there is a spike returning to 0, so by the design of the TIM peripheral it is probably 99%. What I want is to switch the TIM output used for PWM into a normal I/O pin held at level 1 after the 10 seconds. I tried simply stopping the PWM and switching the I/O, but it does not work. Any help appreciated.
Rgds
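A minimal SPL sketch of the pin handover, assuming TIM3 CH1 on PA6 (adjust to the timer and pin actually used):

/* Disconnect the pin from the timer, then drive it high as a plain output. */
TIM_CCxCmd(TIM3, TIM_Channel_1, TIM_CCx_Disable);   /* release the pin from TIM */

GPIO_InitTypeDef gpio;
gpio.GPIO_Pin   = GPIO_Pin_6;
gpio.GPIO_Mode  = GPIO_Mode_Out_PP;                 /* plain push-pull output */
gpio.GPIO_Speed = GPIO_Speed_50MHz;
GPIO_Init(GPIOA, &gpio);
GPIO_SetBits(GPIOA, GPIO_Pin_6);                    /* hold the output at level 1 */

Alternatively, writing a compare value greater than the auto-reload value (e.g. CCR = ARR + 1) holds the output permanently active in PWM mode 1, giving a true 100% duty cycle without touching the pin mode.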
My APB1 clock is reported by the STM32 library as being 36 MHz.
I used a website to calculate a prescaler value of 3 (4 with the automatic +1), BS1 of CAN_BS1_15tq and BS2 of CAN_BS2_2tq. When I use the values in a quick spreadsheet calculation they come out right for a 500 Kbit/s baud rate.
I used different values, assuming the same 36 MHz clock, to talk to NMEA 2000 devices at 250 Kbit/s successfully. When I run my code at 250 Kbit/s it works correctly and talks to my test board (which runs the same code).
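For reference, a minimal SPL sketch of the 500 Kbit/s timing described above (prescaler, BS1 and BS2 values from the question; the rest of the CAN init and filter setup is omitted):

CAN_InitTypeDef can;
CAN_StructInit(&can);
/* 36 MHz / 4 = 9 MHz time-quantum clock;
   bit time = 1 (SYNC) + 15 (BS1) + 2 (BS2) = 18 tq -> 9 MHz / 18 = 500 Kbit/s */
can.CAN_Prescaler = 4;
can.CAN_SJW = CAN_SJW_1tq;
can.CAN_BS1 = CAN_BS1_15tq;
can.CAN_BS2 = CAN_BS2_2tq;
CAN_Init(CAN1, &can);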
I wondered if the TX and RX pin GPIO speed mattered. Here is my configuration for those pins:
gpio_init_data.GPIO_Speed = GPIO_Speed_10MHz;

/* RX pin: input with pull-up */
gpio_init_data.GPIO_Pin = CAN1_RX;
gpio_init_data.GPIO_Mode = GPIO_Mode_IPU;
GPIO_Init(CAN1_PIN_GROUP, &gpio_init_data);

/* TX pin: alternate-function push-pull */
gpio_init_data.GPIO_Pin = CAN1_TX;
gpio_init_data.GPIO_Mode = GPIO_Mode_AF_PP;
GPIO_Init(CAN1_PIN_GROUP, &gpio_init_data);
When I run at a 500 Kbit/s baud rate, all transmissions fail and arbitration lost is flagged: TSR=41000004. This happens even with the RX and TX pins at GPIO speed 50 MHz.
The CAN transceiver is an ISO1050 which, according to the data sheet, can handle up to 1 Mbit/s.
Does anyone have any idea what I could be doing wrong? Could it be a problem in the circuitry?
As Lundin said, "CAN transceivers need an ideal impedance of 60 ohm to work properly."
The system I am using is a test rig with a board under test connected to a test board by about 8 cm of CAN bus cable pair. Up to 250 Kbit/s this works perfectly well, but not at 500 Kbit/s.
Adding a 56 ohm termination resistor solves the problem (two 120 ohm resistors, one at each end of the bus, would be better still).
Many thanks to Lundin for his patience and excellent information.
Hello,
I'm making a project where I want to bit-bang the JTAG protocol.
According to the AN4666 provided by ST, DMA + GPIO can achieve high speeds in bit-banging synchronous protocols.
I want to:
Generate N PWM pulses (the CLK signal).
With the falling edge of each pulses, I want to set some GPIO with DMA.
With the rising edge, I want to read from the GPIO using DMA.
What is the best way to achieve these specs using HAL?
Even without DMA you can reach quite high frequencies with bit-banged I/O, I'd say in the range of 2-10 MHz, assuming a fast enough MCU and a high enough GPIO bus clock (48-96 MHz).
The clock just won't be as stable, and may "stall" (idle time when an interrupt occurs) compared to DMA, but it is way simpler.
For the DMA-based approach: if you use 3 bits of one port, one for TCK, one for TDI and one for TDO, then use two DMA channels, one writing and one reading, on the same timer source (if possible) at double the rate of the TCK signal.
The input data is rebuilt by taking the i-th bit of every other read sample, i.e. indices 0, 2, 4... or 1, 3, 5..., depending on which edge you want and how the clock array in memory is coded. A rough sketch is below.
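A very rough HAL sketch of this two-DMA idea (names hypothetical and untested; assumes the DMA handles are already mapped to the TIM2 update and CC1 requests, with CCR1 set mid-period so reads happen on the opposite edge to writes):

#define NBITS   64                          /* number of TCK periods (example) */
#define NEDGES  (2 * NBITS)                 /* two DMA events per TCK period */

extern TIM_HandleTypeDef htim2;             /* assumed configured elsewhere */
extern DMA_HandleTypeDef hdma_wr, hdma_rd;  /* assumed mapped to TIM2 UP / CC1 */

uint32_t bsrr_words[NEDGES];                /* precomputed TCK + TDI edge pattern */
uint32_t idr_words[NEDGES];                 /* raw port snapshots; TDO is one bit */

/* The update event writes the next BSRR word, the CC1 event captures the port. */
HAL_DMA_Start(&hdma_wr, (uint32_t)bsrr_words, (uint32_t)&GPIOA->BSRR, NEDGES);
HAL_DMA_Start(&hdma_rd, (uint32_t)&GPIOA->IDR, (uint32_t)idr_words, NEDGES);
__HAL_TIM_ENABLE_DMA(&htim2, TIM_DMA_UPDATE);   /* write on one edge */
__HAL_TIM_ENABLE_DMA(&htim2, TIM_DMA_CC1);      /* read on the other edge */
HAL_TIM_Base_Start(&htim2);

TDO is then extracted afterwards by masking the TDO bit out of every other entry of idr_words.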
Lastly, if your JTAG chain is a multiple of 8 bits, SPI is even simpler, and DMA with it is easy. ;)
I want to generate the clock for a PCA9959 LED driver with my STM32L552. The LED driver needs an external clock at 20 MHz (+/- 15%). I'm trying to generate a 22 MHz clock on pin PA8 of the STM32L552. I managed to generate a PWM on PA8, but I can't reach ~22 MHz; I top out at 8 MHz.
Here are the PWM parameters:
I'm not sure I filled in the PWM parameters correctly. Normally, with these settings, I would expect a 22 MHz PWM with a 20% duty cycle.
f_PWM (MHz) = SystemCoreClock (MHz) / ((PSC + 1) x (ARR + 1)) => 22 MHz = 110 MHz / 5 (PSC = 0, ARR = 4)
My clock configuration:
Thanks for your help.
The easiest way to output a high speed clock like this is with the MCO peripheral, rather than a timer. Fortunately for you the MCO pin is PA8. Perhaps the person who designed your board knew this and intended you to use MCO. Read the reference manual to see how.
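A sketch of the MCO route (STM32L5 HAL). The MCO dividers are powers of two, so 110 MHz cannot land near 20 MHz; this assumes SYSCLK has first been reconfigured to 80 MHz so that 80 MHz / 4 = 20 MHz comes out on PA8:

/* HAL_RCC_MCOConfig also configures PA8 in its MCO alternate function. */
HAL_RCC_MCOConfig(RCC_MCO1, RCC_MCO1SOURCE_SYSCLK, RCC_MCODIV_4);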
If you do want to use a timer to do 22MHz, then as you have correctly identified you cannot get a 50% duty-cycle on your PWM. I would recommend starting with a 40% or 60%, with an output-compare value of 2-out-of-5 or 3-out-of-5, not 1 as you have above.
There is no detail in the PCA9959 datasheet about what the required mark-space ratio of the clock is, but I guess anything other than 50% could be a problem. You would be better off dividing the clock by an even number: either just divide 110 MHz by 6 and output 18.33 MHz, or else slow your core down a bit and divide by 4 (reduce the N parameter of your PLL).
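For the divide-by-6 option, a hedged HAL sketch (assuming TIM1 CH1 on PA8 and a 110 MHz timer clock; GPIO and clock init omitted):

TIM_HandleTypeDef htim1 = {0};
TIM_OC_InitTypeDef oc = {0};

htim1.Instance = TIM1;
htim1.Init.Prescaler = 0;                 /* count at the full 110 MHz */
htim1.Init.Period = 6 - 1;                /* divide by ARR + 1 = 6 -> 18.33 MHz */
htim1.Init.CounterMode = TIM_COUNTERMODE_UP;
HAL_TIM_PWM_Init(&htim1);

oc.OCMode = TIM_OCMODE_PWM1;
oc.Pulse = 3;                             /* high for 3 of 6 ticks -> 50% duty */
HAL_TIM_PWM_ConfigChannel(&htim1, &oc, TIM_CHANNEL_1);
HAL_TIM_PWM_Start(&htim1, TIM_CHANNEL_1);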
Whether you use MCO or PWM, don't forget to set the GPIO pin mode to the fastest slew rate available. Maybe the 8 MHz you are measuring is the result of aliasing a faster clock that has been through the wrong GPIO mode. You could test this using a scope with at least 100 MHz bandwidth.
This is a very basic question: I can't simulate a PWM VI, in system time, from its FPGA VI file.
Details
For a NI cRIO-9067 + LabVIEW 2016 + Windows 8 system, under FPGA Interface Mode, I have the Test VI No.1.vi NI LabVIEW file and the corresponding FPGA Desktop Execution Node block file Test VI No.1 DEN.vi as suggested in the Getting Started information [1] [2].
In both files, the Low Pulse and High Pulse Numeric Controls are filled with the 1000 value. The Loop Timer block is set as "mSec" Counter Unit and "32 Bit" Size of Internal Counter.
The compiled FPGA version of the first file outputs a square wave toggling every 1 second, as expected, after 7 minutes of local compilation.
With Simulation (Simulated I/O) as the Execution Mode, to reproduce the 1-second square-wave timing (approximately, and by trial and error), I need to put the value 1750 in the Clock Ticks field, referenced to the FPGA 40 MHz Onboard Clock, shown in the block options.
I don't understand this block, or why I shouldn't just put some near divisor of 40,000,000 in the Clock Ticks field, or simply the value 1. Basically, I don't understand how to "time" these FPGA simulations.
The Desktop Execution Node is designed for time-based simulation; you are definitely on the right track.
What you are setting at the top is the number of FPGA clock cycles that are executed each time you call the node. In your case you have 1750 ticks, so around 43.75 µs of simulated time per iteration (1750 / 40 MHz).
To simulate in real time, you need to make sure that each call executes the same amount of simulated time as one iteration of the simulation loop takes to run. You have no timing in your simulation loop, so 1750 probably works for you because that is roughly how long the loop happens to take.
If you put a 1 ms loop timer in the simulation loop and set the Clock Ticks to 40,000 (1 ms of simulated time), I think you will find that it also works.
In some cases it may be beneficial to execute faster than real time; you just have to account for that in your maths. For example, if you set the Clock Ticks to 40 (1 µs of simulated time), you can count the number of iterations and multiply by 1 µs to get the actual simulated clock time.
I have a high speed clock at 10 MHz going to the processor's TIM4 input capture pin (ch.3). I would like to verify that the clock is running at 10 MHz with the processor's input capture. I coded the processor with the input capture module, and it works fine for lower frequencies (around 1 kHz or so). Once I start to climb the frequency up to the MHz range, the processor starts to miss interrupts and thus gives me an incorrect frequency. I didn't see anywhere in the datasheet that states the maximum frequency that the input capture can read. I have an external clock of 8 MHz, and a core clock of 72 MHz, so I would imagine that I can read a 10 MHz signal. Any ideas?
Take a look at the TIM_ICInitStructure.TIM_ICPrescaler options. Usually you'll have it set to TIM_ICPSC_DIV1 so that interrupts are generated on every valid transition.
Prescaler values of 1, 2, 4 and 8 are available, which allow you to effectively reduce the rate of interrupt generation by that factor. For example, for a 10 MHz signal with a prescaler of 8 you'd expect to measure a frequency of 10 MHz / 8 = 1.25 MHz.
This is still quite tight for a 72MHz HCLK so you'll still need to optimise your IRQ handler carefully.
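A minimal SPL sketch of that configuration (TIM4 CH3 as in the question; NVIC setup omitted):

TIM_ICInitTypeDef ic;
TIM_ICStructInit(&ic);
ic.TIM_Channel     = TIM_Channel_3;
ic.TIM_ICPolarity  = TIM_ICPolarity_Rising;
ic.TIM_ICSelection = TIM_ICSelection_DirectTI;
ic.TIM_ICPrescaler = TIM_ICPSC_DIV8;        /* one capture per 8 input edges */
ic.TIM_ICFilter    = 0;
TIM_ICInit(TIM4, &ic);

TIM_ITConfig(TIM4, TIM_IT_CC3, ENABLE);     /* interrupt at 10 MHz / 8 = 1.25 MHz */
TIM_Cmd(TIM4, ENABLE);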
Looks like you're generating an interrupt request for every rising (or falling) edge of the clock.
If that is indeed the case, then think about this for a second: with a 10 MHz input signal, you're generating an interrupt about every 7 CPU cycles. In those 7 CPU cycles, you need to budget time to save registers to RAM, run the IRQ handler function prologue, run the actual code you wrote for the interrupt handler, run the function epilogue, and restore the registers.
Best case, if you set compiler flags to optimize for speed and you're not doing much processing in the interrupt handler, you're looking at tens of cycles to run all these tasks. Since you only have 7 cycles to run tens of cycles' worth of processing, it's no surprise that you're missing interrupts.
You can't use an interrupt routine at that frequency. You need to feed the 10 MHz in as an external clock/trigger to the timer; then you can use the prescaler and the timer to divide down to a suitable, lower interrupt frequency.
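A hedged SPL sketch of that approach, assuming the 10 MHz signal is moved to TIM4's ETR pin (external clock mode 2):

/* Count input edges directly; interrupt once per 10,000 edges -> 1 kHz IRQ. */
TIM_ETRClockMode2Config(TIM4, TIM_ExtTRGPSC_OFF,
                        TIM_ExtTRGPolarity_NonInverted, 0);
TIM_SetAutoreload(TIM4, 10000 - 1);
TIM_ITConfig(TIM4, TIM_IT_Update, ENABLE);
TIM_Cmd(TIM4, ENABLE);

With this, each update interrupt corresponds to exactly 10,000 input cycles, so the frequency can be verified against a known time base without servicing every edge.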