I'm trying to use the LPC1343 as an I2C slave to transmit some data. Writing to the board gives no problems and works exactly as I want it to.
However, reading from the board gives problems. It seems I'm not getting any data back even though I am sending the right commands. Whenever I try to debug it, my board just hangs and I have to reset the driver and my PC to get it running again.
Also, I made an LED turn on/off whenever I try to read from it. It only does this once, and when I try again nothing happens. I think the I2C peripheral has stopped by then, but I have no idea why.
I found the example code on the website once, but now it seems to be gone. Does somebody have updated I2C slave code?
Which operating system are you writing code for, and how can you tell that writing to the I2C chip is successful?
If the write function returns, it could be that the message has been sent but the chip is in a weird configuration that doesn't act on the received message.
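Not the original example, just a minimal sketch of the slave-transmitter (master reads from the slave) states in the I2C interrupt handler, assuming the LPC13xx CMSIS register names (LPC_I2C->STAT/DAT/CONSET/CONCLR) and a hypothetical tx_buf; the write-path states (0x60/0x80/0xA0) are omitted since writing already works for you.

#include "LPC13xx.h"                    /* LPC13xx CMSIS device header */

#define I2C_AA   (1 << 2)               /* assert-acknowledge */
#define I2C_SI   (1 << 3)               /* interrupt flag */
#define I2C_STO  (1 << 4)

volatile uint8_t  tx_buf[8];            /* data the master reads from us */
volatile uint32_t tx_idx;

void I2C_IRQHandler(void)
{
    switch (LPC_I2C->STAT & 0xF8)
    {
    case 0xA8:                          /* own SLA+R received, ACK returned */
    case 0xB0:
        tx_idx = 0;
        /* fall through */
    case 0xB8:                          /* byte sent, ACK received: send next */
        LPC_I2C->DAT = tx_buf[tx_idx++ % sizeof tx_buf];
        LPC_I2C->CONSET = I2C_AA;
        break;

    case 0xC0:                          /* byte sent, NACK: master is done */
    case 0xC8:                          /* last byte sent, ACK received */
        LPC_I2C->CONSET = I2C_AA;       /* re-arm, or the slave goes silent */
        break;

    default:                            /* unexpected state: recover */
        LPC_I2C->CONSET = I2C_STO | I2C_AA;
        break;
    }
    LPC_I2C->CONCLR = I2C_SI;           /* clear the interrupt flag last */
}

One common cause of the "works once, then the bus goes dead" symptom is not setting AA again after the 0xC0/0xC8 states, so the slave stops acknowledging its own address.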
I'm facing an unexpected problem with an STM32F103C8. I program the chip and, after reset, it starts to run the program, but after a few seconds the microcontroller gets mixed up and stops running the program. After that, when I try to reprogram the microcontroller, the IDE (IAR EWARM) reports "target held in reset state".
It's a very unusual issue, because sometimes when I connect the nRST pin directly to VCC (3.3 V) the microcontroller runs the program, but then the current goes over 120 mA and the chip eventually breaks down.
I'm using STM32CubeMX to generate the code, and my programmer is an ST-LINK V2 (clone); I also tried a J-Link V8.0 (clone), but that didn't change the result.
Could it be because of the clone programmers?
Can anyone help me solve this problem?
Thanks
Never connect nRST directly to VDD/VCC. This is a bi-directional input-output which must only ever be connected to an open-drain/open-collector signal. It can be pulled low externally or from within, it must never ever be pulled or driven high other than by the internal pull-up resistor.
When your debugger or programmer has finished programming the flash and wants to start running the new program, it needs to be able to pull this line low. It might do that externally if you connect this line to it in hardware, or else it has to pull it low in software using the internal reset pulse generator. If it does this while you have tied the line high externally, you are effectively shorting out your power supply, which is the cause of the over-current condition you observed.
Maybe the original problem is that your counterfeit ST-Link has its reset output configured as push-pull when it should be open-drain.
I would suggest that the easiest way to proceed is to leave the nRST line unconnected and configure your programming tool to use a software reset only.
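For what it's worth, the "software reset" option in the tools typically works by asserting the core reset through the AIRCR SYSRESETREQ bit over SWD instead of toggling the external nRST pin; it is the same mechanism CMSIS exposes to firmware as NVIC_SystemReset(). A minimal illustration, assuming an STM32F1 CMSIS/HAL project:

#include "stm32f1xx.h"          /* STM32F103 CMSIS device header */

void request_system_reset(void)
{
    NVIC_SystemReset();         /* writes SCB->AIRCR SYSRESETREQ; never returns */
}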
I have a weird problem on my hands, one I have never seen before.
I'm still trying to pinpoint it.
I have an STM32H753VIT with a LAN8742 Ethernet PHY connected to it.
I run LwIP in NO-SYS mode.
It only works fine after a cold power-up, but not after a hardware reset (button or ST-LINK probe).
It runs a simple TCP echo server. If it runs, I can ping it, and it responds to a TCP client.
But after a hardware reset, I no longer can ping it, and it does not respond as an echo server.
I noticed the green (link) LED on the interface will remain off after the reset.
I can see that the LAN8742_Init function executes successfully after a hardware reset, but it no longer sees RX data available in the low_level_input function.
On a Nucleo-H743ZI, I run the same code, and this also works after a hardware reset.
Note the code is only slightly different as pin mapping is slightly different.
Code for well working Nucleo-H743ZI:
https://github.com/bkht/Nucleo-H743ZI_LAN8742_LwIP_NO-SYS
Code for strange behaving STM32H753VIT:
https://github.com/bkht/STM32H753VIT_LAN8742_LwIP_NO-SYS
The nRST of the MCU is connected to the nRST of the LAN8742A, with a 100 nF capacitor to GND. I have a reset switch, and I also tried a pull-up resistor, but no luck.
With the added reset button I found that a longer hardware reset does not work either.
I'm thinking in the direction of timing or memory contents.
Has anybody ever seen such start-up behavior?
Solved: after the code that performs a software reset of the LAN8742A, I added one line to set the auto-negotiation bit (bit 12) in the BCR (0x00) register.
pObj->IO.WriteReg(pObj->DevAddr, LAN8742_BCR, LAN8742_BCR_AUTONEGO_EN);
I will update the code on GitHub for those who are interested.
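For context, a sketch of where that line sits relative to the soft reset, assuming the ST lan8742.c driver's usual register defines and IO function pointers (the function name here is illustrative, not from the linked repositories):

#include <stdint.h>
#include "lan8742.h"

int32_t lan8742_reset_and_start_autoneg(lan8742_Object_t *pObj)
{
    uint32_t bcr = 0;

    /* Software reset of the PHY (self-clearing bit). */
    pObj->IO.WriteReg(pObj->DevAddr, LAN8742_BCR, LAN8742_BCR_SOFT_RESET);

    do {
        pObj->IO.ReadReg(pObj->DevAddr, LAN8742_BCR, &bcr);
    } while (bcr & LAN8742_BCR_SOFT_RESET);     /* wait for reset to finish */

    /* The added line: re-enable auto-negotiation (BCR bit 12), which the
       soft reset alone did not leave enabled on this board. */
    pObj->IO.WriteReg(pObj->DevAddr, LAN8742_BCR, LAN8742_BCR_AUTONEGO_EN);

    return 0;
}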
I have been having an issue in my project with LwIP. I am using an STM32F4 MCU running with no OS. The network seems to run fine and the protocols all work, but then (usually a day or two later) the stack just stops running. It seems to happen when trying to make a new connection, but I can't confirm that because I haven't been able to locate the cause in the code.
Has anyone else come across this issue? I think it may be the same one this guy is seeing.
Do you call any LwIP functions from interrupt handlers, like UART, etc.?
How do you feed packets in/out of LwIP? Directly via interrupt handlers, or do you push them in from your "main loop" (see the sketch below)?
Lock-ups can also be a sign of a double free or use-after-free of pbufs.
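A minimal sketch of the "keep LwIP out of interrupt context" pattern for NO_SYS=1, assuming the usual ethernetif_input() helper from the port's ethernetif.c and a hypothetical rx_pending flag set by the Ethernet RX interrupt:

#include "lwip/netif.h"
#include "lwip/timeouts.h"

void ethernetif_input(struct netif *netif);  /* from the port's ethernetif.c */

extern struct netif gnetif;                  /* set up by netif_add() elsewhere */
volatile uint32_t rx_pending = 0;

void eth_rx_irq_hook(void)                   /* called from the real ETH ISR */
{
    rx_pending = 1;                          /* only flag it; no pbuf work here */
}

void network_main_loop(void)
{
    for (;;) {
        if (rx_pending) {
            rx_pending = 0;
            ethernetif_input(&gnetif);       /* pulls frames, hands pbufs to LwIP */
        }
        sys_check_timeouts();                /* drive TCP/ARP/DHCP timers */
        /* ... application work ... */
    }
}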
I have also experienced a project that was unstable, with weird random lock-ups, when running at the top rated frequency of the STM32. If I clocked my STM32 at 100 MHz instead of 120 MHz, all my problems went away...
I am using an STM32F4 on a Discovery board with FreeRTOS running on it.
I just started working with STM32 controllers and am trying to transfer data over UART. printf based on HAL_UART_Transmit works perfectly, but receiving data isn't working.
According to numerous tutorials it should be very simple. I create a project in STM32CubeMX, add all the necessary pieces (FreeRTOS, USART3, NVIC), enable the USART3 global interrupt, and generate the code.
I'm trying to add HAL_UART_Receive_IT(&huart3, &rx_char, 1); or something similar in a task, and it doesn't do anything. I suppose it flies through it very quickly and doesn't wait for characters to be sent from the terminal.
What am I missing here?
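For reference, HAL_UART_Receive_IT() only arms the receive interrupt and returns immediately; the byte shows up later in HAL_UART_RxCpltCallback(), where it is typically re-armed. A sketch, assuming the CubeMX-generated huart3 handle and a hypothetical rx_char variable:

#include "stm32f4xx_hal.h"

extern UART_HandleTypeDef huart3;
volatile uint8_t rx_char;

void start_uart_rx(void)
{
    HAL_UART_Receive_IT(&huart3, (uint8_t *)&rx_char, 1);      /* arm once */
}

void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART3) {
        /* rx_char now holds one received byte: hand it to a queue here */
        HAL_UART_Receive_IT(&huart3, (uint8_t *)&rx_char, 1);  /* re-arm */
    }
}

In a FreeRTOS task you would usually block on a queue or semaphore that this callback gives, rather than calling the receive function in a loop.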
I'm writing a small app that has a TCP/IP server. I am familiar with BSD sockets and POSIX threads, but I chose the CFSocket API because I wanted to do it in a non-blocking/async/run-loop style. I read a couple of tutorials and then started coding. Everything went fine at first: the code for accepting connections works, and I get the 'kCFSocketAcceptCallBack' event. Things are not so good when I start to receive data, though: I get EXC_BAD_ACCESS.
Code: http://www.nopaste.pl/18ka
It's my first 'hello world' app. I don't know Xcode very well, but it looks like the crash occurs in an internal 'select' function. My guess is that CFSocket runs another thread which does 'select' all the time. Can anybody help?
Whole project here: http://www.speedyshare.com/file/qbXjX/Playground.zip
If you run the app with no debugger attached, iOS will create a crash log that details the state of the stack.
You can retrieve the crash logs from the device with Xcode in the "Organizer" window.
EXC_BAD_ACCESS signals typically occur due to bad pointers.
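With CFSocket, a frequent source of bad pointers is mis-casting the 'data' argument of the callback: for kCFSocketAcceptCallBack it points to a CFSocketNativeHandle, while for kCFSocketDataCallBack it is a CFDataRef owned by CoreFoundation. A sketch (names here are illustrative, not taken from the linked project):

#include <CoreFoundation/CoreFoundation.h>
#include <stdio.h>

/* Callback of the form passed to CFSocketCreate()/CFSocketCreateWithNative(). */
static void SocketCallback(CFSocketRef s, CFSocketCallBackType type,
                           CFDataRef address, const void *data, void *info)
{
    if (type == kCFSocketAcceptCallBack) {
        /* 'data' points to the native handle of the accepted connection. */
        CFSocketNativeHandle fd = *(const CFSocketNativeHandle *)data;
        (void)fd;   /* wrap it in a new CFSocket or stream pair here */
    } else if (type == kCFSocketDataCallBack) {
        /* 'data' is a CFDataRef; don't free it, and copy the bytes if you
           need them after the callback returns. */
        CFDataRef bytes = (CFDataRef)data;
        fprintf(stderr, "received %ld bytes\n", (long)CFDataGetLength(bytes));
    }
}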