Turning off PSoC control register for more than 3 seconds - psoc

I am a novice at electronics as well as PSoC, so forgive me here... I have an application that uses a control register with 7 outputs. From what I understand, when I call I_Control_Reg_Write(0) I turn the outputs off, and if I call I_Control_Reg_Read() first, save the value, and later call I_Control_Reg_Write(value), that should turn the control register outputs back on?
To give you more insight into what I am doing: when the program first boots up, it does this...
TX_ena_Write(0);
I_Control_Reg_Write(0x02);
uint8_t mytemp = I_Control_Reg_Read();
I_Control_Reg_Write(mytemp & 0x0f);
Then, when turning the register off, I do this...
g_RegValue = I_Control_Reg_Read();
I_Control_Reg_Write(0);
To turn it on,
I_Control_Reg_Write(g_RegValue);
The code chunks above work if I turn the register off for 3 seconds and back on for 1 second... but once I leave it off for more than 3 seconds, I can't seem to turn it back on.
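For reference, the save/restore pattern described in the question boils down to something like the sketch below. It reuses the generated I_Control_Reg_Read/Write API names and the g_RegValue variable from the question; the helper names are made up for illustration.

#include <stdint.h>

/* I_Control_Reg_Read/Write come from the PSoC Creator generated component API */

static uint8_t g_RegValue;               /* output pattern saved before turning off */

void outputs_off(void)                   /* hypothetical helper, for illustration   */
{
    g_RegValue = I_Control_Reg_Read();   /* remember the current output pattern     */
    I_Control_Reg_Write(0);              /* drive all control-register outputs low  */
}

void outputs_on(void)
{
    I_Control_Reg_Write(g_RegValue);     /* restore the saved pattern               */
}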

Related

STM32 bootloader failure to erase

I am writing an external bootloader for the STM32F730Z8 (why? because I need a single Windows program that can either run the bootloader on the STM32, or use the STM32 to reprogram a connected ATF1508 for my client). I've done this before, using the info in AN3155 and AN2606. On lesser CPUs (e.g. the STM32L4P5) this has presented no difficulty. In this case, I try the same:
1. Cycle \RESET & BOOT0 to boot to supervisor (bootloader) mode.
2. Autobaud - succeeds.
3. Send 0x00 (Get) to get the list of commands - succeeds.
4. Send 0x01 (Get Version & Read Protection Status) - succeeds (version 49, rp and nt both 0).
5. Send 0x02 (Get ID) - succeeds (chip ID 0x0452).
6. Send 0x73 (Write Unprotect) - succeeds (i.e. I receive back two ACKs).
7. Send 0x44 (Extended Erase), intending only to erase sector 0.
This is where it fails. I get neither ACK nor NACK - it just times out. I don't even get to the second half of the extended-erase command where I send it the sector info. (On the STM32L4P5 it succeeds here easily and goes on to finish erasing, then to write code successfully.)
I've tried very long waits & repeat loops waiting for the ACK (many minutes). From past experience this should be fast; it is only the second stage, where I tell it how much flash to erase, that takes any significant time.
I've inspected the protection option areas of memory, at 0x1FFF0010, 0018, and they are unprotected, as per factory defaults.
I'm communicating over an FT231XS-R, using the D2XX driver calls. I can mess with the baud rates and such, but that only prevents it from autobauding...and we're doing that fine (9600/8/1/E). I've played with the D2XX SetTimeouts - if set too hasty that only screws up earlier commands. I'm wired to a 20 MHz crystal, and the application runs at 200 MHz, but my understanding is that the bootloader just runs at the internal RC clock rate.
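For reference, the port setup described above (9600/8/1/E over D2XX) corresponds to calls roughly like the following. This is only a sketch: the device index and the timeout values shown are examples, not the exact ones used here.

#include "ftd2xx.h"

static FT_HANDLE open_bootloader_port(void)
{
    FT_HANDLE h = NULL;
    FT_STATUS st = FT_Open(0, &h);                       /* device index 0 assumed        */
    if (st == FT_OK) st = FT_SetBaudRate(h, 9600);       /* autobaud target: 9600 baud    */
    if (st == FT_OK) st = FT_SetDataCharacteristics(h,   /* 8 data bits, 1 stop bit, even */
                              FT_BITS_8, FT_STOP_BITS_1, FT_PARITY_EVEN);
    if (st == FT_OK) st = FT_SetTimeouts(h, 5000, 1000); /* read/write timeouts in ms     */
    return (st == FT_OK) ? h : NULL;
}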
I'm certainly missing something stupid, but I didn't see it in the documentation. Help?
Jeff Casey / Rockfield Research Inc. / Las Vegas, NV
Fixed, disregard.
The fine print of AN3155 clued me in. In the description of the Write Unprotect command, it says that a system reset will be performed after completion. How did I miss this on the STM32L4P5? I just didn't read it. But why did it work then? In the really fine print on the next page, in a footnote to the flowchart, it says they were just foolin': a system reset is only performed for some products (..list omitted..), and for other STM32 products no system reset is performed.
My earlier success had the following sequence:
reboot-supervisor
autobaud
get
gvrp
gid
wpun
xerase
wpun
write
verify
reboot-user
Obviously that doesn't work for the F730. What works is:
reboot-super
autobaud
get
gvrp
gid
wpun
reboot-super
autobaud
get
gvrp
gid
xerase
reboot-super
autobaud
get
gvrp
gid
write
verify
reboot-user
(obviously I can skip a few of the repeated steps, like get-id, but basically it needed a reboot and re-autobaud.)
Note that I had to reboot-super a third time: the write attempt timed out after the xerase unless I went through the whole sequence again. Funny, though, the spec doesn't say anything about resetting after an erase. I cross-posted this question on the STM32 community site, and I'll do the same with this answer and ping them on this.
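For reference, the command framing AN3155 specifies for each of the steps above looks roughly like the sketch below: every command byte is sent followed by its complement, and the bootloader answers with ACK (0x79) or NACK (0x1F). serial_write() and serial_read() are hypothetical wrappers around whatever transport is in use (here, the D2XX FT_Write/FT_Read calls).

#include <stdbool.h>
#include <stdint.h>

#define BL_ACK  0x79
#define BL_NACK 0x1F

/* assumed to exist elsewhere: raw byte I/O over the FT231XS link */
extern int serial_write(const uint8_t *buf, int len);
extern int serial_read(uint8_t *buf, int len, int timeout_ms);

/* Send one bootloader command byte and wait for the ACK/NACK that follows. */
static bool bl_command(uint8_t cmd, int timeout_ms)
{
    uint8_t frame[2] = { cmd, (uint8_t)~cmd };   /* command byte + complement   */
    uint8_t reply = 0;

    if (serial_write(frame, 2) != 2)
        return false;
    if (serial_read(&reply, 1, timeout_ms) != 1)
        return false;                            /* timed out, as 0x44 did here */
    return reply == BL_ACK;
}

/* e.g. bl_command(0x73, 500) for Write Unprotect, then (on the F730) reboot
   and re-autobaud before issuing bl_command(0x44, ...) for Extended Erase.   */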
Thanks for reading, cheers. Jeff

Cannot exit sleep mode of bxCAN on STM32F429IGT in loopback mode

In short, the SLAK bit won't clear when the SLEEP bit is manually cleared. In detail:
I am trying to achieve a successful transmission in loopback mode before venturing into making a network. I had it working at one point, after a lot of documentation reading, but now I have a new issue. (Sadly I do not remember what I changed; maybe I played with the timings.)
After setting the peripheral to loopback mode and providing coherent bit-timing values (so I may have played with them, but they are back to being OK), I generate the code with Cube. The flow should first exit Sleep mode, enter Init mode, apply the settings, exit Init mode, and start Normal mode. According to the reference manual:
If software requests entry to initialization mode by setting the INRQ bit while bxCAN is in Sleep mode, it must also clear the SLEEP bit. [...] After the SLEEP bit has been cleared, Sleep mode is exited once bxCAN has synchronized with the CAN bus [...]. The Sleep mode is exited once the SLAK bit has been cleared by hardware
and
To synchronize, bxCAN waits until the CAN bus is idle, this means 11 consecutive recessive bits have been monitored on CANRX.
According to Wikipedia:
A 0 data bit encodes a dominant state, while a 1 data bit encodes a recessive state
So bxCAN needs to see the RX input high (recessive) for 11 consecutive bit times before hardware will clear SLAK. Checking the code generated by Cube, this is exactly what happens. I have pasted the essential part from stm32f4xx_hal_can.c here:
HAL_StatusTypeDef HAL_CAN_Init(CAN_HandleTypeDef *hcan)
{
  [...]
  /* Exit from sleep mode */
  CLEAR_BIT(hcan->Instance->MCR, CAN_MCR_SLEEP);

  /* Get tick */
  tickstart = HAL_GetTick();

  /* Check Sleep mode leave acknowledge */
  while ((hcan->Instance->MSR & CAN_MSR_SLAK) != 0U)
  {
    if ((HAL_GetTick() - tickstart) > CAN_TIMEOUT_VALUE)
    {
      [...]
      /*Error*/
    }
  }

  /* Request initialisation */
  SET_BIT(hcan->Instance->MCR, CAN_MCR_INRQ);

  /* Get tick */
  tickstart = HAL_GetTick();

  /* Wait initialisation acknowledge */
  while ((hcan->Instance->MSR & CAN_MSR_INAK) == 0U)
  {
    if ((HAL_GetTick() - tickstart) > CAN_TIMEOUT_VALUE)
    {
      [...]
      /*Error*/
    }
The code clears the SLEEP bit of CAN_MCR and then waits for the SLAK bit of CAN_MSR to be cleared by hardware. CAN_TIMEOUT_VALUE is set to 10 (milliseconds), which should easily give time for the 11 recessive bits to be seen.
And this is where I am stuck: SLAK will not clear. I tried removing the if ((HAL_GetTick() - tickstart) > CAN_TIMEOUT_VALUE) check so that the MCU waits indefinitely for SLAK to clear. That did not help.
Watching the RX bit of CAN_MSR, which gives the current level on RX, while waiting for SLAK to change, I noticed that it always reads 0. So I tried configuring the RX and TX GPIOs with pull-ups and pull-downs, but I think that has no effect since, in loopback mode, the RX of bxCAN is isolated from the GPIOs :) That also means the issue should not be on the hardware side (wiring and other external things, as opposed to internal hardware). This leads me to believe that something is going wrong in the global HAL_Init() or MX_GPIO_Init() or elsewhere, but since it is all generated by Cube and I did not change anything, I don't see how it could cause SLAK to stay set.
My idea was maybe to do a software reset of something, but I don't know where that path will lead, since powering the chip off and on does not resolve the issue...
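For anyone stepping through this in a debugger, the handshake that the HAL excerpt above performs reduces to the register-level sequence sketched below (using the CMSIS definitions from stm32f4xx.h and assuming CAN1; timeout handling is omitted). The first while loop is the one that never terminates in the situation described.

/* request exit from Sleep mode */
CAN1->MCR &= ~CAN_MCR_SLEEP;

/* hardware clears SLAK only after it has seen 11 consecutive recessive
   (high) bits on its RX input */
while (CAN1->MSR & CAN_MSR_SLAK)
    ;

/* then request Initialization mode and wait for its acknowledge */
CAN1->MCR |= CAN_MCR_INRQ;
while ((CAN1->MSR & CAN_MSR_INAK) == 0U)
    ;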

Any way to block while reading an XBox 360 controller other than HID API?

I'm trying to read from my XBox 360 controller without polling it. (To be precise, I'm actually using a Logitech F310, but my Windows 10 PC sees it as an XBox 360 controller.) I've written some rather nasty HID code that uses overlapping I/O to block in a thread on two events, one that indicates there is a report ready to read from the HID device, the other indicating the UI thread has requested the HID thread to exit. That works fine, but the HID driver behaves somewhat differently than XInput does. In particular, it consolidates the two triggers into a single value, only passing their difference (on the curious claim that games expect HID values to be 0x80 when the player's finger is off the control). XInput treats them as two distinct values, which is a big improvement. Also, XInput reports the hat switches as four bits, which means you can actually get ten states out of it: unpressed, N, NE, E, SE, S, SW, W, NW, and all-down (that last might be hard to use successfully, but at least it's there if you want it; I've been using it to exit my polling loop).
The downside, to me, of XInput is that there appears to be no way to block on a read request until the controller changes one of its values or buttons. As an HID device, the ReadFile call will block (more exactly, WaitForMultipleObjects blocks until there is data available). XInput seems to anticipate polling. For a game that would naturally poll the controller as often as it updates the game state (maybe once per video frame displayed, for example), that makes sense. But if you want to use the controller for some other purpose (I'm working on a theatrical application), you might want a purely asynchronous system like the one the HID API supplies. But, again, the HID API combines the two triggers.
Now, when you read the device with XInput, not only do you get the state of all the controls, you also get a packet number. MSDN says the packet number only changes when the state of a control changes. That way, if consecutive packet numbers are the same, you don't have to bother with any processing after the first one, because you know the controller state hasn't changed. But you are still polling which, to me, is somewhat vulgar.
What intrigues me, however, is that when I put a big delay in between my polls (100ms) I can see that the packet numbers go up by more than one when the value controls (the triggers or sticks) are being moved. This, I think, suggests that the device is sending packets without waiting to be polled, and that I am only getting the most recent packet each time I poll. If that is the case, it seems that I ought to be able to block until a packet is sent, and react only when that happens, rather than having to poll at all. But I can't find any indication that this is an option. Because I can block with the HID API, I don't want to give up without trying (including asking for advice here).
Short of writing my own driver for the controller (which I'm not sure is even an option without proprietary documentation), does anyone know how I can use overlapping I/O (or any other blocking method) to read the XBox 360 controller the way XInput does, with the triggers as separate values, and the hat as four buttons?
Below is some code I wrote that reads the controller and shows that the packet numbers can jump by more than one between reads:
#include <Windows.h>
#include <Xinput.h>
#include <stdio.h>
#define MAX_CONTROLLERS 4
int main()
{
    DWORD userIndex;
    XINPUT_STATE xs;
    XINPUT_VIBRATION v;
    XInputEnable(TRUE);
    // Which one are we?
    for (userIndex = 0; userIndex < XUSER_MAX_COUNT; ++userIndex)
        if (XInputGetState(userIndex, &xs) == ERROR_SUCCESS)
            break;
    if (userIndex == XUSER_MAX_COUNT)
    {
        printf("Couldn't find an Xbox 360 controller.\n");
        getchar();
        return -1;
    }
    printf("Using controller #%1d.\n", userIndex);
    while (TRUE)
    {
        DWORD res = XInputGetState(userIndex, &xs);
        printf("%5d %6d: %3d %3d %3d %3d %3d %3d 0x%04X\n",
               res,
               xs.dwPacketNumber,
               xs.Gamepad.bLeftTrigger & 0xFF,
               xs.Gamepad.bRightTrigger & 0xFF,
               xs.Gamepad.sThumbLX & 0xFF,
               xs.Gamepad.sThumbLY & 0xFF,
               xs.Gamepad.sThumbRX & 0xFF,
               xs.Gamepad.sThumbRY & 0xFF,
               xs.Gamepad.wButtons);
        if (xs.Gamepad.wButtons == 0x000F) // mash down the hat
            break;
        Sleep(100);
    }
    getchar();
    return 0;
}
Please note that DirectInput isn't much help, as it also combines the triggers into one value.
Thanks!
Not sure there is any advantage to this, but you could write a thread that polls on a regular interval and sets a semaphore (or some other signal) when the state has changed. Your main thread could then block waiting for the signal from the polling thread. There might not be any real advantage to this scheme, though, because on some controllers the thumbstick values change slightly every frame whether you move them or not (noise). You could of course ignore small changes and only signal your semaphore when a large change occurs.
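That idea can be sketched roughly as below. This is only an outline, not a drop-in solution: the 4 ms poll interval is arbitrary, g_latest is shared without a lock, and error handling is omitted. Link against XInput.lib.

#include <Windows.h>
#include <Xinput.h>
#include <stdio.h>

static HANDLE g_stateChanged;   // signaled by the poller when the packet number changes
static volatile LONG g_quit;    // set to 1 to stop the poller
static XINPUT_STATE g_latest;   // last state captured by the poller

static DWORD WINAPI PollerThread(LPVOID param)
{
    DWORD user = (DWORD)(ULONG_PTR)param;
    DWORD lastPacket = 0;
    while (!g_quit)
    {
        XINPUT_STATE xs;
        if (XInputGetState(user, &xs) == ERROR_SUCCESS &&
            xs.dwPacketNumber != lastPacket)
        {
            lastPacket = xs.dwPacketNumber;
            g_latest = xs;            // a real version would guard this with a lock
            SetEvent(g_stateChanged); // wake whoever is blocked on the event
        }
        Sleep(4);                     // arbitrary poll rate (~250 Hz)
    }
    return 0;
}

int main(void)
{
    DWORD user = 0;                   // assume controller 0 for brevity
    g_stateChanged = CreateEvent(NULL, FALSE, FALSE, NULL);  // auto-reset event
    HANDLE poller = CreateThread(NULL, 0, PollerThread,
                                 (LPVOID)(ULONG_PTR)user, 0, NULL);

    for (int i = 0; i < 100; ++i)
    {
        WaitForSingleObject(g_stateChanged, INFINITE);       // blocks until a change
        printf("packet %u buttons 0x%04X\n",
               (unsigned)g_latest.dwPacketNumber, g_latest.Gamepad.wButtons);
    }

    InterlockedExchange(&g_quit, 1);
    WaitForSingleObject(poller, INFINITE);
    CloseHandle(poller);
    CloseHandle(g_stateChanged);
    return 0;
}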

Raspberry Pi / GPIO.RISING triggers callback on .BOTH

I have a Raspberry Pi running Raspbian via NOOBS. I have a button wired to pins 1 and 11. I'm attempting to use GPIO.add_event_detect with GPIO.RISING to call a function upon the button press. (The callback turns on an LED for 2 seconds, and then turns it off.)
I'm finding that the GPIO.RISING detection is calling the callback on both the button press (pin 11 goes from 0 to 1) AND the button release (pin 11 goes from 1 to 0). The LED is being turned on twice, exactly as it would be if I were using GPIO.BOTH.
I don't think this is a hysteresis / noisy-signal issue, because I can hold the button down for many seconds, then let go and see the callback fire again.
Here is the example code:
import RPi.GPIO as GPIO  ## Import GPIO library
import time

# configure all of the inputs / outputs properly
def config():
    # initialize the GPIO pin numbering
    GPIO.setmode(GPIO.BOARD)  ## Use board pin numbering
    # set up output pins
    GPIO.setup(8, GPIO.OUT)   ## Set up board pin 8 as an output
    GPIO.setup(10, GPIO.OUT)
    GPIO.setup(12, GPIO.OUT)
    # initialize the input for the button
    GPIO.setup(11, GPIO.IN)
    # create the button-watching function
    GPIO.add_event_detect(11, GPIO.RISING, callback=execute_lights, bouncetime=800)

# the light-turning-on function. One press turns yellow. Second press turns green, then off.
def execute_lights(channel):
    print "executing lights: "
    # Turn on the light we want
    GPIO.output(8, True)
    # turn green off after 2 seconds
    time.sleep(2)
    GPIO.output(8, False)
Is there a software workaround that I can use to address this issue?
For whatever reason, the implementation of bouncetime is very strange.
If you press and release your button WITHIN your set bounce time of 800 ms, it should work OK. If you hold it longer, then you will sometimes get a trigger on the release as well. I had the same issue; I thought 'bouncetime' was the time during which the system ignored all other inputs, like the settling time for a switch. It's not. So as long as you press and release your button within your set bouncetime, you should find it works OK.
Nick

Improve Arduino WiFly latency using a different protocol

I have an Arduino with a WiFly shield, everything works perfectly!
The thing is, when I want to turn on an LED, I open this URL in my web browser:
192.168.1.120/ledon/
(I made a program which handles this URL).
But when I make a request, I must wait 1-2 seconds before I can make another one.
That is very slow, and if I want to control motors, it is just too long.
So, instead of using an HTTP request, I want to use something else which can be faster.
Something "super fast".
I just need to tell the Arduino:
- go direction 1
- go direction 2...
- turn on LED
- turn off LED
- tell me the light level (which returns an int)
So it is just about a small amount of data.
Can you show me a way? (Telnet, UDP, OSC?)
For your Arduino, have a look at just using raw sockets, or even at encoding the data in the requested URL.
You shouldn't see more than about 0.8 seconds of lag at worst.
How big is your program for handling the URL /ledon/?
Using raw packets (usually TCP) from your computer to the Arduino is sometimes faster, but you may need to write an application to handle the packets on the PC.
There is also the option of using JavaScript to pass data back and forth, e.g. for reading the light level.
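To illustrate the "raw packets" suggestion, here is a sketch of a tiny PC-side client that keeps one TCP connection open to the WiFly and sends single-byte commands instead of issuing a new HTTP request per action. The port number and the command bytes ('1', '0', 'l') are made up for illustration; the Arduino side would need matching code that reads each byte off the socket and acts on it. POSIX sockets are shown; on Windows you would add the usual Winsock setup.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(2000);                        /* example port number        */
    inet_pton(AF_INET, "192.168.1.120", &addr.sin_addr);  /* the WiFly's address        */

    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof addr) != 0) {
        perror("connect");
        return 1;
    }

    write(fd, "1", 1);                                    /* example command: LED on    */
    write(fd, "0", 1);                                    /* example command: LED off   */
    write(fd, "l", 1);                                    /* example command: ask level */

    char level[16];
    ssize_t n = read(fd, level, sizeof level - 1);        /* read the reply, if any     */
    if (n > 0) {
        level[n] = '\0';
        printf("light level: %s\n", level);
    }

    close(fd);
    return 0;
}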