Any way to block while reading an XBox 360 controller other than HID API? - hid

I'm trying to read from my XBox 360 controller without polling it. (To be precise, I'm actually using a Logitech F310, but my Windows 10 PC sees it as an XBox 360 controller.) I've written some rather nasty HID code that uses overlapping I/O to block in a thread on two events, one that indicates there is a report ready to read from the HID device, the other indicating the UI thread has requested the HID thread to exit. That works fine, but the HID driver behaves somewhat differently than XInput does. In particular, it consolidates the two triggers into a single value, only passing their difference (on the curious claim that games expect HID values to be 0x80 when the player's finger is off the control). XInput treats them as two distinct values, which is a big improvement. Also, XInput reports the hat switches as four bits, which means you can actually get ten states out of it: unpressed, N, NE, E, SE, S, SW, W, NW, and all-down (that last might be hard to use successfully, but at least it's there if you want it; I've been using it to exit my polling loop).
The downside, to me, of XInput is that there appears to be no way to block on a read request until the controller changes one of its values or buttons. As an HID device, the ReadFile call will block (more exactly, WaitForMultipleObjects blocks until there is data available). XInput seems to anticipate polling. For a game that would naturally be written to poll the controller as often as it updates the game state (maybe once for each new video frame displayed, for example), that makes sense. But if you want to use the controller for some other purpose (I'm working on a theatrical application), you might want a purely asynchronous system like the one the HID API supplies. But, again, the HID API combines the two value triggers.
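For reference, the blocking wait I'm describing looks roughly like this (a simplified sketch, not my actual code; hDevice is the HID handle opened with FILE_FLAG_OVERLAPPED and hExitEvent is the event my UI thread signals to stop the reader):
#include <Windows.h>
// Sketch of the reader thread body; hDevice was opened with FILE_FLAG_OVERLAPPED.
static void HidReaderLoop(HANDLE hDevice, HANDLE hExitEvent)
{
    BYTE report[64];
    DWORD bytesRead;
    OVERLAPPED ov = { 0 };
    ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);   // manual-reset, unsignaled

    for (;;)
    {
        if (!ReadFile(hDevice, report, sizeof(report), &bytesRead, &ov) &&
            GetLastError() != ERROR_IO_PENDING)
            break;                                      // real I/O failure

        HANDLE waits[2] = { ov.hEvent, hExitEvent };
        if (WaitForMultipleObjects(2, waits, FALSE, INFINITE) != WAIT_OBJECT_0)
        {
            CancelIo(hDevice);                          // UI thread asked us to exit
            break;
        }

        GetOverlappedResult(hDevice, &ov, &bytesRead, FALSE);
        // ...parse the HID input report in report[]...
        ResetEvent(ov.hEvent);
    }
    CloseHandle(ov.hEvent);
}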
Now, when you read the device with XInput, not only do you get the state of all the controls, you also get a packet number. MSDN says the packet number only changes when the state of a control changes. That way, if consecutive packet numbers are the same, you don't have to bother with any processing after the first one, because you know the controller state hasn't changed. But you are still polling which, to me, is somewhat vulgar.
What intrigues me, however, is that when I put a big delay in between my polls (100ms) I can see that the packet numbers go up by more than one when the value controls (the triggers or sticks) are being moved. This, I think, suggests that the device is sending packets without waiting to be polled, and that I am only getting the most recent packet each time I poll. If that is the case, it seems that I ought to be able to block until a packet is sent, and react only when that happens, rather than having to poll at all. But I can't find any indication that this is an option. Because I can block with the HID API, I don't want to give up without trying (including asking for advice here).
Short of writing my own driver for the controller (which I'm not sure is even an option without proprietary documentation), does anyone know how I can use overlapping I/O (or any other blocking method) to read the XBox 360 controller the way XInput does, with the triggers as separate values, and the hat as four buttons?
Below is some code I wrote that reads the controller and shows that the packet numbers can jump by more than one between reads:
#include <Windows.h>
#include <Xinput.h>
#include <stdio.h>

#pragma comment(lib, "Xinput.lib")  // link the XInput import library (MSVC)

int main(void)
{
    DWORD userIndex;
    XINPUT_STATE xs;

    XInputEnable(TRUE);

    // Find the first connected controller.
    for (userIndex = 0; userIndex < XUSER_MAX_COUNT; ++userIndex)
        if (XInputGetState(userIndex, &xs) == ERROR_SUCCESS)
            break;
    if (userIndex == XUSER_MAX_COUNT)
    {
        printf("Couldn't find an Xbox 360 controller.\n");
        getchar();
        return -1;
    }
    printf("Using controller #%lu.\n", userIndex);

    while (TRUE)
    {
        DWORD res = XInputGetState(userIndex, &xs);
        // Only the low byte of each thumbstick axis is shown here;
        // the packet number is what this demo is really about.
        printf("%5lu %6lu: %3d %3d %3d %3d %3d %3d 0x%04X\n",
               res,
               xs.dwPacketNumber,
               xs.Gamepad.bLeftTrigger & 0xFF,
               xs.Gamepad.bRightTrigger & 0xFF,
               xs.Gamepad.sThumbLX & 0xFF,
               xs.Gamepad.sThumbLY & 0xFF,
               xs.Gamepad.sThumbRX & 0xFF,
               xs.Gamepad.sThumbRY & 0xFF,
               xs.Gamepad.wButtons);
        if (xs.Gamepad.wButtons == 0x000F)  // all four D-pad directions pressed: exit
            break;
        Sleep(100);
    }
    getchar();
    return 0;
}
Please note that DirectInput isn't much help, as it also combines the triggers into one value.
Thanks!

Not sure there is any advantage to this, but you could write a thread that polls on a regular interval and sets a semaphore (or some other signal) when the state has changed. Your main thread could then block waiting for the signal from the polling thread. There might not be any real gain, though, because on some controllers the thumbstick values change slightly every frame whether you move them or not (noise). You could of course ignore small changes and only signal your semaphore when a large change occurs. A rough sketch of that idea is below.
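Something like this (Win32 + XInput; the deadzone value, the event name, and the 8 ms poll period are arbitrary choices for illustration):
#include <Windows.h>
#include <Xinput.h>
#include <stdlib.h>

#define STICK_DEADZONE 2000                  /* ignore jitter smaller than this */

static HANDLE        g_stateChanged;         /* auto-reset event, created by the main thread */
static volatile LONG g_quit = 0;             /* set non-zero to stop the poller */

static BOOL MovedEnough(const XINPUT_GAMEPAD *a, const XINPUT_GAMEPAD *b)
{
    return a->wButtons     != b->wButtons     ||
           a->bLeftTrigger != b->bLeftTrigger ||
           a->bRightTrigger != b->bRightTrigger ||
           abs(a->sThumbLX - b->sThumbLX) > STICK_DEADZONE ||
           abs(a->sThumbLY - b->sThumbLY) > STICK_DEADZONE ||
           abs(a->sThumbRX - b->sThumbRX) > STICK_DEADZONE ||
           abs(a->sThumbRY - b->sThumbRY) > STICK_DEADZONE;
}

static DWORD WINAPI PollThread(LPVOID param)
{
    DWORD userIndex = (DWORD)(ULONG_PTR)param;
    XINPUT_STATE last = { 0 }, now;

    while (!g_quit)
    {
        if (XInputGetState(userIndex, &now) == ERROR_SUCCESS &&
            now.dwPacketNumber != last.dwPacketNumber &&
            MovedEnough(&now.Gamepad, &last.Gamepad))
        {
            last = now;
            SetEvent(g_stateChanged);        /* wake whoever is blocked */
        }
        Sleep(8);                            /* roughly 125 Hz poll */
    }
    return 0;
}
The main thread would create g_stateChanged with CreateEvent(NULL, FALSE, FALSE, NULL), start the thread with CreateThread, and block on WaitForSingleObject(g_stateChanged, INFINITE) instead of polling itself.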

Related

Syncing of buffer-transmission with ESP32, I2S MEMS-mic and SD-card (FreeRTOS, PlatformIO, ESP-PROG)

I know this forum dislikes "open" questions like this; nevertheless I'd like somebody to help untie the knot in my head. Much appreciated.
The goal is simple:
read a stereo 32-bit, 44100 S/s I2S signal from 2 Adafruit SPH0645 mics
create a WAV header and store the data on an SD card
I've been at this for a few days now and I know that this will be much more complicated than I originally thought. The main reason: signal quality. As in most tutorials on this subject, the simplest "hello world" for these mics is a polling loop for I2S samples: poll, fill a buffer, output via serial or write to the SD card. This returns a choppy, noisy, sped-up version of the real audio. The filling of the internal DMA buffers can be taken as constant, but the rest is mostly chaos, so:
how do I sync these DMA buffers with the rest of my code?
From experience with the STM32 HAL I'd imagine some register which can be set to throw an interrupt whenever a buffer is full, or an event which can be sent between tasks via queues. Examples on this subject either poll in a main loop, in mono, at an abysmal sample rate and bit depth, or use pages of overkill code and never address what it does ("just copy and it works"), which is not good. Does the ESP32-Arduino framework provide some way to do this properly? The Espressif documentation isn't something to look forward to, since some of their I2S interface functions don't even work (if you are researching this topic as well, you too might have noticed that i2s_read only returns zeros). Just a hint in the right direction would help; I'm writing my own code anyway. Interrupts? Events? Timers? Polling for full buffers? Only you might know.
have a good one, thx
Thanks to https://github.com/atomic14/ I now have an answer for a syncing method which works very well. This method has also been tried at https://esp32.com/viewtopic.php?t=12546 by someone who also didn't fully understand what was going on: the Espressif I2S interface offers an event which is triggered every time one of the specified DMA buffers has received a full set of data, i.e. is full. It looks like this:
while (<your condition>) {
    i2s_event_t evt;
    if (xQueueReceive(<your queue>, &evt, portMAX_DELAY) == pdPASS) {
        if (evt.type == I2S_EVENT_RX_DONE) {
            size_t bytesRead = 0;
            do {
                // read data via i2s_read or i2s_read_bytes
            } while (bytesRead > 0);
        }
    }
}
No sample data is stored in this queue, just an event flag which can then be used to synchronize the DMA filling with further buffering/calculating/sending of the read data.
HOWEVER, this only works if you install the I2S driver with a specific setup. Instead of using
i2s_driver_install(I2S_NUM_0, &i2s_config, 0, NULL);
in your setup, you activate the "affinity" for events by passing a queue handle and a length:
i2s_driver_install(I2S_NUM_0, &i2s_config, 4, &<your queue>);
Hope this helps getting started; it sure did help me. A fuller sketch of the whole setup is below.
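To make the whole flow concrete, here is a fuller sketch of that setup (driver enum names vary a little between ESP-IDF versions; the pin mapping, buffer sizes and rawBuffer are placeholders for illustration, and the driver itself creates the queue and hands the handle back through the last argument of i2s_driver_install):
#include "driver/i2s.h"
#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"

static QueueHandle_t i2sEventQueue;          // filled in by i2s_driver_install()

void i2sSetup(void)
{
    i2s_config_t i2s_config = {
        .mode = (i2s_mode_t)(I2S_MODE_MASTER | I2S_MODE_RX),
        .sample_rate = 44100,
        .bits_per_sample = I2S_BITS_PER_SAMPLE_32BIT,
        .channel_format = I2S_CHANNEL_FMT_RIGHT_LEFT,
        .communication_format = I2S_COMM_FORMAT_I2S,
        .intr_alloc_flags = 0,               // default interrupt allocation
        .dma_buf_count = 4,
        .dma_buf_len = 1024,
    };

    // Passing 4 and &i2sEventQueue makes the driver post I2S_EVENT_RX_DONE
    // events (up to 4 queued) instead of leaving you to poll blindly.
    i2s_driver_install(I2S_NUM_0, &i2s_config, 4, &i2sEventQueue);

    // i2s_set_pin(I2S_NUM_0, &pin_config);  // pin mapping omitted in this sketch
}

void i2sReaderTask(void *arg)
{
    static int32_t rawBuffer[1024];
    i2s_event_t evt;

    for (;;) {
        if (xQueueReceive(i2sEventQueue, &evt, portMAX_DELAY) == pdPASS &&
            evt.type == I2S_EVENT_RX_DONE) {
            size_t bytesRead = 0;
            do {                             // drain everything the DMA has finished with
                i2s_read(I2S_NUM_0, rawBuffer, sizeof(rawBuffer), &bytesRead, 0);
                // ...append rawBuffer to the WAV file on the SD card here...
            } while (bytesRead > 0);
        }
    }
}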

What is the meaning of CAN bus function mode initializing settings for STM32?

I want to understand the meaning of the following function mode definitions. There is some explanation in the library, but I don't understand it because the explanations are very short and not enough. I searched on the net and couldn't find any information about them.
CAN_InitStructure.CAN_TTCM = DISABLE;
CAN_InitStructure.CAN_ABOM = DISABLE;
CAN_InitStructure.CAN_AWUM = DISABLE;
CAN_InitStructure.CAN_NART = ENABLE;
CAN_InitStructure.CAN_RFLM = DISABLE;
CAN_InitStructure.CAN_TXFP = ENABLE;
These are the names of the bits located in the CAN master control register (CAN_MCR). So, the proper source for their meaning is the reference manual. My following answer will be somewhat copy & paste from the reference manual, but I will try to explain these bits in detail.
TTCM (Time triggered communication mode): This bit activates the Time Triggered Communication (TTCAN) mode, which is an extension to the CAN standard. I don't know much about TTCAN, but as I understand, it assigns time windows to messages to satisfy some real-time requirements. So, normally this bit should remain 0.
ABOM (Automatic bus-off management): If the transmit error counter (TEC) becomes greater than 255, the CAN hardware switches to bus-off state. To recover, it must wait for the recovery sequence, 128 occurrences of 11 consecutive recessive bits. Only after that, the CAN hardware may return to the normal operating state. This bit controls the returning behavior. If it's 1, returning to normal state is automatic. Otherwise, software should make the request, provided that the recovery sequence has been observed.
AWUM (Automatic wakeup mode): The CAN module can be in one of 3 modes: Initialization mode, normal mode or sleep (low power) mode. Sleep mode is requested by the software. However, you have 2 options to exit sleep mode. If this bit is 0, then you have to exit sleep mode manually. You may enable CAN wakeup interrupt to inform you about bus activity, then exit the sleep mode in ISR. But if this bit is 1, the hardware returns to normal mode automatically when it detects bus activity.
NART (No automatic retransmission): Normally, CAN hardware retries to transmit a message if its previous attempts fail, because of arbitration lost etc. But if you make this bit 1, the transmitter does not retry. This is required when you use Time Triggered Communication (TTCAN). Otherwise, you should keep this bit 0.
RFLM (Receive FIFO locked mode): Your receive FIFO is 3 mailboxes deep, meaning it can store a maximum of 3 messages before it is overrun. This bit controls what happens in case of a FIFO overrun. The default behavior is to keep the oldest 2 messages and the newest one. For example, if you received 5 messages, the FIFO keeps messages 1, 2 & 5. However, if you make this bit 1, the FIFO keeps messages 1, 2 & 3 and discards the new arrivals.
TXFP (Transmit FIFO priority): You have 3 transmit mailboxes. When you fill more than one, the hardware must decide which one to transmit first. Normally, one can assume that a message with a lower ID number is more important and should be transmitted first. But if you want to transfer them in a first-comes-first-served fashion for some reason, you need to make this bit 1. Of course, this is just a local priority. On the physical bus, the messages with lower ID always have priority.
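For reference, these SPL fields map one-to-one onto bits in CAN_MCR, so the same configuration can be expressed as direct register writes. A rough sketch, assuming the usual CMSIS device-header bit definitions and that the peripheral is in initialization mode (which is what CAN_Init itself arranges before touching these bits):
#include "stm32f10x.h"   /* or the matching device header for your part */

/* Sketch only: mirrors the SPL init above with direct CAN_MCR writes.
 * Must run while the bxCAN is in initialization mode (INRQ set, INAK seen). */
static void can_mode_bits_sketch(void)
{
    CAN1->MCR &= ~(CAN_MCR_TTCM | CAN_MCR_ABOM | CAN_MCR_AWUM |
                   CAN_MCR_NART | CAN_MCR_RFLM | CAN_MCR_TXFP);
    CAN1->MCR |=  (CAN_MCR_NART | CAN_MCR_TXFP); /* no auto-retransmission, FIFO tx order */
}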

Triggering Interrupt for any byte received

I'm trying to get code to work that triggers an interrupt for a variable data size coming to the RX input of an STM32 board (not a Discovery) in DMA circular mode, e.g. CONNECTED\r\nDATAREQUEST\r\n
So far so good, I'm being able to receive data and all, while also triggering the DMA interrupt.
I will then create a sub RX message processing buffer breaking down each \r\n to a different char array pointer.
msgProcessingBuffer[0] = "COM_OK"
msgProcessingBuffer[1] = "DATAREQUEST"
msgProcessingBuffer[n] = "BlahBlahBlah"
My problem comes actually from the trigger of the interrupt. I would like to trigger the interrupt from any amount of data and processing any data received.
If I use the interrupt request below:
HAL_UART_Receive_DMA(&huart1,uart1RxMsgBuffer, 30);
The input buffer will take 30 bytes to trigger the interrupt, but that's too much time to wait because I would like to process the RX data as soon as a \r\n is found in the string. So I cannot wait for the full buffer to fill to begin processing it.
If I use the interrupt request below:
HAL_UART_Receive_DMA(&huart1, uart1RxMsgBuffer, 1);
It will trigger as I want, but there is no point in using DMA in this case because it will trigger the interrupt for every byte and will create a buffer of just 1 byte (duh), just like in "polling mode".
So my question is, how do I trigger the DMA for the first byte received but still receive/process all data that might come after it in a single interrupt? I believe I might be missing some basic concept here.
Best regards,
Blukrr
In short: the HAL/SPL libraries don't provide such features.
Generally, some MCUs, for example the STM32F091VCT6, have hardware support for Modbus and byte-flow analysis (an interrupt on receiving some control byte), so if you use such an MCU in your project, you can configure reception via circular DMA with an interrupt on receiving a '\r' or '\n' byte.
And I repeat: HAL and SPL don't support these features; you can use them only through direct work with the registers (see the reference manuals).
I was taking a look at some other forums and I found a workaround for this problem there.
I'm using the DMA in circular mode and I monitor NDTR, which updates its value every time a byte is received through the UART interface. Then I cyclically call a function (in the while(1) loop or in a periodic interrupt handler) that breaks down each message part, always looking for \r\n chars. This function also saves the current NDTR value so it can tell whether it has changed since the last while(1) cycle. If NDTR has changed since the last cycle, I wait a couple of milliseconds to receive the remaining message (the UART is too slow to transmit) and then save the received messages in a char buffer array for post-processing.
If you create a circular DMA buffer of about 50 bytes (HAL_UART_Receive_DMA(&huart1, uart1RxMsgBuffer, 50)), I think that's enough to compensate for any fluctuations in the program cycle. A sketch of this approach follows below.
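A minimal sketch of that NDTR-polling approach (names reused from this thread where possible; the STM32F4 HAL header and the ProcessRxByte helper are assumptions for illustration, and reception is assumed to have been started elsewhere with HAL_UART_Receive_DMA(&huart1, uart1RxMsgBuffer, RX_BUF_LEN) on a circular-mode stream):
#include "stm32f4xx_hal.h"                  /* assumption: F4 family HAL */

#define RX_BUF_LEN 50

extern UART_HandleTypeDef huart1;
extern uint8_t uart1RxMsgBuffer[RX_BUF_LEN];

void ProcessRxByte(uint8_t c);              /* hypothetical: collects bytes, splits on \r\n */

static uint16_t lastPos = 0;                /* index of the next unread byte */

/* Call this from the while(1) loop or a periodic interrupt. */
void PollUartRx(void)
{
    /* NDTR counts down from RX_BUF_LEN; turn it into the DMA write index. */
    uint16_t head = RX_BUF_LEN - __HAL_DMA_GET_COUNTER(huart1.hdmarx);

    while (lastPos != head)                 /* consume every newly received byte */
    {
        ProcessRxByte(uart1RxMsgBuffer[lastPos]);
        lastPos = (lastPos + 1) % RX_BUF_LEN;
    }
}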
In the meantime I opened a ticket with ST and they confirmed what you just said; they also added:
SOLUTION PROPOSED BY SUPPORTER - 14/4/2016 16:45:22 :
Hi Gilberto,
The DMA interrupt requests available are listed on Table 50 of the Reference Manual, RM0090, http://www.st.com/web/en/resource/technical/document/reference_manual/DM00031020.pdf. Therefore, basically, the DMA interrupt can only trigger at the end of one of these events.
• Half-transfer reached
• Transfer complete
• Transfer error
• Fifo error (overrun, underrun or FIFO level error)
• Direct mode error
Getting a DMA interrupt to trigger upon reception of a specific character in your receive data stream is not possible. You may want to trigger the interrupt when you receive packets of say 30 bytes each and then process the datastring to check if your \r\n chars have arrived so you can process the data block.
Regards,
MCU Tech Support
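Following that suggestion, one way to do it with the standard HAL callbacks rather than raw registers (a sketch; ScanForCrLf is a hypothetical helper that looks for \r\n in each chunk, and the F4 HAL header is an assumption):
#include "stm32f4xx_hal.h"                  /* assumption: F4 family HAL */

#define RX_CHUNK 30

extern UART_HandleTypeDef huart1;
static uint8_t uart1RxMsgBuffer[RX_CHUNK];

void ScanForCrLf(uint8_t *buf, uint16_t len);   /* hypothetical \r\n splitter */

void StartReception(void)
{
    /* With the DMA stream configured in circular mode, this keeps running
     * forever: HAL fires the half-complete callback after 15 bytes and the
     * complete callback after 30. */
    HAL_UART_Receive_DMA(&huart1, uart1RxMsgBuffer, RX_CHUNK);
}

void HAL_UART_RxHalfCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart == &huart1)
        ScanForCrLf(uart1RxMsgBuffer, RX_CHUNK / 2);                /* first half ready  */
}

void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart == &huart1)
        ScanForCrLf(uart1RxMsgBuffer + RX_CHUNK / 2, RX_CHUNK / 2); /* second half ready */
}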

Very few write cycles in stm32f4

I'm using a STM32F401VCT6U "discovery" board, and I need to provide a way for the user to write addresses in memory at runtime.
I wrote what can be simplified to the following function:
uint8_t Write(uint32_t address, uint8_t* values, uint8_t count)
{
    uint8_t index;
    for (index = 0; index < count; ++index) {
        if (IS_FLASH_ADDRESS(address + index)) {
            /* flash write */
            FLASH_Unlock();
            if (FLASH_ProgramByte(address + index, values[index]) != FLASH_COMPLETE) {
                return FLASH_ERROR;
            }
            FLASH_Lock();
        } else {
            /* ram write */
            ((uint8_t*)address)[index] = values[index];
        }
    }
    return NO_ERROR;
}
In the above, address is the base address, values is a buffer of size at least count which contains the bytes to write to memory, and count is the number of bytes to write.
Now, my problem is the following: when the above function is called with a base address in flash and count=100, it works normally the first few times, writing the passed values buffer to flash. After those first few calls, however, I cannot write just any value anymore: I can only reset bits in the values in flash, e.g. an attempt to write 0xFF to 0x7F will leave 0x7F in the flash, while writing 0xFE to 0x7F will leave 0x7E, and 0x00 to any value will be successful (but no other value will be writable to the address afterwards).
I can still write normally to other addresses in the flash by changing the base address, but again only a few times (two or three calls with count=100).
This behaviour suggests that the maximum write count of the flash has been reached, but I cannot imagine it can be so fast. I'd expect at the very least 10,000 writes before exhaustion.
So what am I doing wrong?
You have misunderstood how flash works; it is not, for example, as straightforward as writing EEPROM. The behaviour you are describing is normal for flash.
To repeatedly write the same flash address, the whole sector must first be erased using FLASH_EraseSector. Generally, any data that needs to be preserved during this erase must either be buffered in RAM or kept in another flash sector.
If you are repeatedly writing a small block of data and are worried about flash burnout due to too many erase/write cycles, you would want to write an interface to the flash where, on each write, you move your data along the sector to unwritten flash, keeping track of its current offset from the start of the sector. Only when you run out of bytes in the sector would you need to erase and start again at the start of the sector. A sketch of this scheme is below.
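A sketch of that sliding-write scheme, using the same SPL calls as the question (the sector constants are just an example for the last 128 KB sector of an STM32F401xC, and the NO_ERROR/FLASH_ERROR values are assumed since the question doesn't give them):
#include "stm32f4xx_flash.h"

#define SECTOR_ID      FLASH_Sector_5        /* example: last sector on a 256 KB F401 */
#define SECTOR_START   0x08020000u
#define SECTOR_SIZE    (128u * 1024u)
#define RECORD_SIZE    4u                    /* bytes written per record */
#define NO_ERROR       0u                    /* return codes as in the question (values assumed) */
#define FLASH_ERROR    1u

static uint32_t nextFree = SECTOR_START;     /* first unwritten byte in the sector */

uint8_t AppendRecord(const uint8_t* data)
{
    uint32_t i;

    if (nextFree + RECORD_SIZE > SECTOR_START + SECTOR_SIZE) {
        /* Sector exhausted: copy anything that must survive to RAM first,
         * then erase and start over at the beginning of the sector. */
        FLASH_Unlock();
        FLASH_EraseSector(SECTOR_ID, VoltageRange_3);
        FLASH_Lock();
        nextFree = SECTOR_START;
    }

    FLASH_Unlock();
    for (i = 0; i < RECORD_SIZE; ++i) {
        if (FLASH_ProgramByte(nextFree + i, data[i]) != FLASH_COMPLETE) {
            FLASH_Lock();
            return FLASH_ERROR;
        }
    }
    FLASH_Lock();

    nextFree += RECORD_SIZE;
    return NO_ERROR;
}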
ST's "right way" is detailed in AN3969: EEPROM emulation in STM32F40x/STM32F41x microcontrollers
This is more or less the process:
Reserve two Flash pages
Write the latest data to the next available location along with its 'EEPROM address'
When you run out of room on the first page, write all of the latest values to the second page and erase the first
Begin writing values where you left off on page 2
When you run out of room on page 2, repeat on page 1
This is insane, but I didn't come up with it.
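A stripped-down illustration of the record idea behind that process (this is not ST's AN3969 code; the page address, the page size, and the use of 0xFFFF as the "never written" marker are assumptions for illustration):
#include <stdint.h>

/* Each update appends a (virtual address, value) pair to the active page;
 * reads walk backwards so the newest pair wins. Page swapping and erasing,
 * the messy part of AN3969, are left out here. */
typedef struct {
    uint16_t virtAddr;                       /* which logical "EEPROM" variable  */
    uint16_t value;                          /* its value at the time of writing */
} EeRecord;

#define PAGE_BASE    0x08020000u             /* illustration only */
#define PAGE_SIZE    (16u * 1024u)
#define MAX_RECORDS  (PAGE_SIZE / sizeof(EeRecord))

int EeRead(uint16_t virtAddr, uint16_t *out)
{
    const EeRecord *page = (const EeRecord *)PAGE_BASE;
    int i;

    for (i = (int)MAX_RECORDS - 1; i >= 0; --i) {
        if (page[i].virtAddr == virtAddr) {  /* 0xFFFF (erased) is reserved */
            *out = page[i].value;
            return 1;                        /* newest value found */
        }
    }
    return 0;                                /* variable never written */
}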
I have a working and tested solution, but it is rather different from @Ricibob's answer, so I decided to make this an answer.
Since my user can write anywhere in selected flash sectors, my application cannot take on the responsibility of erasing the sector when needed while buffering to RAM only the data that needs to be preserved.
As a result, I transferred to my user the responsibility of erasing the sector when a write to it doesn't work (this way, the user remains free to use another address in the sector to avoid too many write-erase cycles).
Solution
Basically, I expose a write(uint32_t startAddress, uint8_t count, uint8_t* values) function that has a WRITE_SUCCESSFUL return code and a CANNOT_WRITE_FLASH in case of failure.
I also provide my user with a getSector(uint32_t address) function that returns the id, start address and end address of the sector corresponding to the address passed as a parameter. This way, the user knows what range of address is affected by the erase operation.
Lastly, I expose an eraseSector(uint8_t sectorID) function that erases the flash sector whose id has been passed as a parameter.
Erase Policy
The policy for a failed write is different from @Ricibob's suggestion of "erase if the value in flash is different from FF", as the Flash programming manual documents that a write will succeed as long as it only resets bits (which matches the behavior I observed in the question):
Note: Successive write operations are possible without the need of an erase operation when
changing bits from ‘1’ to ‘0’.
Writing ‘1’ requires a Flash memory erase operation.
If an erase and a program operation are requested simultaneously, the erase operation is
performed first.
So I use the macro CAN_WRITE(a,b), where a is the original value in flash and b the desired value. The macro is defined as:
!(~a & b)
which works because:
the logical not (!) will transform 0 to true and everything else to false, so ~a & b must equal 0 for the macro to be true;
any bit at 1 in a is at 0 in ~a, so the AND result for that bit will be 0 whatever its value in b is (you can turn a 1 into either 1 or 0);
if a bit is 0 in a, then it is 1 in ~a; if the corresponding bit of b is 1 then ~a & b != 0 and we cannot write, and if b equals 0 it's OK (you can turn a 0 into 0 only, not into 1). A small worked example follows below.
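For concreteness, here are the values from the question run through the macro (a small test, nothing more; the extra parentheses are just macro hygiene):
#define CAN_WRITE(a, b)  (!(~(a) & (b)))

int canWriteExamples(void)
{
    int ok1 = CAN_WRITE(0x7F, 0x7E); /* 1: only clears bit 0, programming works       */
    int ok2 = CAN_WRITE(0x7F, 0x00); /* 1: clearing every bit is always allowed       */
    int ok3 = CAN_WRITE(0x7F, 0xFF); /* 0: bit 7 would have to go 0 -> 1, needs erase */
    return ok1 && ok2 && !ok3;       /* evaluates to 1                                */
}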
List of flash sectors in the STM32F4
Lastly and for future reference (as it is not that easy to find), the list of sectors of flash in STM32 can be found on page 7 of the Flash programming manual.

Gstreamer 1.0 Pause signal

I need to detect when the currently playing audio/video is paused. I cannot find anything for 1.0. My app is a bit complex, but here is the condensed code:
/* This function is called when the pipeline changes states. We use it to
* keep track of the current state. */
static void state_changed_cb(GstBus *bus, GstMessage *msg, CustomData *data)
{
GstState old_state, new_state, pending_state;
gst_message_parse_state_changed(msg, &old_state, &new_state, &pending_state);
if(GST_MESSAGE_SRC(msg) == GST_OBJECT(data->playbin))
{
g_print("State set to %s\n", gst_element_state_get_name(new_state));
}
}
gst_init(&wxTheApp->argc, &argv);
m_playbin = gst_element_factory_make("playbin", "playbin");
if(!m_playbin)
{
g_printerr("Not all elements could be created.\n");
exit(1);
}
CustomData* data = new CustomData(xid, m_playbin);
GstBus *bus = gst_element_get_bus(m_playbin);
gst_bus_set_sync_handler(bus, (GstBusSyncHandler) create_window, data, NULL);//here I do video overly stuffs
g_signal_connect (G_OBJECT (bus), "message::state-changed", (GCallback)state_changed_cb, &data);
What do I do wrong? I cannot find a working example of connecting such events in GStreamer 1.0, and 0.x seems a bit different from 1.0, so the vast number of examples there don't help.
UPDATE
I have found a way to get signals. I run wxWidgets timer with 500ms time span and each time timer fires I call
GstMessage* msg = gst_bus_pop(m_bus);
if(msg!=NULL)
{
g_print ("New Message -- %s\n", gst_message_type_get_name(msg->type));
}
Now I get a lot of 'state-changed' messages. Still, I want to know whether a given message is for Pause or Stop or Play or End of Media (I mean a way to differentiate which message it is) so that I can notify the UI.
So while I get messages now, the basic problem of getting specific signals remains unsolved.
You have to call gst_bus_add_signal_watch() (like in 0.10) to enable emission of the signals. Without that you can only use the other ways to get notified about GstMessages on that bus.
Also, just to be sure, you need a running GLib main loop on the default main context for this to work. Otherwise you need to do things a bit differently.
For the updated question:
Check the documentation: gst_message_parse_state_changed() can be used to parse the old, new and pending state from the message. This is still the same as in 0.10. From the application point of view, conceptually not much has really changed between 0.10 and 1.0.
Also, you shouldn't do this timeout waiting, as it will block your wxWidgets main loop. The easiest solution would be to use a sync bus handler (which you already have) and dispatch all messages from there to some callback on the wxWidgets main loop. A minimal sketch is below.
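Putting both points together, a minimal self-contained sketch (the file URI and the plain GMainLoop are illustrative stand-ins; in the wxWidgets app the messages would instead be dispatched from the sync handler to the UI loop as described above):
#include <gst/gst.h>

static void state_changed_cb(GstBus *bus, GstMessage *msg, gpointer user_data)
{
    GstElement *playbin = GST_ELEMENT(user_data);
    GstState old_state, new_state, pending_state;

    if (GST_MESSAGE_SRC(msg) != GST_OBJECT(playbin))
        return;                               /* only the playbin's own changes */

    gst_message_parse_state_changed(msg, &old_state, &new_state, &pending_state);
    if (old_state == GST_STATE_PLAYING && new_state == GST_STATE_PAUSED)
        g_print("Paused\n");
    else if (new_state == GST_STATE_PLAYING)
        g_print("Playing\n");
}

static void eos_cb(GstBus *bus, GstMessage *msg, gpointer loop)
{
    g_print("End of media\n");
    g_main_loop_quit((GMainLoop *)loop);
}

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GstElement *playbin = gst_element_factory_make("playbin", "playbin");
    g_object_set(playbin, "uri", "file:///path/to/some/file.mp4", NULL);

    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    GstBus *bus = gst_element_get_bus(playbin);
    gst_bus_add_signal_watch(bus);            /* enables message::* signal emission */
    g_signal_connect(bus, "message::state-changed", G_CALLBACK(state_changed_cb), playbin);
    g_signal_connect(bus, "message::eos", G_CALLBACK(eos_cb), loop);

    gst_element_set_state(playbin, GST_STATE_PLAYING);
    g_main_loop_run(loop);

    gst_element_set_state(playbin, GST_STATE_NULL);
    gst_object_unref(bus);
    gst_object_unref(playbin);
    g_main_loop_unref(loop);
    return 0;
}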