Why don't serial ports work properly in Unity? - unity3d

I need help, I'm desperate
For two weeks I have been working on my project, which uses serial port communication (a PIC serial board). I managed to set up the connection, but I cannot get any data from the COM port. I've read some forums, and the cause of the problem seems to be the incomplete implementation of the System.IO.Ports classes.
When I try to get data from the COM port, the DataReceived event of the SerialPort object (the one handled by a SerialDataReceivedEventHandler) is never raised. I tried to resolve it but couldn't find a definitive solution. I thought about trying an external DLL, but a friend told me the problem would persist; in fact I tried it and got the same result: the DataReceived event never fires. Someone also recommended using a secondary thread, although I don't really understand how to do that.
I wrote the same program as a plain Visual C# application and everything works fine, which puzzles me.
I need to find a solution, an idea, or good documentation. If anyone knows something about this, please help.
I need to understand the cause of this before I can continue.

Unity is based on Mono, and Mono doesn't completely implement the SerialPort class; in particular, the notification events (such as DataReceived) are not implemented.
That's why your code works as a plain Visual C# program but not in Unity.
Here are the differences between Mono's implementation of the SerialPort class and the complete .NET one:
Extract from http://www.mono-project.com/archived/howtosystemioports/#limitations
"Limitations
At the time of this writing, there are a few limitations that one must take note of:
1) There is no event notification for received serial data. If you want to receive data, one must set a timeout and watch for received data by polling ReadByte() when you think there might be data.
2) One must Read data in byte[] format only – there is no char[] support. You must do your own reading of bytes and translate that into your encoding.
3) DiscardNull, ParityReplace, ReceivedBytesThreshold are not implemented."
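Given that first limitation, the usual workaround in Unity is to poll the port from a background thread and hand the results to the main thread. Here is a minimal sketch of that pattern; the port name, baud rate, and newline-terminated framing are assumptions you would adapt to your PIC board (and in older Unity versions the API Compatibility Level must be set to ".NET 2.0", not the subset, for System.IO.Ports to be available at all):

using System.Collections.Generic;
using System.IO.Ports;
using System.Threading;
using UnityEngine;

public class SerialReader : MonoBehaviour
{
    // Port name and baud rate are placeholders; adjust to your board.
    SerialPort port = new SerialPort("COM3", 9600);
    Thread readThread;
    volatile bool running;
    readonly Queue<string> lines = new Queue<string>();
    readonly object queueLock = new object();

    void Start()
    {
        port.ReadTimeout = 100;   // ms; ReadByte throws TimeoutException when idle
        port.Open();
        running = true;
        readThread = new Thread(ReadLoop);
        readThread.IsBackground = true;
        readThread.Start();
    }

    void ReadLoop()
    {
        // Poll ReadByte() as the Mono notes suggest; DataReceived is never raised.
        var buffer = new System.Text.StringBuilder();
        while (running)
        {
            try
            {
                int b = port.ReadByte();
                if (b == '\n')
                {
                    lock (queueLock) { lines.Enqueue(buffer.ToString()); }
                    buffer.Length = 0;
                }
                else if (b != '\r')
                {
                    buffer.Append((char)b);
                }
            }
            catch (System.TimeoutException) { /* no data this time; keep polling */ }
        }
    }

    void Update()
    {
        // Consume on the main thread, where Unity API calls are allowed.
        lock (queueLock)
        {
            while (lines.Count > 0)
                Debug.Log("Serial: " + lines.Dequeue());
        }
    }

    void OnDestroy()
    {
        running = false;
        if (readThread != null) readThread.Join(200);
        if (port.IsOpen) port.Close();
    }
}

The key point is that every SerialPort call stays on the worker thread, and only the parsed results cross over to Update().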

I think it happens because Unity is based on Mono rather than .NET, and on a pretty old version of Mono at that. For a long time you couldn't use LINQ on iOS devices because of AOT bugs, and the localisation implementation is buggy (or at least it was in the earlier versions of Unity I worked with). I wasn't even able to find the source of System.IO.Ports in the source of Unity's Mono fork, so it's surprising it compiles at all.

Related

Re-program STM32F102 trouble

I am trying to make some custom firmware for a MIDI controller (AKAI LPD8).
There is an STM32F102R8T6 chip in the unit.
I am trying to reach it with a programmer to wipe it, but it seems to be unresponsive.
Some information and things I have tried:
The firmware that came with the unit works, so the chip is not broken
Removed the components connected to the programming pins (PA9-PA10 and PA13-PA14)
I am able to pull BOOT0 high so that the chip does not run the main program, but I cannot get any sign of life using either an ST-Link V2 (clone) connected to PA13/PA14 or a USB-to-serial adapter connected to PA9/PA10, so I am not sure what mode it is in.
The connections have been checked, and RX/TX etc. are the correct way around (but for the sake of trying everything, I reversed them as well...).
Tried both STM32CubeProgrammer and stm32flash, but neither connects.
I am actually not sure whether AKAI has locked the chip in such a way that you cannot even do a full chip erase and reuse it for something new. Strangely, pulling the NRST pin low doesn't seem to affect the running firmware either.
Is there a way to reprogram these chips when they come off of a commercial product, or are they permanently locked?
Any solution/tips?
Many of the STM32 parts have "proprietary code read-out protection" (google PCROP), but you might be lucky and find that they haven't enabled it in the option bytes. Read the documentation for that, and the bootloader documentation, and get a good idea of what you expect the chip to do both if it is enabled and if it isn't.
If you have a scope, try watching the SWD/JTAG pins to see if there is any response from the device. (If you aren't even sure if it is in reset then scope the crystal if there is one).
If you haven't got a scope, you might be able to verify what it is doing by seeing whether it sets the pins and pull resistors to how they would be expected to be in bootloader mode, e.g. UART TX should be high if it is enabled, even if it isn't transmitting anything. Put a strong pull-down (~1k) on there and see if it still reads high.
After hours of trying different ways of making it work (I also tried the alternate mapping of the UART port) and probing the TX pin as suggested by Tom V, all to no avail, I have given up working on that specific chip and ordered a replacement from the STM32F4 family instead. A lot more power and more useful peripherals.
A bit of a non-answer to the specific question. It is frustrating not to have found out what was wrong (the chip or my approach), but being mindful of the sunk-cost fallacy, I think it was best to just replace the chip with a fresh one and continue development from there.

Using EEPROM in STM32f10x

I'm using an STM32F103, and in my program I need to save some bytes in the internal flash memory. But as far as I know, I have to erase a whole page before writing to it, and that takes time.
This delay causes my display to blink.
Can anybody help me save my data without consuming so much time?
Here is a list that may help:
1- MCU: STM32F103
2- IDE: Keil µVision
3- Using the HAL driver provided by STM32CubeMX
4- Sample data to save in flash: {0x53, 0xa0, 0x01, 0x54}
In the link below you can find the code that I'm using.
FLASH_PAGE for Keil
The code you provide doesn't seem to be implemented well. It basically does two things each time you initiate a write operation:
Erase the page (this is the part that takes time)
Starting from the given pointer, write until it hits a zero.
This is a very inefficient way of using the flash.
Probably the simplest and the most well-known way is to use the method described in ST's AN2594, although it has some limitations.
Still, at some point a page erase will be necessary regardless of the method you use, and there is no way to avoid some delay unless your MCU supports dual flash banks (the STM32F103 doesn't have this feature). You need to plan the timing of flash writes and display refreshes accordingly. If you need frequent periodic writes to the flash, there is probably a higher-level error in your design.
To solve this problem, I used another library that ST itself provides: include "eeprom.h" in your project and then add "eeprom.c" to it. You can easily find these files on the Internet.
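For reference, a minimal sketch of how that emulated-EEPROM layer is typically used. The virtual addresses are arbitrary, and the exact names (VirtAddVarTab, NB_OF_VAR, EE_Init, EE_ReadVariable, EE_WriteVariable) should be checked against the copy of eeprom.h/eeprom.c you actually pull in:

#include "stm32f1xx_hal.h"
#include "eeprom.h"            /* ST's emulated-EEPROM sources (see AN2594) */

/* Virtual IDs for the persisted variables; eeprom.c expects this table,
   and NB_OF_VAR (defined in eeprom.h) must match its length. */
uint16_t VirtAddVarTab[NB_OF_VAR] = {0x1111, 0x2222, 0x3333};

void settings_init(void)
{
    HAL_FLASH_Unlock();
    EE_Init();                 /* restores or repairs the emulation pages */
}

void settings_store(uint16_t value)
{
    /* Appends a new copy of the variable; the slow page erase only happens
       when the active page fills up and has to be transferred. */
    EE_WriteVariable(VirtAddVarTab[0], value);
}

uint16_t settings_load(void)
{
    uint16_t value = 0;
    EE_ReadVariable(VirtAddVarTab[0], &value);
    return value;
}

Because erases only happen when a page fills, most writes are fast; you still need to schedule the occasional page transfer somewhere it won't disturb the display.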

Read from a serial port in Swift 4 using ORSSerialPort

I've been wanting to make an app that sends instructions over serial to my LED controller. For this to work, I need to read what the controller sends back after sending it a command. I found the following function in ORSSerialPort:
func serialPort(_ serialPort: ORSSerialPort, didReceive data: Data) {
    // Do things
}
However, is there something like ORSSerialPort.read()?
I don't think ORSSerialPort.read() is a good idea. I know some other serial libraries are written that way, but the only way for that to work is for read() to block (possibly with a timeout) until a byte comes in on the port. Blocking I/O makes it a lot harder to write a good, responsive app, and I want to guide developers using ORSSerialPort away from that approach.
Instead, you should indeed implement serialPort(_:, didReceive:) in your ORSSerialPort delegate. When data is received by the serial port, that method will be called with the received data and you can do whatever you'd like with it.
That said, if your device communicates using a command/response type protocol (ie. every time you send a command, the device sends some response), you ought to look at ORSSerialPort's request/response API. It allows you to explicitly define the format of expected responses to commands, and ORSSerialPort itself will handle asynchronously waiting for, parsing, and validating responses. See the documentation for more info about this part of ORSSerialPort. The library also includes a sample project, RequestResponseDemo, that demonstrates using this API. Both Swift and Objective-C versions are included.
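For what it's worth, here is a rough sketch of the delegate-based approach described above. The port path and baud rate are placeholders, and it assumes the framework is imported as the ORSSerial module:

import ORSSerial

class LEDController: NSObject, ORSSerialPortDelegate {
    var port: ORSSerialPort?

    func connect() {
        // Path and baud rate are placeholders for your LED controller.
        port = ORSSerialPort(path: "/dev/cu.usbserial-A600JEDU")
        port?.baudRate = 9600
        port?.delegate = self
        port?.open()
    }

    func sendCommand(_ command: Data) {
        port?.send(command)
    }

    // Called whenever bytes arrive; accumulate or parse them here.
    func serialPort(_ serialPort: ORSSerialPort, didReceive data: Data) {
        if let text = String(data: data, encoding: .ascii) {
            print("Controller replied: \(text)")
        }
    }

    // Required by ORSSerialPortDelegate.
    func serialPortWasRemovedFromSystem(_ serialPort: ORSSerialPort) {
        port = nil
    }
}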
The ORSSerialPort library is popular and generally good. However, I'd found that it didn't work well with TTY serial devices. This was primarily because of its use of IOKit to discover serial ports -- it would only discover physical devices.
This is likely OK in your case, but when you want to test your code without connecting to a physical device, it falls over. Good code always needs a testing framework. So check out https://github.com/kpishere/POSIXSerialPort for a very simple serial interface API; it is just what you need to write to the port and respond to incoming data, and it works with physical or virtual devices (as Unix originally envisioned!).
To your question though: you don't want to call read() directly. You get into questions like "is this a blocking read?", and then into dealing with threads. Both of the suggested APIs insulate you from that and let you think in terms of an event-driven model -- this makes for much simpler code.

Best way to pass binary data (YUV Buffer) from plugin to browser

What is the best way to transfer binary data from a plugin to the browser?
We want to play a YUV buffer received from the network in a browser tab.
Currently I am converting it to base64 and passing it via a callback, but this is not efficient and I am seeing the issues below:
1> CPU and memory usage go up.
2> Callback events are not delivered when we change browser tabs; all the events arrive in one shot when moving back to our tab.
I would also like to know whether there is any way to draw the YUV frame directly in the browser from the plugin thread itself.
Thanks in advance.
NPAPI has been removed from most major browsers... the last holdout, Safari, will be removing it as of macOS Mojave. That being the case, don't expect any updates of any kind to the spec -- however you're using it, it is likely a dying method.
On Windows there is a method (a super hack, really) that you can use to draw directly to the window in the browser from a native messaging extension, but it's not portable and it depends on internal implementation details. I haven't actually looked into it since I wrote that other answer (linked in this paragraph), so I don't know whether it still works.
Anyway, if you're on a browser which fully supports NPAPI then you could draw the YUV data directly to the plugin window given to you by the browser; there is an example of blitting image data in FireBreath which you could trace through as an example.
You could also try some variation of listening on a TCP port in the plugin and connecting to it from the browser; you could easily run into some security issues there, but it is the only other method I can think of.
NPAPI simply wasn't ever designed to allow fast transfer of data between the plugin and the browser; I submitted a proposal to add that capability years ago but it was basically too close to the death of NPAPI (which is basically past at this point) for it to go anywhere. The issues you're seeing are 100% consistent with what I would expect, though... and it's still the best way I know.

iPhone Remote IO Issues

I've been playing around with the SDK recently, and I had an idea to just build a personal autotuner (because I am just as awesome as T-Pain).
Intro aside, I wanted to attach a high-quality microphone to the headphone jack, have my audio processed in a callback, and then have it copied to the output buffer. This has several implications:
When my audio-in is being routed through the built-in microphone, I need to be able to process this input, and send it once my input has stopped (this works).
When my audio-in is being routed through the microphone-in input from the headset jack, I want the output to be sent immediately.
Routing, however, doesn't seem to work properly when using AudioSession modes and overrides, which technically should allow you to reroute output to the iPhone speakers, no matter where the input is coming from. This is documented to work, but in practice, doesn't really work.
Remote IO, however, is not documented at all. Anyone with experience using Remote IO audio units, can you give me a reasonable high-level overview on how to do this properly? I have been using the aurioTouch example code, but I am running into errors where I get error codes like -50 and -10863, none of which are documented.
Thanks in advance.
The aurioTouch example implements RemoteIO play-through: it simply calls AudioUnitRender in the output render callback, and you could modify the samples before passing them on.
NB: this trick does not seem to work if you port the code to OSX-style Core Audio. There, 99% of the time, you need to create two AUHALs (RemoteIO-a-likes) and pass the samples between them.
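For what it's worth, a rough sketch of that play-through pattern: a render callback registered on the RemoteIO output element pulls the microphone samples from input element 1 with AudioUnitRender and processes them in place. It assumes the unit has already been configured with input enabled and a 16-bit integer stream format; the processing loop is just a placeholder:

#include <AudioUnit/AudioUnit.h>

// Output render callback: pull mic samples from the input element (bus 1)
// straight into the output buffers (bus 0), processing them in between.
static OSStatus PlayThroughCallback(void *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp *inTimeStamp,
                                    UInt32 inBusNumber,
                                    UInt32 inNumberFrames,
                                    AudioBufferList *ioData)
{
    AudioUnit rioUnit = (AudioUnit)inRefCon;   // the RemoteIO unit itself

    OSStatus err = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp,
                                   1 /* input element */, inNumberFrames, ioData);
    if (err != noErr) return err;

    // Process the samples in place before they are played out.
    for (UInt32 b = 0; b < ioData->mNumberBuffers; b++) {
        SInt16 *samples = (SInt16 *)ioData->mBuffers[b].mData;
        UInt32 count = ioData->mBuffers[b].mDataByteSize / sizeof(SInt16);
        for (UInt32 i = 0; i < count; i++) {
            samples[i] = samples[i];   // identity; replace with pitch correction etc.
        }
    }
    return noErr;
}

// Registering the callback on the output element (bus 0) looks like:
//   AURenderCallbackStruct cb = { PlayThroughCallback, rioUnit };
//   AudioUnitSetProperty(rioUnit, kAudioUnitProperty_SetRenderCallback,
//                        kAudioUnitScope_Input, 0, &cb, sizeof(cb));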