I want to implement an OTA (over-the-air) update function on an STM32F4 using the HAL library.
I already have an LTE module that downloads the upgrade binary and forwards it to the STM32F4 over UART.
The problem is that I know I can receive the binary into RAM using HAL_UART_Receive_DMA(), and I know I can use HAL_FLASH_Program() to write data from RAM to internal flash, but I don't really want to implement it this way, since the STM32F4's RAM is small.
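The only alternative I can think of is to receive the image in small chunks and program each chunk as it arrives, roughly like this (a sketch, untested; huart1, the destination address, the prior sector erase, and image_size being a multiple of the chunk size are all assumptions on my part):

    /* Sketch (untested): stage only 256 bytes in RAM at a time.
       Assumes the destination sector was erased beforehand with
       HAL_FLASHEx_Erase(), dest_addr is word-aligned, and image_size
       is a multiple of the chunk size. */
    #include <string.h>
    #include "stm32f4xx_hal.h"

    extern UART_HandleTypeDef huart1;   /* assumed UART handle */

    void ota_uart_to_flash(uint32_t dest_addr, uint32_t image_size)
    {
        uint8_t chunk[256];
        HAL_FLASH_Unlock();
        for (uint32_t off = 0; off < image_size; off += sizeof chunk) {
            if (HAL_UART_Receive(&huart1, chunk, sizeof chunk, HAL_MAX_DELAY) != HAL_OK)
                break;                  /* TODO: real error handling */
            for (uint32_t i = 0; i < sizeof chunk; i += 4) {
                uint32_t word;
                memcpy(&word, &chunk[i], sizeof word);
                HAL_FLASH_Program(FLASH_TYPEPROGRAM_WORD, dest_addr + off + i, word);
            }
        }
        HAL_FLASH_Lock();
    }

But that still stages the data in RAM, just a small chunk at a time.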
Is there any way to write directly from UART to internal flash?
Thanks
Related
So I have been trying for an (embarrassingly) long time to efficiently stream video from my Pi to my computer over the internet. Video encoding and streaming still look fuzzy to me, since most examples hide the things that happen behind the scenes.
Things I've tried:
OpenCV capture, encode as JPEG, send through a socket, and decode - I get a working video feed, but the data usage is crazy.
using the Picamera module, capture and send over a socket, receive and pipe into VLC - this also worked, and data usage is low, but the latency is even higher than with the first approach. Also, I want to play the video in my own pyQt UI.
using the Picamera module, capture and send over a socket, pipe into FFmpeg, and try to use OpenCV to capture from its stdout - I got "bad argument" errors from the cv2.VideoCapture(stdout) call, presumably because VideoCapture expects a filename, device index, or URL rather than a pipe object.
All I want is an efficient way to transmit video with low bandwidth and latency using h264, with no pre-written protocols like RTMP and no servers to install, and then blast it onto my pyQt UI. Something like:
data = video.capture().encode('h264')  # pseudocode: one encoded frame as bytes
udp_socket.sendto(data, (ip, port))
And on the receiving side:
data, addr = udp_socket.recvfrom(65534)  # recvfrom returns (bytes, sender_address)
frame = decode_h264(data)                # pseudocode: an h264 decode, not str.decode()
update_frame(frame)
The closest thing I got to this was with the first approach.
Any help would be greatly appreciated.
I have a camera (camera + TW8834 video processor) sending video data over the BT.656 protocol, but the camera interface has no I2C on the cable, so it is not recognized by the Linux video driver. Is there some way to capture video from my camera in Linux without modifying the video driver?
I got advice to make a fake probe in the existing video driver (sun6i), but I don't think that is the optimal way. Maybe I can write some driver or something similar that will intercept the I2C messaging with my camera? Can I use it with the default video driver that way?
Or maybe there is some other way?
What should I learn to solve my problem?
I solved my problem with a driver and DeviceTree modification. The OS is Armbian: easy to build, easy to make and apply patches, and it has a big community. A course about kernel hacking that was useful for me: https://courses.linuxchix.org/kernel-hacking-2002.html
Creating the solution took my colleague and me about four days, and both of us were inexperienced with kernel modification beforehand. I hope this information will help someone with a similar problem.
I'm somewhat of a beginner with GStreamer.
I am trying to stream live video from a Raspberry Pi to an MSP430 MCU over an SPI connection.
What I am thinking right now is to get the buffers directly on the Raspberry Pi using appsink, and then send them over the SPI connection.
But I am not sure where the buffers are stored, or how they are stored.
I'm looking for examples using appsink, but I'm not really sure whether I can continuously get a stream of buffers.
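For concreteness, this is roughly the loop I have in mind (a sketch based on the appsink API, untested; videotestsrc stands in for the real camera source, and the caps/sizes are arbitrary):

    /* Pull frames from an appsink in a loop; each mapped buffer could then
       be written to the SPI device (e.g. write(spi_fd, map.data, map.size)).
       Build with: pkg-config --cflags --libs gstreamer-app-1.0 */
    #include <gst/gst.h>
    #include <gst/app/gstappsink.h>

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);
        GstElement *pipeline = gst_parse_launch(
            "videotestsrc ! videoconvert ! "
            "video/x-raw,format=RGB,width=160,height=120 ! "
            "appsink name=sink max-buffers=4 drop=true", NULL);
        GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        for (;;) {
            /* Blocks until the next buffer is ready; returns NULL at EOS. */
            GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
            if (!sample)
                break;
            GstBuffer *buffer = gst_sample_get_buffer(sample);
            GstMapInfo map;
            if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
                /* map.data / map.size: raw frame bytes, send over SPI here */
                gst_buffer_unmap(buffer, &map);
            }
            gst_sample_unref(sample);
        }
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(sink);
        gst_object_unref(pipeline);
        return 0;
    }

If that is right, the buffers never need to be saved anywhere: you pull one sample at a time, map it, push the bytes out over SPI, and unref it.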
Is there any way that I can do that?
Any other, better way to stream video over an SPI connection would also be helpful.
Thank you.
I'm interested in whether there is any web server application that lets you stream HD-quality live chat without a software application installed on the streaming client's PC. Flash Media Server allows HD streaming, but the video must be encoded prior to that, which makes it impossible to work with, as every streaming client would need to install and handle their own encoding software.
Thanks
I'm not sure what you mean about the installation of software. Naturally you will need to encode your video before it can be streamed, and the client needs some piece of software to play it. That can be software that is already present, of course, like Flash or Silverlight.
You might want to look at IIS Smooth Streaming. It should be able to handle HD quality streaming very well (there's a demo on the site).
First, some background:
I'm developing a Silverlight 3 application and want to add support for live streaming (webcam + microphone as input). Unfortunately, Silverlight cannot access a webcam or a microphone itself, so I need to create a stand-alone application for establishing the media stream. I guess Silverlight would work best with Microsoft technology, so I want to use the ASF format with WMV/WMA encoding.
After doing some research, here is what I think I could do:
It seems it is possible to capture both webcam and microphone input with DirectShow and then combine them into one "stream".
To encode the stream, I probably need to pass it to the Windows Media Format SDK libraries (MSDN documentation describes how to use DirectShow with WM ASF Writer).
I think it should then be possible to use something like a "Network Sink" to broadcast the ASF stream without writing it to the HDD (see the sketch below).
I guess that connecting lots of clients to the stream would be quite heavy on bandwidth, so I should probably send the stream to a server and broadcast it from there. I just don't know if it's possible to use a combination of ASF Reader/Writer to "pass" the stream through the server. I also don't know if I could use multicasting to achieve a similar result.
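From the Windows Media Format SDK documentation, I think the network-sink part would look roughly like this on the native side (a sketch, untested; error handling, profile setup, and feeding samples from the DirectShow graph are all omitted):

    // Sketch (untested): create a WM writer and attach a network sink so the
    // ASF stream is served from a local port instead of being written to disk.
    #include <wmsdk.h>
    #pragma comment(lib, "wmvcore.lib")

    HRESULT StartNetworkWriter(IWMWriter **ppWriter, DWORD *pdwPort)
    {
        IWMWriter *pWriter = NULL;
        HRESULT hr = WMCreateWriter(NULL, &pWriter);
        if (FAILED(hr)) return hr;

        IWMWriterAdvanced *pAdvanced = NULL;
        hr = pWriter->QueryInterface(IID_IWMWriterAdvanced, (void **)&pAdvanced);

        IWMWriterNetworkSink *pNetSink = NULL;
        if (SUCCEEDED(hr)) hr = WMCreateWriterNetworkSink(&pNetSink);
        if (SUCCEEDED(hr)) hr = pNetSink->Open(pdwPort);   // listens on this port
        if (SUCCEEDED(hr)) hr = pAdvanced->AddSink(pNetSink);

        // Next: SetProfile(...), BeginWriting(), then write samples coming
        // out of the DirectShow capture graph.
        *ppWriter = pWriter;
        return hr;
    }

From C# this would presumably sit behind an interop wrapper, and clients (or a relay server) would then pull the stream from the port the sink opened.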
I'm planning to use C#, although this probably doesn't make much difference as I will have to use some wrappers for C++ libraries anyway (like DirectShow.Net or SlimDX).
Unfortunately, I have virtually no experience with handling media streams. So my first question is, is it even possible to do streaming in the way I described?
And if it IS possible, is it a sensible way or should I consider using some different libraries/frameworks?
While using DShow and/or the WMF SDK will give you the greatest amount of flexibility, if your only goal is to stream video/audio to Silverlight, you can use something like Windows Media Encoder 9, or you can use the new Expression Encoder. Both support streaming a live webcam and mic to a Windows Media Server publishing point, or they can host the stream on a local port. Both have an SDK that is available via .NET (WME uses COM interop, and Encoder has a native .NET API). This stream is compatible with Silverlight and Windows Media Player.